mirror of https://github.com/Arrowar/StreamingCommunity.git
synced 2025-06-05 02:55:25 +00:00

v1.0

This commit is contained in:
parent 869b4c57cf
commit 172b09ea46

41	.gitignore (vendored)
@@ -28,35 +28,22 @@ MANIFEST
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Jupyter Notebook
.ipynb_checkpoints

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Other
Video
note.txt
list_proxy.txt
cmd.txt
downloaded_files

# Cache
__pycache__/
**/__pycache__/

# Ignore node_modules directory in the client dashboard to avoid committing dependencies
/client/dashboard/node_modules

# Ignore build directory in the client dashboard to avoid committing build artifacts
/client/dashboard/build

# For __pycache__ -> run: pyclean .
key.t
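The `.gitignore` comment above points at `pyclean .` for clearing `__pycache__` directories. A minimal sketch of that cleanup using only `find` (the `demo_pkg` path is a hypothetical example; `pyclean` itself is a third-party tool installed separately via pip):

```shell
# The repo's .gitignore comment suggests `pyclean .` (third-party tool) for this;
# the portable equivalent below needs only find.
mkdir -p demo_pkg/__pycache__            # simulate a stale bytecode cache (demo path)
find demo_pkg -type d -name '__pycache__' -prune -exec rm -rf {} +
```

Ignoring `__pycache__/` in git and periodically pruning it locally keeps compiled bytecode out of both the index and the working tree.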
674	LICENSE
@@ -1,674 +0,0 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007

Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.

Preamble

The GNU General Public License is a free, copyleft license for
software and other kinds of works.

The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.

When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.

For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.

Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.

For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.

Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.

Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.

The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS

0. Definitions.

"This License" refers to version 3 of the GNU General Public License.

"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.

To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

A "covered work" means either the unmodified Program or a work based
on the Program.

To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.

The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.

A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.

All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.

3. Protecting Users' Legal Rights From Anti-Circumvention Law.

No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.

You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

5. Conveying Modified Source Versions.

You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.

b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".

c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.

d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.

A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.

You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.

b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.

c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.

d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.

e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.

A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.

"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or

b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or

c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or

d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or

e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or

f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.

All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.

You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.

9. Acceptance Not Required for Having Copies.

You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.

Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.

An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.

You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.

A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".

A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.

Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.

In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.

If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.

If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.

A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.

Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
|
||||
|
||||
If conditions are imposed on you (whether by court order, agreement or
|
||||
otherwise) that contradict the conditions of this License, they do not
|
||||
excuse you from the conditions of this License. If you cannot convey a
|
||||
covered work so as to satisfy simultaneously your obligations under this
|
||||
License and any other pertinent obligations, then as a consequence you may
|
||||
not convey it at all. For example, if you agree to terms that obligate you
|
||||
to collect a royalty for further conveying from those to whom you convey
|
||||
the Program, the only way you could satisfy both those terms and this
|
||||
License would be to refrain entirely from conveying the Program.
|
||||
|
||||
13. Use with the GNU Affero General Public License.
|
||||
|
||||
Notwithstanding any other provision of this License, you have
|
||||
permission to link or combine any covered work with a work licensed
|
||||
under version 3 of the GNU Affero General Public License into a single
|
||||
combined work, and to convey the resulting work. The terms of this
|
||||
License will continue to apply to the part which is the covered work,
|
||||
but the special requirements of the GNU Affero General Public License,
|
||||
section 13, concerning interaction through a network will apply to the
|
||||
combination as such.
|
||||
|
||||
14. Revised Versions of this License.
|
||||
|
||||
The Free Software Foundation may publish revised and/or new versions of
|
||||
the GNU General Public License from time to time. Such new versions will
|
||||
be similar in spirit to the present version, but may differ in detail to
|
||||
address new problems or concerns.
|
||||
|
||||
Each version is given a distinguishing version number. If the
|
||||
Program specifies that a certain numbered version of the GNU General
|
||||
Public License "or any later version" applies to it, you have the
|
||||
option of following the terms and conditions either of that numbered
|
||||
version or of any later version published by the Free Software
|
||||
Foundation. If the Program does not specify a version number of the
|
||||
GNU General Public License, you may choose any version ever published
|
||||
by the Free Software Foundation.
|
||||
|
||||
If the Program specifies that a proxy can decide which future
|
||||
versions of the GNU General Public License can be used, that proxy's
|
||||
public statement of acceptance of a version permanently authorizes you
|
||||
to choose that version for the Program.
|
||||
|
||||
Later license versions may give you additional or different
|
||||
permissions. However, no additional obligations are imposed on any
|
||||
author or copyright holder as a result of your choosing to follow a
|
||||
later version.
|
||||
|
||||
15. Disclaimer of Warranty.
|
||||
|
||||
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
|
||||
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
|
||||
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
|
||||
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
|
||||
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
|
||||
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
|
||||
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
|
||||
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
|
||||
|
||||
16. Limitation of Liability.
|
||||
|
||||
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
|
||||
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
|
||||
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
|
||||
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
|
||||
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
|
||||
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
|
||||
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
|
||||
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
|
||||
SUCH DAMAGES.
|
||||
|
||||
17. Interpretation of Sections 15 and 16.
|
||||
|
||||
If the disclaimer of warranty and limitation of liability provided
|
||||
above cannot be given local legal effect according to their terms,
|
||||
reviewing courts shall apply local law that most closely approximates
|
||||
an absolute waiver of all civil liability in connection with the
|
||||
Program, unless a warranty or assumption of liability accompanies a
|
||||
copy of the Program in return for a fee.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
How to Apply These Terms to Your New Programs
|
||||
|
||||
If you develop a new program, and you want it to be of the greatest
|
||||
possible use to the public, the best way to achieve this is to make it
|
||||
free software which everyone can redistribute and change under these terms.
|
||||
|
||||
To do so, attach the following notices to the program. It is safest
|
||||
to attach them to the start of each source file to most effectively
|
||||
state the exclusion of warranty; and each file should have at least
|
||||
the "copyright" line and a pointer to where the full notice is found.
|
||||
|
||||
<one line to give the program's name and a brief idea of what it does.>
|
||||
Copyright (C) <year> <name of author>
|
||||
|
||||
This program is free software: you can redistribute it and/or modify
|
||||
it under the terms of the GNU General Public License as published by
|
||||
the Free Software Foundation, either version 3 of the License, or
|
||||
(at your option) any later version.
|
||||
|
||||
This program is distributed in the hope that it will be useful,
|
||||
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
GNU General Public License for more details.
|
||||
|
||||
You should have received a copy of the GNU General Public License
|
||||
along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||
|
||||
Also add information on how to contact you by electronic and paper mail.
|
||||
|
||||
If the program does terminal interaction, make it output a short
|
||||
notice like this when it starts in an interactive mode:
|
||||
|
||||
<program> Copyright (C) <year> <name of author>
|
||||
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
|
||||
This is free software, and you are welcome to redistribute it
|
||||
under certain conditions; type `show c' for details.
|
||||
|
||||
The hypothetical commands `show w' and `show c' should show the appropriate
|
||||
parts of the General Public License. Of course, your program's commands
|
||||
might be different; for a GUI interface, you would use an "about box".
|
||||
|
||||
You should also get your employer (if you work as a programmer) or school,
|
||||
if any, to sign a "copyright disclaimer" for the program, if necessary.
|
||||
For more information on this, and how to apply and follow the GNU GPL, see
|
||||
<https://www.gnu.org/licenses/>.
|
||||
|
||||
The GNU General Public License does not permit incorporating your program
|
||||
into proprietary programs. If your program is a subroutine library, you
|
||||
may consider it more useful to permit linking proprietary applications with
|
||||
the library. If this is what you want to do, use the GNU Lesser General
|
||||
Public License instead of this License. But first, please read
|
||||
<https://www.gnu.org/licenses/why-not-lgpl.html>.
|
MANIFEST.in
@ -1,6 +0,0 @@
include README.md
include LICENSE
include requirements.txt
include config.py
recursive-include Test/ *
recursive-include StreamingCommunity/ *.py
5	Makefile
@ -1,5 +0,0 @@
build-container:
	docker build -t streaming-community-api .

run-container:
	docker run --rm -it -p 8000:8000 -v ${LOCAL_DIR}:/app/Video -v ./config.json:/app/config.json streaming-community-api
429	README.md
@ -1,412 +1,17 @@
# StreamingCommunity Downloader 🎬

![Project Logo](https://i.ibb.co/v6RnT0w/s9power-ascii-art-of-a-streaming-tv-with-popcorn-beside-it-0a361cd1-ffc7-4faf-9b2f-b9e0f6ea1be6.png)

A versatile script designed to download films and series from various supported streaming platforms.

# 🤝 Join our Community

Chat, contribute, and have fun in our **Git_StreamingCommunity** Discord [Server](https://discord.com/invite/8vV68UGRc7)

# 📋 Table of Contents

- [Website available](#website-status)
- [Installation](#installation)
  - [PyPI Installation](#1-pypi-installation)
  - [Automatic Installation](#2-automatic-installation)
  - [Manual Installation](#3-manual-installation)
  - [Win 7](https://github.com/Ghost6446/StreamingCommunity_api/wiki/Installation#win-7)
  - [Termux](https://github.com/Ghost6446/StreamingCommunity_api/wiki/Termux)
- [Configuration](#configuration)
  - [Default](#default-settings)
  - [Request](#requests-settings)
  - [Browser](#browser-settings)
  - [Download](#m3u8_download-settings)
  - [Parser](#m3u8_parser-settings)
- [Docker](#docker)
- [Tutorial](#tutorials)
- [To Do](#to-do)

# Installation

## 1. PyPI Installation

Install directly from PyPI:

```bash
pip install StreamingCommunity
```

### Creating a Run Script

Create `run_streaming.py`:

```python
from StreamingCommunity.run import main

if __name__ == "__main__":
    main()
```

Run the script:

```bash
python run_streaming.py
```

## Updating via PyPI

```bash
pip install --upgrade StreamingCommunity
```

## 2. Automatic Installation

### Supported Operating Systems 💿

| OS              | Automatic Installation Support |
|:----------------|:------------------------------:|
| Windows 10/11   | ✔️ |
| Windows 7       | ❌ |
| Debian Linux    | ✔️ |
| Arch Linux      | ✔️ |
| CentOS Stream 9 | ✔️ |
| FreeBSD         | ⏳ |
| MacOS           | ✔️ |
| Termux          | ❌ |

### Installation Steps

#### On Windows:

```powershell
.\win_install.bat
```

#### On Linux/MacOS/BSD:

```bash
sudo chmod +x unix_install.sh && ./unix_install.sh
```

### Usage

#### On Windows:

```powershell
python .\test_run.py
```

or

```bash
source .venv/bin/activate && python test_run.py && deactivate
```

#### On Linux/MacOS/BSD:

```bash
./test_run.py
```

## 3. Manual Installation

### Requirements 📋

Prerequisites:
* [Python](https://www.python.org/downloads/) > 3.8
* [FFmpeg](https://www.gyan.dev/ffmpeg/builds/)

### Install Python Dependencies

```bash
pip install -r requirements.txt
```

### Usage

#### On Windows:

```powershell
python test_run.py
```

#### On Linux/MacOS:

```bash
python3 test_run.py
```

## Update

Keep your script up to date with the latest features by running:

### On Windows:

```powershell
python update.py
```

### On Linux/MacOS:

```bash
python3 update.py
```
<br>

# Configuration

You can change some behaviors by tweaking the configuration file.

The configuration file is divided into several main sections:

## DEFAULT Settings

```json
{
    "root_path": "Video",
    "movie_folder_name": "Movie",
    "serie_folder_name": "TV",
    "map_episode_name": "%(tv_name)_S%(season)E%(episode)_%(episode_name)",
    "not_close": false,
    "show_trending": false
}
```

- `root_path`: Directory where all videos will be saved

### Path examples:
* Windows: `C:\\MyLibrary\\Folder` or `\\\\MyServer\\MyLibrary` (if you want to use a network folder)
* Linux/MacOS: `Desktop/MyLibrary/Folder`

`<br/><br/>`

- `movie_folder_name`: The name of the subdirectory where movies will be stored.
- `serie_folder_name`: The name of the subdirectory where TV series will be stored.

- `map_episode_name`: Template for TV series episode filenames

### Episode name usage:

You can choose different vars:

* `%(tv_name)` : The name of the TV show
* `%(season)` : The season number
* `%(episode)` : The episode number
* `%(episode_name)` : The name of the episode

`<br/><br/>`

- `not_close`: If true, continues running after downloading
- `show_trending`: Display trending content on startup
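To make the template mechanism concrete, here is a minimal sketch of how a `map_episode_name` template with `%(var)` tokens could be expanded. This is illustrative only, not the project's implementation; the zero-padding of season and episode numbers is an assumption.

```python
# Hedged sketch: expand a map_episode_name-style template.
# The %(...) tokens match the vars listed above; zero-padding is assumed.
def format_episode_name(template: str, tv_name: str, season: int,
                        episode: int, episode_name: str) -> str:
    values = {
        "%(tv_name)": tv_name,
        "%(season)": f"{season:02d}",
        "%(episode)": f"{episode:02d}",
        "%(episode_name)": episode_name,
    }
    # Plain substring replacement is enough here: no token is a
    # prefix-with-closing-paren of another, so order does not matter.
    for token, value in values.items():
        template = template.replace(token, value)
    return template

print(format_episode_name(
    "%(tv_name)_S%(season)E%(episode)_%(episode_name)",
    "MyShow", 1, 3, "Pilot"
))  # prints "MyShow_S01E03_Pilot"
```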
### qBittorrent Configuration

```json
{
    "config_qbit_tor": {
        "host": "192.168.1.59",
        "port": "8080",
        "user": "admin",
        "pass": "adminadmin"
    }
}
```

To enable qBittorrent integration, follow the setup guide [here](https://github.com/lgallard/qBittorrent-Controller/wiki/How-to-enable-the-qBittorrent-Web-UI).

<br>
## REQUESTS Settings

```json
{
    "timeout": 20,
    "max_retry": 3
}
```

- `timeout`: Maximum timeout (in seconds) for each request
- `max_retry`: Number of retry attempts per segment during M3U8 index download

<br>
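The retry behavior these two settings describe can be sketched as a simple loop. This is a hedged illustration of the concept, not project code; the real downloader makes HTTP calls (it uses httpx with `timeout=max_timeout`), while here `fetch` is an injected stand-in so the logic is testable offline.

```python
import time

# Hedged sketch: retry a fetch up to max_retry times, as the REQUESTS
# settings suggest. fetch is any zero-argument callable; max_retry >= 1
# is assumed. delay spaces out attempts (0 here for brevity).
def fetch_with_retry(fetch, max_retry: int = 3, delay: float = 0.0):
    last_error = None
    for _attempt in range(max_retry):
        try:
            return fetch()
        except Exception as err:
            last_error = err
            time.sleep(delay)
    raise last_error  # all attempts failed

# Illustrative flaky source: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("segment failed")
    return "ok"

print(fetch_with_retry(flaky))  # prints "ok" on the third attempt
```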
## BROWSER Settings

```json
{
    "headless": false
}
```

- `headless`: Controls whether to run the browser in headless mode

<br>
## M3U8_DOWNLOAD Settings

```json
{
    "tqdm_delay": 0.01,
    "tqdm_use_large_bar": true,
    "default_video_workser": 12,
    "default_audio_workser": 12,
    "cleanup_tmp_folder": true
}
```

- `tqdm_delay`: Delay between progress bar updates
- `tqdm_use_large_bar`: Use the detailed progress bar (recommended for desktop); set to false on mobile
- `default_video_workser`: Number of threads for video download
- `default_audio_workser`: Number of threads for audio download
- `cleanup_tmp_folder`: Remove temporary .ts files after download

<br>
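The worker settings above describe a thread pool pulling segments in parallel. A minimal sketch of that model, with an injected `download` stand-in instead of a real HTTP fetch (not project code):

```python
from concurrent.futures import ThreadPoolExecutor

# Hedged sketch: download .ts segments with N worker threads, as the
# default_video_workser / default_audio_workser settings imply.
def download_segments(urls, workers: int = 12, download=lambda u: f"data:{u}"):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() preserves input order, so the segments can later be
        # concatenated back into a single stream.
        return list(pool.map(download, urls))

segments = download_segments([f"seg_{i}.ts" for i in range(4)], workers=2)
print(segments)  # ['data:seg_0.ts', 'data:seg_1.ts', 'data:seg_2.ts', 'data:seg_3.ts']
```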
### Language Settings

The following codes can be used for `specific_list_audio` and `specific_list_subtitles`:

```
ara - Arabic        eng - English       ita - Italian       por - Portuguese
baq - Basque        fil - Filipino      jpn - Japanese      rum - Romanian
cat - Catalan       fin - Finnish       kan - Kannada       rus - Russian
chi - Chinese       fre - French        kor - Korean        spa - Spanish
cze - Czech         ger - German        mal - Malayalam     swe - Swedish
dan - Danish        glg - Galician      may - Malay         tam - Tamil
dut - Dutch         gre - Greek         nob - Norw. Bokmål  tel - Telugu
                    heb - Hebrew        nor - Norwegian     tha - Thai
forced-ita          hin - Hindi         pol - Polish        tur - Turkish
                    hun - Hungarian                         ukr - Ukrainian
                    ind - Indonesian                        vie - Vietnamese
```

> [!IMPORTANT]
> Language code availability may vary by site. Some platforms might:
>
> - Use different language codes
> - Support only a subset of these languages
> - Offer additional languages not listed here
>
> Check the specific site's available options if downloads fail.

> [!TIP]
> You can configure multiple languages by adding them to the lists:
>
> ```json
> "specific_list_audio": ["ita", "eng", "spa"],
> "specific_list_subtitles": ["ita", "eng", "spa"]
> ```
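Conceptually, these lists act as a filter over the tracks a site actually offers. A minimal sketch of that selection, with an illustrative track list (the real track metadata format is an assumption):

```python
# Hedged sketch: keep only tracks whose language code appears in the
# configured list (specific_list_audio / specific_list_subtitles).
def filter_tracks(tracks, wanted):
    return [t for t in tracks if t["language"] in wanted]

tracks = [
    {"language": "ita", "uri": "audio_ita.m3u8"},
    {"language": "eng", "uri": "audio_eng.m3u8"},
    {"language": "spa", "uri": "audio_spa.m3u8"},
]
print(filter_tracks(tracks, ["ita", "eng"]))
```

A track whose code is not in the list is simply skipped, which matches the note above: if a site uses different codes, nothing matches and the download appears to have no audio/subtitles for your selection.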
## M3U8_PARSER Settings

```json
{
    "force_resolution": -1,
    "get_only_link": false
}
```

- `force_resolution`: Force a specific resolution (-1 for best available, or specify 1080, 720, 360)
- `get_only_link`: Return the M3U8 playlist/index URL instead of downloading

<br>
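The `force_resolution` semantics can be sketched as a selection over the variant streams of a parsed master playlist. This is illustrative, not the project's parser; the variant list and the fallback-to-best behavior are assumptions.

```python
# Hedged sketch: pick a variant stream by height.
# variants: list of (height, playlist_url) pairs from a parsed master playlist.
def pick_variant(variants, force_resolution: int = -1):
    if force_resolution == -1:
        # -1 means "best available": take the tallest variant.
        return max(variants, key=lambda v: v[0])
    for height, url in variants:
        if height == force_resolution:
            return (height, url)
    # Assumed fallback: requested height missing, use the best stream.
    return max(variants, key=lambda v: v[0])

variants = [(360, "v360.m3u8"), (720, "v720.m3u8"), (1080, "v1080.m3u8")]
print(pick_variant(variants))       # (1080, 'v1080.m3u8')
print(pick_variant(variants, 720))  # (720, 'v720.m3u8')
```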
# COMMAND

- Download a specific season by entering its number.
  * **Example:** `1` will download *Season 1* only.

- Use the wildcard `*` to download every available season.
  * **Example:** `*` will download all seasons in the series.

- Specify a range of seasons using a hyphen `-`.
  * **Example:** `1-2` will download *Seasons 1 and 2*.

- Enter a season number followed by `-*` to download from that season to the end.
  * **Example:** `3-*` will download from *Season 3* to the final season.

<br>
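The selection syntax above can be sketched as a small parser. This is a minimal illustration of the four forms (`1`, `*`, `1-2`, `3-*`), not the project's actual command handling; input validation is omitted for brevity.

```python
# Hedged sketch: turn a season-selection string into a list of season numbers.
def parse_season_selection(selection: str, seasons_count: int) -> list:
    if selection == "*":
        # wildcard: every available season
        return list(range(1, seasons_count + 1))
    if "-" in selection:
        start, end = selection.split("-")
        # "3-*" runs from season 3 to the last season
        last = seasons_count if end == "*" else int(end)
        return list(range(int(start), last + 1))
    # single season number
    return [int(selection)]

print(parse_season_selection("3-*", 5))  # [3, 4, 5]
```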
# Docker

You can run the script in a Docker container. To build the image, run

```bash
docker build -t streaming-community-api .
```

and to run it, use

```bash
docker run -it -p 8000:8000 streaming-community-api
```

By default the videos will be saved in `/app/Video` inside the container. If you want to save them on your machine instead of inside the container, run

```bash
docker run -it -p 8000:8000 -v /path/to/download:/app/Video streaming-community-api
```

### Docker quick setup with Make

The Makefile (install `make` first) already provides two commands to build and run the container:

```bash
make build-container

# set your download directory as an ENV variable
make LOCAL_DIR=/path/to/download run-container
```

The `run-container` command also mounts the `config.json` file, so any change to the configuration file is reflected immediately without rebuilding the image.
# Website Status

| Website            | Status |
|:-------------------|:------:|
| 1337xx             | ✅ |
| Altadefinizione    | ✅ |
| AnimeUnity         | ✅ |
| BitSearch          | ✅ |
| CB01New            | ✅ |
| DDLStreamItaly     | ✅ |
| GuardaSerie        | ✅ |
| MostraGuarda       | ✅ |
| PirateBays         | ✅ |
| StreamingCommunity | ✅ |

# Tutorials

- [Windows Tutorial](https://www.youtube.com/watch?v=mZGqK4wdN-k)
- [Linux Tutorial](https://www.youtube.com/watch?v=0qUNXPE_mTg)
- [Pypy Tutorial](https://www.youtube.com/watch?v=C6m9ZKOK0p4)
- [Compiled .exe Tutorial](https://www.youtube.com/watch?v=pm4lqsxkTVo)

# To Do

- Create website API -> https://github.com/Lovi-0/StreamingCommunity/tree/test_gui_1

# SUPPORT

If you'd like to support this project, consider making a donation!

[![PayPal](https://img.shields.io/badge/Support%20us-PayPal-blue?style=flat-square&logo=paypal)](https://www.paypal.com/donate/?hosted_button_id=UXTWMT8P6HE2C)

# Contributing

Contributions are welcome! Steps:

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request

# Disclaimer

This software is provided "as is", without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, and noninfringement. In no event shall the authors or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
To test

1. Install the requirements
2. Set the MongoDB URL and create the database
3. Run server.py

4. Move to client\dashboard
5. Run npm install, npm run build, npm install -g serve

To do
- Dark mode across the whole page
- Add documentation
- Button to download an entire season
- Prompt asking whether to download new seasons during the watchlist check
- Better player on the watch page, with a button
- Download queue with an "add to queue" button (complex)
...
@ -1,22 +1,17 @@
# 23.11.24

import re
import logging
from typing import Dict, Any, List, Union


class Episode:
    def __init__(self, data: Dict[str, Any]):
        self.images = None
        self.data = data

        self.id: int = data.get('id')
        self.scws_id: int = data.get('scws_id')
        self.number: int = data.get('number')
        self.name: str = data.get('name')
        self.plot: str = data.get('plot')
        self.duration: int = data.get('duration')

    def collect_image(self, SITE_NAME, domain):
        self.image = f"https://cdn.{SITE_NAME}.{domain}/images/{self.data.get('images')[0]['filename']}"
        self.id: int = data.get('id', '')
        self.number: int = data.get('number', '')
        self.name: str = data.get('name', '')
        self.plot: str = data.get('plot', '')
        self.duration: int = data.get('duration', '')

    def __str__(self):
        return f"Episode(id={self.id}, number={self.number}, name='{self.name}', plot='{self.plot}', duration={self.duration} sec)"
@ -25,7 +20,7 @@ class EpisodeManager:
    def __init__(self):
        self.episodes: List[Episode] = []

    def add(self, episode_data: Dict[str, Any]):
    def add_episode(self, episode_data: Dict[str, Any]):
        """
        Add a new episode to the manager.

@ -34,20 +29,8 @@ class EpisodeManager:
        """
        episode = Episode(episode_data)
        self.episodes.append(episode)

    def get(self, index: int) -> Episode:
        """
        Retrieve an episode by its index in the episodes list.

        Parameters:
            - index (int): The zero-based index of the episode to retrieve.

        Returns:
            Episode: The Episode object at the specified index.
        """
        return self.episodes[index]

    def length(self) -> int:
    def get_length(self) -> int:
        """
        Get the number of episodes in the manager.

@ -71,23 +54,61 @@ class EpisodeManager:

class Season:
    def __init__(self, season_data: Dict[str, Union[int, str, None]]):
        self.images = {}
        self.season_data = season_data

        self.id: int = season_data.get('id')
        self.scws_id: int = season_data.get('scws_id')
        self.imdb_id: int = season_data.get('imdb_id')
        self.number: int = season_data.get('number')
        self.name: str = season_data.get('name')
        self.slug: str = season_data.get('slug')
        self.plot: str = season_data.get('plot')
        self.type: str = season_data.get('type')
        self.seasons_count: int = season_data.get('seasons_count')
        self.episodes: EpisodeManager = EpisodeManager()

    def collect_images(self, SITE_NAME, domain):
        for dict_image in self.season_data.get('images'):
            self.images[dict_image.get('type')] = f"https://cdn.{SITE_NAME}.{domain}/images/{dict_image.get('filename')}"
        self.episodes_count: int = season_data.get('episodes_count')

    def __str__(self):
        return f"Season(id={self.id}, number={self.number}, name='{self.name}', plot='{self.plot}', episodes_count={self.episodes_count})"


class SeasonManager:
    def __init__(self):
        self.seasons: List[Season] = []

    def add_season(self, season_data: Dict[str, Union[int, str, None]]):
        """
        Add a new season to the manager.

        Parameters:
            season_data (Dict[str, Union[int, str, None]]): A dictionary containing data for the new season.
        """
        season = Season(season_data)
        self.seasons.append(season)

    def get(self, index: int) -> Season:
        """
        Get a season item from the list by index.

        Parameters:
            index (int): The index of the season item to retrieve.

        Returns:
            Season: The season at the specified index.
        """
        return self.seasons[index]

    def get_length(self) -> int:
        """
        Get the number of seasons in the manager.

        Returns:
            int: Number of seasons.
        """
        return len(self.seasons)

    def clear(self) -> None:
        """
        Clear the seasons list.
        """
        self.seasons.clear()

    def __str__(self):
        return f"SeasonManager(num_seasons={len(self.seasons)})"


class Stream:
@ -1,89 +0,0 @@
# 14.06.24

import logging


# External libraries
import httpx
from bs4 import BeautifulSoup


# Internal utilities
from StreamingCommunity.Util._jsonConfig import config_manager
from StreamingCommunity.Util.headers import get_headers


# Variable
from StreamingCommunity.Api.Site.ddlstreamitaly.costant import COOKIE
max_timeout = config_manager.get_int("REQUESTS", "timeout")


class VideoSource:
    def __init__(self) -> None:
        """
        Initializes the VideoSource object with default values.
        """
        self.headers = {'user-agent': get_headers()}
        self.cookie = COOKIE

    def setup(self, url: str) -> None:
        """
        Sets up the video source with the provided URL.

        Parameters:
            - url (str): The URL of the video source.
        """
        self.url = url

    def make_request(self, url: str) -> str:
        """
        Make an HTTP GET request to the provided URL.

        Parameters:
            - url (str): The URL to make the request to.

        Returns:
            - str: The response content if successful, None otherwise.
        """
        try:
            response = httpx.get(
                url=url,
                headers=self.headers,
                cookies=self.cookie,
                timeout=max_timeout
            )
            response.raise_for_status()

            return response.text

        except Exception as err:
            logging.error(f"An error occurred: {err}")

            return None

    def get_playlist(self):
        """
        Retrieves the playlist URL from the video source.

        Returns:
            - str: The mp4 link if found, None otherwise.
        """
        try:
            text = self.make_request(self.url)

            if text:
                soup = BeautifulSoup(text, "html.parser")
                source = soup.find("source")

                if source:
                    mp4_link = source.get("src")
                    return mp4_link

                else:
                    logging.error("No <source> tag found in the HTML.")

            else:
                logging.error("Failed to retrieve content from the URL.")

        except Exception as e:
            logging.error(f"An error occurred while parsing the playlist: {e}")
@ -1,151 +0,0 @@
|
||||
# 05.07.24
|
||||
|
||||
import re
|
||||
import logging
|
||||
|
||||
|
||||
# External libraries
|
||||
import httpx
|
||||
import jsbeautifier
|
||||
from bs4 import BeautifulSoup
|
||||
|
||||
|
||||
# Internal utilities
|
||||
from StreamingCommunity.Util._jsonConfig import config_manager
|
||||
from StreamingCommunity.Util.headers import get_headers
|
||||
|
||||
|
||||
# Variable
|
||||
max_timeout = config_manager.get_int("REQUESTS", "timeout")
|
||||
|
||||
|
||||
class VideoSource:
|
||||
def __init__(self, url: str):
|
||||
"""
|
||||
Sets up the video source with the provided URL.
|
||||
|
||||
Parameters:
|
||||
- url (str): The URL of the video.
|
||||
"""
|
||||
self.url = url
|
||||
self.redirect_url = None
|
||||
self.maxstream_url = None
|
||||
self.m3u8_url = None
|
||||
self.headers = {'user-agent': get_headers()}
|
||||
|
||||
def get_redirect_url(self):
|
||||
"""
|
||||
Sends a request to the initial URL and extracts the redirect URL.
|
||||
"""
|
||||
try:
|
||||
|
||||
# Send a GET request to the initial URL
|
||||
response = httpx.get(self.url, headers=self.headers, follow_redirects=True, timeout=max_timeout)
|
||||
response.raise_for_status()
|
||||
|
||||
# Extract the redirect URL from the HTML
|
||||
soup = BeautifulSoup(response.text, "html.parser")
|
||||
self.redirect_url = soup.find("div", id="iframen1").get("data-src")
|
||||
logging.info(f"Redirect URL: {self.redirect_url}")
|
||||
|
||||
return self.redirect_url
|
||||
|
||||
except httpx.RequestError as e:
|
||||
logging.error(f"Error during the initial request: {e}")
|
||||
raise
|
||||
|
||||
except AttributeError as e:
|
||||
            logging.error(f"Error parsing HTML: {e}")
            raise

    def get_maxstream_url(self):
        """
        Sends a request to the redirect URL and extracts the Maxstream URL.
        """
        try:

            # Send a GET request to the redirect URL
            response = httpx.get(self.redirect_url, headers=self.headers, follow_redirects=True, timeout=max_timeout)
            response.raise_for_status()

            # Extract the Maxstream URL from the HTML
            soup = BeautifulSoup(response.text, "html.parser")
            maxstream_url = soup.find("a")

            if maxstream_url is None:

                # If no anchor tag is found, try the alternative method
                logging.warning("Anchor tag not found. Trying the alternative method.")
                headers = {
                    'origin': 'https://stayonline.pro',
                    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36 OPR/111.0.0.0',
                    'x-requested-with': 'XMLHttpRequest',
                }

                # Make a request to the stayonline API
                data = {'id': self.redirect_url.split("/")[-2], 'ref': ''}
                response = httpx.post('https://stayonline.pro/ajax/linkEmbedView.php', headers=headers, data=data)
                response.raise_for_status()
                uprot_url = response.json()['data']['value']

                # Retry getting the Maxstream URL
                response = httpx.get(uprot_url, headers=self.headers, follow_redirects=True, timeout=max_timeout)
                response.raise_for_status()
                soup = BeautifulSoup(response.text, "html.parser")
                maxstream_url = soup.find("a").get("href")

            else:
                maxstream_url = maxstream_url.get("href")

            self.maxstream_url = maxstream_url
            logging.info(f"Maxstream URL: {self.maxstream_url}")

            return self.maxstream_url

        except httpx.RequestError as e:
            logging.error(f"Error during the request to the redirect URL: {e}")
            raise

        except AttributeError as e:
            logging.error(f"Error parsing HTML: {e}")
            raise

    def get_m3u8_url(self):
        """
        Sends a request to the Maxstream URL and extracts the .m3u8 file URL.
        """
        try:

            # Send a GET request to the Maxstream URL
            response = httpx.get(self.maxstream_url, headers=self.headers, follow_redirects=True, timeout=max_timeout)
            response.raise_for_status()
            soup = BeautifulSoup(response.text, "html.parser")

            # Iterate over all script tags in the HTML
            for script in soup.find_all("script"):
                if "eval(function(p,a,c,k,e,d)" in script.text:

                    # Beautify the packed script
                    data_js = jsbeautifier.beautify(script.text)

                    # Extract the .m3u8 URL from the beautified script
                    match = re.search(r'sources:\s*\[\{\s*src:\s*"([^"]+)"', data_js)

                    if match:
                        self.m3u8_url = match.group(1)
                        logging.info(f"M3U8 URL: {self.m3u8_url}")
                        break

            return self.m3u8_url

        except Exception as e:
            logging.error(f"Error extracting the .m3u8 URL: {e}")
            raise

    def get_playlist(self):
        """
        Executes the entire flow to obtain the final .m3u8 file URL.
        """
        self.get_redirect_url()
        self.get_maxstream_url()
        return self.get_m3u8_url()
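The extraction step above beautifies the packed `eval(function(p,a,c,k,e,d)…)` player script and then pulls the stream URL out with a regex. A minimal stdlib-only sketch of just the regex step, run against a hypothetical already-beautified player snippet (the sample JS and URL are invented for illustration):

```python
import re

# Hypothetical beautified player script; real pages differ.
data_js = '''
jwplayer("vplayer").setup({
    sources: [{
        src: "https://example.invalid/hls/master.m3u8",
        type: "hls"
    }]
});
'''

# Same pattern as above: capture the quoted value of `src` inside `sources: [{ ... }]`.
match = re.search(r'sources:\s*\[\{\s*src:\s*"([^"]+)"', data_js)
m3u8_url = match.group(1) if match else None
print(m3u8_url)  # → https://example.invalid/hls/master.m3u8
```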
@@ -1,194 +0,0 @@
# 26.05.24

import re
import logging


# External libraries
import httpx
import jsbeautifier
from bs4 import BeautifulSoup


# Internal utilities
from StreamingCommunity.Util._jsonConfig import config_manager
from StreamingCommunity.Util.headers import get_headers


# Variable
max_timeout = config_manager.get_int("REQUESTS", "timeout")


class VideoSource:
    def __init__(self, url: str) -> None:
        """
        Initializes the VideoSource object with default values.

        Attributes:
            - url (str): The URL of the video source.
        """
        self.headers = {
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
            'accept-language': 'it-IT,it;q=0.9,en-US;q=0.8,en;q=0.7',
            'User-Agent': get_headers()
        }
        self.client = httpx.Client()
        self.url = url

    def make_request(self, url: str) -> str:
        """
        Make an HTTP GET request to the provided URL.

        Parameters:
            - url (str): The URL to make the request to.

        Returns:
            - str: The response content if successful, None otherwise.
        """
        try:
            response = self.client.get(
                url=url,
                headers=self.headers,
                follow_redirects=True,
                timeout=max_timeout
            )
            response.raise_for_status()
            return response.text

        except Exception as e:
            logging.error(f"Request failed: {e}")
            return None

    def parse_html(self, html_content: str) -> BeautifulSoup:
        """
        Parse the provided HTML content using BeautifulSoup.

        Parameters:
            - html_content (str): The HTML content to parse.

        Returns:
            - BeautifulSoup: Parsed HTML content if successful, None otherwise.
        """
        try:
            soup = BeautifulSoup(html_content, "html.parser")
            return soup

        except Exception as e:
            logging.error(f"Failed to parse HTML content: {e}")
            return None

    def get_iframe(self, soup):
        """
        Extracts the source URL of the second iframe in the provided BeautifulSoup object.

        Parameters:
            - soup (BeautifulSoup): A BeautifulSoup object representing the parsed HTML.

        Returns:
            - str: The source URL of the second iframe, or None if not found.
        """
        iframes = soup.find_all("iframe")
        if iframes and len(iframes) > 1:
            return iframes[1].get("src")

        return None

    def find_content(self, url):
        """
        Makes a request to the specified URL and parses the HTML content.

        Parameters:
            - url (str): The URL to fetch content from.

        Returns:
            - BeautifulSoup: A BeautifulSoup object representing the parsed HTML content, or None if the request fails.
        """
        content = self.make_request(url)
        if content:
            return self.parse_html(content)

        return None

    def get_result_node_js(self, soup):
        """
        Finds the packed script in the provided BeautifulSoup object and returns its beautified source.

        Parameters:
            - soup (BeautifulSoup): A BeautifulSoup object representing the parsed HTML content.

        Returns:
            - str: The beautified script source, or None if no matching script is found.
        """
        for script in soup.find_all("script"):
            if "eval" in str(script):
                return jsbeautifier.beautify(script.text)

        return None

    def get_playlist(self) -> str:
        """
        Retrieve the master playlist URL from the provided video page URL.

        Returns:
            str: The URL of the master playlist if successful, None otherwise.
        """
        try:
            html_content = self.make_request(self.url)
            if not html_content:
                logging.error("Failed to fetch HTML content.")
                return None

            soup = self.parse_html(html_content)
            if not soup:
                logging.error("Failed to parse HTML content.")
                return None

            # Find master playlist
            data_js = self.get_result_node_js(soup)

            if data_js is not None:
                match = re.search(r'sources:\s*\[\{\s*file:\s*"([^"]+)"', data_js)

                if match:
                    return match.group(1)

            else:

                iframe_src = self.get_iframe(soup)
                if not iframe_src:
                    logging.error("No iframe found.")
                    return None

                down_page_soup = self.find_content(iframe_src)
                if not down_page_soup:
                    logging.error("Failed to fetch down page content.")
                    return None

                pattern = r'data-link="(//supervideo[^"]+)"'
                match = re.search(pattern, str(down_page_soup))
                if not match:
                    logging.error("No player available for download.")
                    return None

                supervideo_url = "https:" + match.group(1)
                supervideo_soup = self.find_content(supervideo_url)
                if not supervideo_soup:
                    logging.error("Failed to fetch supervideo content.")
                    return None

                # Find master playlist
                data_js = self.get_result_node_js(supervideo_soup)

                match = re.search(r'sources:\s*\[\{\s*file:\s*"([^"]+)"', data_js)

                if match:
                    return match.group(1)

            return None

        except Exception as e:
            logging.error(f"An error occurred: {e}")
            return None
@@ -120,7 +120,8 @@ class VideoSource:
            response.raise_for_status()

        except Exception as e:
            logging.error(f"Failed to get vixcloud content with error: {e}")
            print("\n")
            console.print(Panel("[red bold]Coming soon", title="Notification", title_align="left", border_style="yellow"))
            sys.exit(0)

        # Parse response with BeautifulSoup to get content
@@ -168,56 +169,6 @@ class VideoSource:
        # Construct the new URL with updated query parameters
        return urlunparse(parsed_url._replace(query=query_string))

    def get_mp4(self, url_to_download: str, scws_id: str) -> list:
        """
        Generate download links for the specified resolutions from StreamingCommunity.

        Args:
            url_to_download (str): URL of the video page.
            scws_id (str): SCWS ID of the title.

        Returns:
            list: A list of video download URLs.
        """
        headers = {
            'referer': url_to_download,
            'user-agent': get_headers(),
        }

        # API request to get video details
        video_api_url = f'https://{self.base_name}.{self.domain}/api/video/{scws_id}'
        response = httpx.get(video_api_url, headers=headers)

        if response.status_code == 200:
            response_json = response.json()

            video_tracks = response_json.get('video_tracks', [])
            track = video_tracks[-1]
            console.print(f"[cyan]Available resolutions: [red]{[str(track['quality']) for track in video_tracks]}")

            # Request download link generation for the selected track
            download_response = httpx.post(
                url=f'https://{self.base_name}.{self.domain}/api/download/generate_link?scws_id={track["video_id"]}&rendition={track["quality"]}',
                headers={
                    'referer': url_to_download,
                    'user-agent': get_headers(),
                    'x-xsrf-token': config_manager.get("SITE", self.base_name)['extra']['x-xsrf-token']
                },
                cookies={
                    'streamingcommunity_session': config_manager.get("SITE", self.base_name)['extra']['streamingcommunity_session']
                }
            )

            if download_response.status_code == 200:
                return {'url': download_response.text, 'quality': track["quality"]}

            else:
                logging.error(f"Failed to generate link for resolution {track['quality']} (HTTP {download_response.status_code}).")

        else:
            logging.error(f"Error fetching video API URL (HTTP {response.status_code}).")
        return []


class VideoSourceAnime(VideoSource):
    def __init__(self, site_name: str):
@@ -270,4 +221,4 @@ class VideoSourceAnime(VideoSource):
        except Exception as e:
            logging.error(f"Error fetching embed URL: {e}")
            return None
        return None
@@ -1,51 +0,0 @@
# 02.07.24

from unidecode import unidecode


# Internal utilities
from StreamingCommunity.Util.console import console, msg


# Logic class
from .site import title_search, run_get_select_title, media_search_manager
from .title import download_title


# Variable
indice = 8
_useFor = "film_serie"
_deprecate = False
_priority = 2
_engineDownload = "tor"
from .costant import SITE_NAME


def search(string_to_search: str = None, get_onylDatabase: bool = False):
    """
    Main function of the application for film and series.
    """
    if string_to_search is None:
        string_to_search = msg.ask(f"\n[purple]Insert word to search in [red]{SITE_NAME}").strip()

    # Search on database
    len_database = title_search(unidecode(string_to_search))

    # Return list of elements
    if get_onylDatabase:
        return media_search_manager

    if len_database > 0:

        # Select title from list
        select_title = run_get_select_title()

        # Download title
        download_title(select_title)

    else:
        console.print(f"\n[red]Nothing matching was found for[white]: [purple]{string_to_search}")

        # Retry
        search()
@@ -1,15 +0,0 @@
# 09.06.24

import os


# Internal utilities
from StreamingCommunity.Util._jsonConfig import config_manager


SITE_NAME = os.path.basename(os.path.dirname(os.path.abspath(__file__)))
ROOT_PATH = config_manager.get('DEFAULT', 'root_path')
DOMAIN_NOW = config_manager.get_dict('SITE', SITE_NAME)['domain']

SERIES_FOLDER = config_manager.get('DEFAULT', 'serie_folder_name')
MOVIE_FOLDER = config_manager.get('DEFAULT', 'movie_folder_name')
@@ -1,84 +0,0 @@
# 02.07.24

# External libraries
import httpx
from bs4 import BeautifulSoup


# Internal utilities
from StreamingCommunity.Util.console import console
from StreamingCommunity.Util._jsonConfig import config_manager
from StreamingCommunity.Util.headers import get_headers
from StreamingCommunity.Util.table import TVShowManager


# Logic class
from StreamingCommunity.Api.Template import get_select_title
from StreamingCommunity.Api.Template.Util import search_domain
from StreamingCommunity.Api.Template.Class.SearchType import MediaManager


# Variable
from .costant import SITE_NAME
media_search_manager = MediaManager()
table_show_manager = TVShowManager()


def title_search(word_to_search: str) -> int:
    """
    Search for titles based on a search query.

    Parameters:
        - word_to_search (str): The title to search for.

    Returns:
        - int: The number of titles found.
    """

    # Find a new domain if the previous one does not work
    max_timeout = config_manager.get_int("REQUESTS", "timeout")
    domain_to_use, _ = search_domain(SITE_NAME, f"https://{SITE_NAME}")

    # Construct the full site URL and load the search page
    try:
        response = httpx.get(
            url=f"https://{SITE_NAME}.{domain_to_use}/search/{word_to_search}/1/",
            headers={'user-agent': get_headers()},
            follow_redirects=True,
            timeout=max_timeout
        )
        response.raise_for_status()

    except Exception as e:
        console.print(f"Site: {SITE_NAME}, request search error: {e}")

    # Create soup and find table
    soup = BeautifulSoup(response.text, "html.parser")

    # Scrape each film row in the table on the single page
    for tr in soup.find_all('tr'):
        try:
            title_info = {
                'name': tr.find_all("a")[1].get_text(strip=True),
                'url': tr.find_all("a")[1].get("href"),
                'seader': tr.find_all("td")[-5].get_text(strip=True),
                'leacher': tr.find_all("td")[-4].get_text(strip=True),
                'date': tr.find_all("td")[-3].get_text(strip=True).replace("'", ""),
                'size': tr.find_all("td")[-2].get_text(strip=True)
            }

            media_search_manager.add_media(title_info)

        except Exception:
            continue

    # Return the number of titles found
    return media_search_manager.get_length()


def run_get_select_title():
    """
    Display a selection of titles and prompt the user to choose one.
    """
    return get_select_title(table_show_manager, media_search_manager)
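The row scraper above indexes the `<td>` cells from the end (`[-5]` … `[-2]`), so the trailing columns line up even when the number of leading columns varies between rows. A minimal stdlib sketch of that mapping, using invented cell texts (the real keys in the scraper are spelled `seader`/`leacher`; conventional spellings are used here for illustration):

```python
# Hypothetical cell texts for one result row; real pages may have
# more leading columns, which is why negative indices are used.
cells = ["Category", "Title", "12", "3", "'24", "1.4 GB", "Actions"]

# Index from the end so the mapping survives extra leading columns.
row = {
    'seeders': cells[-5],
    'leechers': cells[-4],
    'date': cells[-3].replace("'", ""),   # strip the leading apostrophe, as above
    'size': cells[-2],
}
print(row)  # → {'seeders': '12', 'leechers': '3', 'date': '24', 'size': '1.4 GB'}
```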
@@ -1,66 +0,0 @@
# 02.07.24

import os


# External libraries
import httpx
from bs4 import BeautifulSoup


# Internal utilities
from StreamingCommunity.Util.console import console
from StreamingCommunity.Util.os import os_manager
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Util.headers import get_headers
from StreamingCommunity.Lib.Downloader import TOR_downloader


# Logic class
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem


# Config
from .costant import ROOT_PATH, DOMAIN_NOW, SITE_NAME, MOVIE_FOLDER


def download_title(select_title: MediaItem):
    """
    Downloads a media item and saves it as an MP4 file.

    Parameters:
        - select_title (MediaItem): The media item to be downloaded. This should be an instance of the MediaItem class, containing attributes like `name` and `url`.
    """

    start_message()
    console.print(f"[yellow]Download: [red]{select_title.name} \n")
    print()

    # Define output path
    title_name = os_manager.get_sanitize_file(select_title.name)
    mp4_path = os_manager.get_sanitize_path(
        os.path.join(ROOT_PATH, SITE_NAME, MOVIE_FOLDER, title_name.replace(".mp4", ""))
    )

    # Create output folder
    os_manager.create_path(mp4_path)

    # Make a request to the page with the magnet link
    full_site_name = f"{SITE_NAME}.{DOMAIN_NOW}"
    response = httpx.get(
        url="https://" + full_site_name + select_title.url,
        headers={
            'user-agent': get_headers()
        },
        follow_redirects=True
    )

    # Create soup and find table
    soup = BeautifulSoup(response.text, "html.parser")
    final_url = soup.find("a", class_="torrentdown1").get("href")

    # Tor manager
    manager = TOR_downloader()
    manager.add_magnet_link(final_url)
    manager.start_download()
    manager.move_downloaded_files(mp4_path)
@@ -1,51 +0,0 @@
# 26.05.24

from unidecode import unidecode


# Internal utilities
from StreamingCommunity.Util.console import console, msg


# Logic class
from .site import title_search, run_get_select_title, media_search_manager
from .film import download_film


# Variable
indice = 2
_useFor = "film"
_deprecate = False
_priority = 2
_engineDownload = "hls"
from .costant import SITE_NAME


def search(string_to_search: str = None, get_onylDatabase: bool = False):
    """
    Main function of the application for film and series.
    """
    if string_to_search is None:
        string_to_search = msg.ask(f"\n[purple]Insert word to search in [red]{SITE_NAME}").strip()

    # Search on database
    len_database = title_search(unidecode(string_to_search))

    # Return list of elements
    if get_onylDatabase:
        return media_search_manager

    if len_database > 0:

        # Select title from list
        select_title = run_get_select_title()

        # Download only film
        download_film(select_title)

    else:
        console.print(f"\n[red]Nothing matching was found for[white]: [purple]{string_to_search}")

        # Retry
        search()
@@ -1,15 +0,0 @@
# 26.05.24

import os


# Internal utilities
from StreamingCommunity.Util._jsonConfig import config_manager


SITE_NAME = os.path.basename(os.path.dirname(os.path.abspath(__file__)))
ROOT_PATH = config_manager.get('DEFAULT', 'root_path')
DOMAIN_NOW = config_manager.get_dict('SITE', SITE_NAME)['domain']

SERIES_FOLDER = config_manager.get('DEFAULT', 'serie_folder_name')
MOVIE_FOLDER = config_manager.get('DEFAULT', 'movie_folder_name')
@@ -1,69 +0,0 @@
# 26.05.24

import os
import time


# Internal utilities
from StreamingCommunity.Util.console import console, msg
from StreamingCommunity.Util.os import os_manager
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Util.call_stack import get_call_stack
from StreamingCommunity.Lib.Downloader import HLS_Downloader


# Logic class
from StreamingCommunity.Api.Template.Util import execute_search
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem


# Player
from StreamingCommunity.Api.Player.supervideo import VideoSource


# Config
from .costant import ROOT_PATH, SITE_NAME, MOVIE_FOLDER


def download_film(select_title: MediaItem):
    """
    Downloads a film using the provided media item.

    Parameters:
        - select_title (MediaItem): The media item to download, containing attributes like `name` and `url`.
    """

    # Start message and display film information
    start_message()
    console.print(f"[yellow]Download: [red]{select_title.name} \n")

    # Set domain and media ID for the video source
    video_source = VideoSource(select_title.url)

    # Define output path
    title_name = os_manager.get_sanitize_file(select_title.name) + ".mp4"
    mp4_path = os_manager.get_sanitize_path(
        os.path.join(ROOT_PATH, SITE_NAME, MOVIE_FOLDER, title_name.replace(".mp4", ""))
    )

    # Get m3u8 master playlist
    master_playlist = video_source.get_playlist()

    # Download the film using the m3u8 playlist and output filename
    r_proc = HLS_Downloader(
        m3u8_playlist=master_playlist,
        output_filename=os.path.join(mp4_path, title_name)
    ).start()

    if r_proc == 404:
        time.sleep(2)

        # Re-call the search function
        if msg.ask("[green]Do you want to continue [white]([red]y[white])[green] or return at home[white]([red]n[white]) ", choices=['y', 'n'], default='y', show_choices=True) == "n":
            frames = get_call_stack()
            execute_search(frames[-4])

    if r_proc is not None:
        console.print("[green]Result: ")
        console.print(r_proc)
@@ -1,86 +0,0 @@
# 26.05.24

# External libraries
import httpx
from bs4 import BeautifulSoup


# Internal utilities
from StreamingCommunity.Util.console import console
from StreamingCommunity.Util._jsonConfig import config_manager
from StreamingCommunity.Util.headers import get_headers
from StreamingCommunity.Util.table import TVShowManager


# Logic class
from StreamingCommunity.Api.Template import get_select_title
from StreamingCommunity.Api.Template.Util import search_domain
from StreamingCommunity.Api.Template.Class.SearchType import MediaManager


# Variable
from .costant import SITE_NAME, DOMAIN_NOW
media_search_manager = MediaManager()
table_show_manager = TVShowManager()


def title_search(title_search: str) -> int:
    """
    Search for titles based on a search query.

    Parameters:
        - title_search (str): The title to search for.

    Returns:
        - int: The number of titles found.
    """
    client = httpx.Client()

    # Find a new domain if the previous one does not work
    max_timeout = config_manager.get_int("REQUESTS", "timeout")
    domain_to_use, _ = search_domain(SITE_NAME, f"https://{SITE_NAME}")

    # Send request to search for title
    try:
        response = client.get(
            url=f"https://{SITE_NAME}.{domain_to_use}/?story={title_search.replace(' ', '+')}&do=search&subaction=search&titleonly=3",
            headers={
                'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
                'accept-language': 'it-IT,it;q=0.9,en-US;q=0.8,en;q=0.7',
                'User-Agent': get_headers()
            },
            timeout=max_timeout
        )
        response.raise_for_status()

    except Exception as e:
        console.print(f"Site: {SITE_NAME}, request search error: {e}")
        raise

    # Create soup and find table
    soup = BeautifulSoup(response.text, "html.parser")
    table_content = soup.find('div', id="dle-content")

    # Scrape each film div in the table on the single page
    for film_div in table_content.find_all('div', class_='col-lg-3'):
        title = film_div.find('h2', class_='titleFilm').get_text(strip=True)
        link = film_div.find('h2', class_='titleFilm').find('a')['href']
        imdb_rating = film_div.find('div', class_='imdb-rate').get_text(strip=True).split(":")[-1]

        film_info = {
            'name': title,
            'url': link,
            'score': imdb_rating
        }

        media_search_manager.add_media(film_info)

    # Return the number of titles found
    return media_search_manager.get_length()


def run_get_select_title():
    """
    Display a selection of titles and prompt the user to choose one.
    """
    return get_select_title(table_show_manager, media_search_manager)
@@ -1,51 +0,0 @@
# 21.05.24

from unidecode import unidecode


# Internal utilities
from StreamingCommunity.Util.console import console, msg


# Logic class
from .site import title_search, run_get_select_title, media_search_manager
from .film_serie import download_film, download_series


# Variable
indice = 1
_useFor = "anime"
_deprecate = False
_priority = 2
_engineDownload = "mp4"
from .costant import SITE_NAME


def search(string_to_search: str = None, get_onylDatabase: bool = False):

    if string_to_search is None:
        string_to_search = msg.ask(f"\n[purple]Insert word to search in [red]{SITE_NAME}").strip()

    # Search on database
    len_database = title_search(unidecode(string_to_search))

    # Return list of elements
    if get_onylDatabase:
        return media_search_manager

    if len_database > 0:

        # Select title from list (type: TV \ Movie \ OVA)
        select_title = run_get_select_title()

        if select_title.type == 'Movie' or select_title.type == 'OVA':
            download_film(select_title)

        else:
            download_series(select_title)

    else:
        console.print(f"\n[red]Nothing matching was found for[white]: [purple]{string_to_search}")

        # Retry
        search()
@@ -1,15 +0,0 @@
# 26.05.24

import os


# Internal utilities
from StreamingCommunity.Util._jsonConfig import config_manager


SITE_NAME = os.path.basename(os.path.dirname(os.path.abspath(__file__)))
ROOT_PATH = config_manager.get('DEFAULT', 'root_path')
DOMAIN_NOW = config_manager.get_dict('SITE', SITE_NAME)['domain']

SERIES_FOLDER = config_manager.get('DEFAULT', 'serie_folder_name')
MOVIE_FOLDER = config_manager.get('DEFAULT', 'movie_folder_name')
@@ -1,130 +0,0 @@
# 11.03.24

import os
import sys
import logging


# Internal utilities
from StreamingCommunity.Util.console import console, msg
from StreamingCommunity.Util.os import os_manager
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Lib.Downloader import MP4_downloader


# Logic class
from .util.ScrapeSerie import ScrapeSerieAnime
from StreamingCommunity.Api.Template.Util import manage_selection
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem


# Player
from StreamingCommunity.Api.Player.vixcloud import VideoSourceAnime


# Variable
from .costant import ROOT_PATH, SITE_NAME, SERIES_FOLDER, MOVIE_FOLDER


def download_episode(index_select: int, scrape_serie: ScrapeSerieAnime, video_source: VideoSourceAnime):
    """
    Downloads the selected episode.

    Parameters:
        - index_select (int): Index of the episode to download.
    """

    # Get information about the selected episode
    obj_episode = scrape_serie.get_info_episode(index_select)

    if obj_episode is not None:

        start_message()
        console.print(f"[yellow]Download: [red]EP_{obj_episode.number} \n")

        # Collect mp4 url
        video_source.get_embed(obj_episode.id)

        # Create output path
        title_name = f"{obj_episode.number}.mp4"

        if scrape_serie.is_series:
            mp4_path = os_manager.get_sanitize_path(
                os.path.join(ROOT_PATH, SITE_NAME, SERIES_FOLDER, scrape_serie.series_name)
            )
        else:
            mp4_path = os_manager.get_sanitize_path(
                os.path.join(ROOT_PATH, SITE_NAME, MOVIE_FOLDER, scrape_serie.series_name)
            )

        # Create output folder
        os_manager.create_path(mp4_path)

        # Start downloading
        r_proc = MP4_downloader(
            url=str(video_source.src_mp4).strip(),
            path=os.path.join(mp4_path, title_name)
        )

        if r_proc is not None:
            console.print("[green]Result: ")
            console.print(r_proc)

    else:
        logging.error(f"Skip index: {index_select}, cannot find info with the API.")


def download_series(select_title: MediaItem):
    """
    Function to download episodes of a TV series.

    Parameters:
        - select_title (MediaItem): The selected series, containing attributes like `id` and `slug`.
    """
    scrape_serie = ScrapeSerieAnime(SITE_NAME)
    video_source = VideoSourceAnime(SITE_NAME)

    # Set up video source
    scrape_serie.setup(None, select_title.id, select_title.slug)

    # Get the count of episodes for the TV series
    episoded_count = scrape_serie.get_count_episodes()
    console.print(f"[cyan]Episodes found: [red]{episoded_count}")

    # Prompt user to select an episode index
    last_command = msg.ask("\n[cyan]Insert media [red]index [yellow]or [red](*) [cyan]to download all media [yellow]or [red][1-2] [cyan]or [red][3-*] [cyan]for a range of media")

    # Manage user selection
    list_episode_select = manage_selection(last_command, episoded_count)

    # Download selected episodes
    if len(list_episode_select) == 1 and last_command != "*":
        download_episode(list_episode_select[0]-1, scrape_serie, video_source)

    # Download all the other selected episodes
    else:
        for i_episode in list_episode_select:
            download_episode(i_episode-1, scrape_serie, video_source)


def download_film(select_title: MediaItem):
    """
    Function to download a film.

    Parameters:
        - select_title (MediaItem): The selected film, containing attributes like `id` and `slug`.
    """

    # Init class
    scrape_serie = ScrapeSerieAnime(SITE_NAME)
    video_source = VideoSourceAnime(SITE_NAME)

    # Set up video source
    scrape_serie.setup(None, select_title.id, select_title.slug)
    scrape_serie.is_series = False

    # Start download
    download_episode(0, scrape_serie, video_source)
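The prompt above accepts a single index, `*` for everything, or ranges like `1-2` and `3-*`. The real parsing lives in `manage_selection` (StreamingCommunity.Api.Template.Util) and may differ; a minimal stdlib sketch of that selection grammar, for illustration only:

```python
def parse_selection(command: str, total: int) -> list:
    """Sketch: '*' -> all episodes, '3' -> [3], '1-2' -> [1, 2], '3-*' -> [3..total]."""
    command = command.strip()
    if command == "*":
        return list(range(1, total + 1))
    if "-" in command:
        start, end = command.split("-")
        end_index = total if end == "*" else int(end)
        return list(range(int(start), end_index + 1))
    return [int(command)]

print(parse_selection("3-*", 5))  # → [3, 4, 5]
```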
@ -1,165 +0,0 @@
# 10.12.23

import logging


# External libraries
import httpx
from bs4 import BeautifulSoup


# Internal utilities
from StreamingCommunity.Util.console import console
from StreamingCommunity.Util._jsonConfig import config_manager
from StreamingCommunity.Util.table import TVShowManager


# Logic class
from StreamingCommunity.Api.Template import get_select_title
from StreamingCommunity.Api.Template.Util import search_domain
from StreamingCommunity.Api.Template.Class.SearchType import MediaManager


# Variable
from .costant import SITE_NAME
media_search_manager = MediaManager()
table_show_manager = TVShowManager()


def get_token(site_name: str, domain: str) -> dict:
    """
    Retrieve session tokens from the specified website.

    Parameters:
        - site_name (str): The name of the site.
        - domain (str): The domain of the site.

    Returns:
        - dict: A dictionary with the keys 'animeunity_session' and 'csrf_token'.
    """

    # Send a GET request to the URL composed of the site name and domain
    response = httpx.get(f"https://www.{site_name}.{domain}")
    response.raise_for_status()

    # Parse the HTML response using BeautifulSoup
    find_csrf_token = None
    soup = BeautifulSoup(response.text, "html.parser")

    # Look for the meta tag that carries the CSRF token
    for html_meta in soup.find_all("meta"):
        if html_meta.get('name') == "csrf-token":
            find_csrf_token = html_meta.get('content')

    logging.info(f"Extract: ('animeunity_session': {response.cookies['animeunity_session']}, 'csrf_token': {find_csrf_token})")
    return {
        'animeunity_session': response.cookies['animeunity_session'],
        'csrf_token': find_csrf_token
    }


def get_real_title(record):
    """
    Get the real title from a record.

    The record is assumed to be a dictionary representing a row of JSON data.
    The default title is preferred, then the English title, then the Italian one.

    Parameters:
        - record (dict): A dictionary representing a row of JSON data.

    Returns:
        - str: The title found in the record, or None if no title is present.
    """
    if record['title'] is not None:
        return record['title']

    elif record['title_eng'] is not None:
        return record['title_eng']

    else:
        return record['title_it']


def title_search(title: str) -> int:
    """
    Perform an anime search using the provided title.

    Parameters:
        - title (str): The title to search for.

    Returns:
        - int: The number of results stored in the media search manager.
    """

    # Get token and session values from configuration
    max_timeout = config_manager.get_int("REQUESTS", "timeout")
    domain_to_use, _ = search_domain(SITE_NAME, f"https://www.{SITE_NAME}")

    data = get_token(SITE_NAME, domain_to_use)

    # Prepare cookies to be used in the request
    cookies = {
        'animeunity_session': data.get('animeunity_session')
    }

    # Prepare headers for the request
    headers = {
        'accept': 'application/json, text/plain, */*',
        'accept-language': 'it-IT,it;q=0.9,en-US;q=0.8,en;q=0.7',
        'content-type': 'application/json;charset=UTF-8',
        'x-csrf-token': data.get('csrf_token')
    }

    # Prepare JSON data to be sent in the request
    json_data = {
        'title': title  # Use the provided title for the search
    }

    # Send a POST request to the API endpoint for live search
    try:
        response = httpx.post(
            url=f'https://www.{SITE_NAME}.{domain_to_use}/livesearch',
            cookies=cookies,
            headers=headers,
            json=json_data,
            timeout=max_timeout
        )
        response.raise_for_status()

    except Exception as e:
        console.print(f"Site: {SITE_NAME}, request search error: {e}")
        return 0

    # Process each record returned in the response
    for dict_title in response.json()['records']:

        # Rename keys for consistency
        dict_title['name'] = get_real_title(dict_title)

        # Add the record to the media search manager
        media_search_manager.add_media({
            'id': dict_title.get('id'),
            'slug': dict_title.get('slug'),
            'name': dict_title.get('name'),
            'type': dict_title.get('type'),
            'score': dict_title.get('score'),
            'episodes_count': dict_title.get('episodes_count')
        })

    # Return the number of titles found
    return media_search_manager.get_length()


def run_get_select_title():
    """
    Display a selection of titles and prompt the user to choose one.
    """
    return get_select_title(table_show_manager, media_search_manager)
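The CSRF lookup in `get_token` above simply scans the page's meta tags. It can be exercised in isolation with the standard library alone; this sketch uses a hypothetical sample page (not a live AnimeUnity response) and `html.parser` instead of BeautifulSoup:

```python
from html.parser import HTMLParser


class CsrfTokenParser(HTMLParser):
    """Collect the content of a <meta name="csrf-token" content="..."> tag."""

    def __init__(self):
        super().__init__()
        self.csrf_token = None

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "meta" and attributes.get("name") == "csrf-token":
            self.csrf_token = attributes.get("content")


# Hypothetical sample page standing in for a live response
sample_html = '<html><head><meta name="csrf-token" content="abc123"></head></html>'

parser = CsrfTokenParser()
parser.feed(sample_html)
print(parser.csrf_token)  # abc123
```

The same scan-all-meta-tags approach works whether the token appears in `<head>` or anywhere else in the document.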
@ -1,97 +0,0 @@
# 01.03.24

import logging


# External libraries
import httpx


# Internal utilities
from StreamingCommunity.Util.headers import get_headers
from StreamingCommunity.Util._jsonConfig import config_manager
from StreamingCommunity.Api.Player.Helper.Vixcloud.util import EpisodeManager, Episode


# Variable
max_timeout = config_manager.get_int("REQUESTS", "timeout")


class ScrapeSerieAnime():
    def __init__(self, site_name: str):
        """
        Initialize the media scraper for a specific website.

        Args:
            site_name (str): Name of the streaming site to scrape
        """
        self.is_series = False
        self.headers = {'user-agent': get_headers()}
        self.base_name = site_name
        self.domain = config_manager.get_dict('SITE', self.base_name)['domain']

    def setup(self, version: str = None, media_id: int = None, series_name: str = None):
        self.version = version
        self.media_id = media_id

        if series_name is not None:
            self.is_series = True
            self.series_name = series_name
            self.obj_episode_manager: EpisodeManager = EpisodeManager()

    def get_count_episodes(self):
        """
        Retrieve the total number of episodes for the selected media.

        Returns:
            int: Total episode count, or None on error
        """
        try:
            response = httpx.get(
                url=f"https://www.{self.base_name}.{self.domain}/info_api/{self.media_id}/",
                headers=self.headers,
                timeout=max_timeout
            )
            response.raise_for_status()

            # Parse JSON response and return episode count
            return response.json()["episodes_count"]

        except Exception as e:
            logging.error(f"Error fetching episode count: {e}")
            return None

    def get_info_episode(self, index_ep: int) -> Episode:
        """
        Fetch detailed information for a specific episode.

        Args:
            index_ep (int): Zero-based index of the target episode

        Returns:
            Episode: Detailed episode information, or None on error
        """
        try:
            params = {
                "start_range": index_ep,
                "end_range": index_ep + 1
            }

            response = httpx.get(
                url=f"https://www.{self.base_name}.{self.domain}/info_api/{self.media_id}/{index_ep}",
                headers=self.headers,
                params=params,
                timeout=max_timeout
            )
            response.raise_for_status()

            # Return information about the requested episode
            json_data = response.json()["episodes"][-1]
            return Episode(json_data)

        except Exception as e:
            logging.error(f"Error fetching episode information: {e}")
            return None
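The `start_range`/`end_range` pair in `get_info_episode` always brackets exactly one episode, which is why the code takes the last record of the returned `episodes` list. A minimal sketch, with a hypothetical in-memory payload standing in for the `info_api` response:

```python
def episode_window(index_ep: int) -> dict:
    # Same one-episode window the scraper sends as query params
    return {"start_range": index_ep, "end_range": index_ep + 1}


# Hypothetical info_api-style payload for the window requested below
payload = {"episodes": [{"number": 5, "name": "Episode 5"}]}

params = episode_window(4)
# The requested episode is the last record of the one-element window
episode = payload["episodes"][-1]
print(params, episode["number"])  # {'start_range': 4, 'end_range': 5} 5
```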
@ -1,52 +0,0 @@
# 01.07.24

from unidecode import unidecode


# Internal utilities
from StreamingCommunity.Util.console import console, msg


# Logic class
from .site import title_search, run_get_select_title, media_search_manager
from .title import download_title


# Variable
indice = 7
_useFor = "film_serie"
_deprecate = False
_priority = 2
_engineDownload = "tor"
from .costant import SITE_NAME


def search(string_to_search: str = None, get_onylDatabase: bool = False):
    """
    Main function of the application for film and series.
    """
    if string_to_search is None:
        string_to_search = msg.ask(f"\n[purple]Insert word to search in [red]{SITE_NAME}").strip()

    # Search on database
    len_database = title_search(unidecode(string_to_search))

    # Return only the list of elements
    if get_onylDatabase:
        return media_search_manager

    if len_database > 0:

        # Select title from list
        select_title = run_get_select_title()

        # Download title
        download_title(select_title)

    else:
        console.print(f"\n[red]Nothing matching was found for[white]: [purple]{string_to_search}")

        # Retry
        search()
@ -1,15 +0,0 @@
# 01.07.24

import os


# Internal utilities
from StreamingCommunity.Util._jsonConfig import config_manager


SITE_NAME = os.path.basename(os.path.dirname(os.path.abspath(__file__)))
ROOT_PATH = config_manager.get('DEFAULT', 'root_path')
DOMAIN_NOW = config_manager.get_dict('SITE', SITE_NAME)['domain']

SERIES_FOLDER = config_manager.get('DEFAULT', 'serie_folder_name')
MOVIE_FOLDER = config_manager.get('DEFAULT', 'movie_folder_name')
@ -1,84 +0,0 @@
# 01.07.24

# External libraries
import httpx
from bs4 import BeautifulSoup


# Internal utilities
from StreamingCommunity.Util.console import console
from StreamingCommunity.Util._jsonConfig import config_manager
from StreamingCommunity.Util.headers import get_headers
from StreamingCommunity.Util.table import TVShowManager


# Logic class
from StreamingCommunity.Api.Template import get_select_title
from StreamingCommunity.Api.Template.Util import search_domain
from StreamingCommunity.Api.Template.Class.SearchType import MediaManager


# Variable
from .costant import SITE_NAME
media_search_manager = MediaManager()
table_show_manager = TVShowManager()


def title_search(word_to_search: str) -> int:
    """
    Search for titles based on a search query.

    Parameters:
        - word_to_search (str): The title to search for.

    Returns:
        - int: The number of titles found.
    """

    # Find a new domain if the previous one no longer works
    max_timeout = config_manager.get_int("REQUESTS", "timeout")
    domain_to_use, _ = search_domain(SITE_NAME, f"https://{SITE_NAME}")

    # Construct the full site URL and load the search page
    try:
        response = httpx.get(
            url=f"https://{SITE_NAME}.{domain_to_use}/search?q={word_to_search}&category=1&subcat=2&page=1",
            headers={'user-agent': get_headers()},
            timeout=max_timeout
        )
        response.raise_for_status()

    except Exception as e:
        console.print(f"Site: {SITE_NAME}, request search error: {e}")
        return 0

    # Create soup and parse the results list
    soup = BeautifulSoup(response.text, "html.parser")

    for title_div in soup.find_all("li", class_="card"):
        try:
            div_stats = title_div.find("div", class_="stats")

            title_info = {
                'name': title_div.find("a").get_text(strip=True),
                'url': title_div.find_all("a")[-1].get("href"),
                #'nDownload': div_stats.find_all("div")[0].get_text(strip=True),
                'size': div_stats.find_all("div")[1].get_text(strip=True),
                'seader': div_stats.find_all("div")[2].get_text(strip=True),
                'leacher': div_stats.find_all("div")[3].get_text(strip=True),
                'date': div_stats.find_all("div")[4].get_text(strip=True)
            }

            media_search_manager.add_media(title_info)

        except Exception:
            pass

    # Return the number of titles found
    return media_search_manager.get_length()


def run_get_select_title():
    """
    Display a selection of titles and prompt the user to choose one.
    """
    return get_select_title(table_show_manager, media_search_manager)
@ -1,47 +0,0 @@
# 01.07.24

import os


# Internal utilities
from StreamingCommunity.Util.console import console
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Util.os import os_manager
from StreamingCommunity.Lib.Downloader import TOR_downloader


# Logic class
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem


# Config
from .costant import ROOT_PATH, SITE_NAME, MOVIE_FOLDER


def download_title(select_title: MediaItem):
    """
    Downloads a media item and saves it as an MP4 file.

    Parameters:
        - select_title (MediaItem): The media item to be downloaded, an instance of the MediaItem class with attributes like `name` and `url`.
    """
    start_message()

    console.print(f"[yellow]Download: [red]{select_title.name} \n")
    print()

    # Define output path
    title_name = os_manager.get_sanitize_file(select_title.name.replace("-", "_") + ".mp4")
    mp4_path = os_manager.get_sanitize_path(
        os.path.join(ROOT_PATH, SITE_NAME, MOVIE_FOLDER, title_name.replace(".mp4", ""))
    )

    # Create output folder
    os_manager.create_path(mp4_path)

    # Tor manager
    manager = TOR_downloader()
    manager.add_magnet_link(select_title.url)
    manager.start_download()
    manager.move_downloaded_files(mp4_path)
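The path handling above delegates to `os_manager` for sanitizing names. A stdlib-only sketch of equivalent sanitization (hypothetical rules for illustration, not the actual `os_manager.get_sanitize_file` implementation):

```python
import re


def sanitize_filename(name: str) -> str:
    # Strip characters that are illegal on common filesystems,
    # then collapse runs of whitespace (hypothetical rules, for illustration)
    cleaned = re.sub(r'[\\/*?:"<>|]', "", name)
    return re.sub(r"\s+", " ", cleaned).strip()


print(sanitize_filename('The Movie: Part 1?'))  # The Movie Part 1
```

Sanitizing before `os.path.join` keeps a scraped title from injecting path separators or reserved characters into the output location.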
@ -1,52 +0,0 @@
# 09.06.24

from unidecode import unidecode


# Internal utilities
from StreamingCommunity.Util.console import console, msg


# Logic class
from .site import title_search, run_get_select_title, media_search_manager
from .film import download_film


# Variable
indice = 9
_useFor = "film"
_deprecate = False
_priority = 2
_engineDownload = "mp4"
from .costant import SITE_NAME


def search(string_to_search: str = None, get_onylDatabase: bool = False):
    """
    Main function of the application for film and series.
    """
    if string_to_search is None:
        string_to_search = msg.ask(f"\n[purple]Insert word to search in [red]{SITE_NAME}").strip()

    # Search on database
    len_database = title_search(unidecode(string_to_search))

    # Return only the list of elements
    if get_onylDatabase:
        return media_search_manager

    if len_database > 0:

        # Select title from list
        select_title = run_get_select_title()

        # NOTE: type handling is missing, so this does not work for series
        download_film(select_title)

    else:
        console.print(f"\n[red]Nothing matching was found for[white]: [purple]{string_to_search}")

        # Retry
        search()
@ -1,15 +0,0 @@
# 03.07.24

import os


# Internal utilities
from StreamingCommunity.Util._jsonConfig import config_manager


SITE_NAME = os.path.basename(os.path.dirname(os.path.abspath(__file__)))
ROOT_PATH = config_manager.get('DEFAULT', 'root_path')
DOMAIN_NOW = config_manager.get_dict('SITE', SITE_NAME)['domain']

SERIES_FOLDER = config_manager.get('DEFAULT', 'serie_folder_name')
MOVIE_FOLDER = config_manager.get('DEFAULT', 'movie_folder_name')
@ -1,69 +0,0 @@
# 03.07.24

import os
import time


# Internal utilities
from StreamingCommunity.Util.console import console, msg
from StreamingCommunity.Util.os import os_manager
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Util.call_stack import get_call_stack
from StreamingCommunity.Lib.Downloader import HLS_Downloader


# Logic class
from StreamingCommunity.Api.Template.Util import execute_search
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem


# Player
from StreamingCommunity.Api.Player.maxstream import VideoSource


# Config
from .costant import ROOT_PATH, SITE_NAME, MOVIE_FOLDER


def download_film(select_title: MediaItem):
    """
    Downloads a film using the provided media item.

    Parameters:
        - select_title (MediaItem): The media item to be downloaded, an instance of the MediaItem class with attributes like `name` and `url`.
    """

    # Start message and display film information
    start_message()
    console.print(f"[yellow]Download: [red]{select_title.name} \n")

    # Set up the API manager
    print(select_title.url)
    video_source = VideoSource(select_title.url)

    # Define output path
    title_name = os_manager.get_sanitize_file(select_title.name) + ".mp4"
    mp4_path = os_manager.get_sanitize_path(
        os.path.join(ROOT_PATH, SITE_NAME, MOVIE_FOLDER, title_name.replace(".mp4", ""))
    )

    # Get m3u8 master playlist
    master_playlist = video_source.get_playlist()

    # Download the film using the m3u8 playlist and output filename
    r_proc = HLS_Downloader(
        m3u8_playlist=master_playlist,
        output_filename=os.path.join(mp4_path, title_name)
    ).start()

    if r_proc == 404:
        time.sleep(2)

        # Re-run the search function
        if msg.ask("[green]Do you want to continue [white]([red]y[white])[green] or return at home[white]([red]n[white]) ", choices=['y', 'n'], default='y', show_choices=True) == "n":
            frames = get_call_stack()
            execute_search(frames[-4])

    if r_proc is not None:
        console.print("[green]Result: ")
        console.print(r_proc)
@ -1,74 +0,0 @@
# 03.07.24

# External libraries
import httpx
from bs4 import BeautifulSoup


# Internal utilities
from StreamingCommunity.Util._jsonConfig import config_manager
from StreamingCommunity.Util.headers import get_headers
from StreamingCommunity.Util.table import TVShowManager


# Logic class
from StreamingCommunity.Api.Template import get_select_title
from StreamingCommunity.Api.Template.Util import search_domain
from StreamingCommunity.Api.Template.Class.SearchType import MediaManager


# Variable
from .costant import SITE_NAME
media_search_manager = MediaManager()
table_show_manager = TVShowManager()


def title_search(word_to_search: str) -> int:
    """
    Search for titles based on a search query.

    Parameters:
        - word_to_search (str): The title to search for.

    Returns:
        - int: The number of titles found.
    """

    # Find a new domain if the previous one no longer works
    max_timeout = config_manager.get_int("REQUESTS", "timeout")
    domain_to_use, _ = search_domain(SITE_NAME, f"https://{SITE_NAME}")

    response = httpx.get(
        url=f"https://{SITE_NAME}.{domain_to_use}/?s={word_to_search}",
        headers={'user-agent': get_headers()},
        timeout=max_timeout
    )
    response.raise_for_status()

    # Create soup and find the result cards
    soup = BeautifulSoup(response.text, "html.parser")

    # For every element in the results
    for div in soup.find_all("div", class_="card-content"):

        url = div.find("h3").find("a").get("href")
        title = div.find("h3").find("a").get_text(strip=True)
        desc = div.find("p").find("strong").text

        title_info = {
            'name': title,
            'desc': desc,
            'url': url
        }

        media_search_manager.add_media(title_info)

    # Return the number of titles found
    return media_search_manager.get_length()


def run_get_select_title():
    """
    Display a selection of titles and prompt the user to choose one.
    """
    return get_select_title(table_show_manager, media_search_manager)
@ -1,58 +0,0 @@
# 09.06.24

import logging
from unidecode import unidecode


# Internal utilities
from StreamingCommunity.Util.console import console, msg


# Logic class
from .site import title_search, run_get_select_title, media_search_manager
from .series import download_thread


# Variable
indice = 3
_useFor = "serie"
_deprecate = False
_priority = 2
_engineDownload = "mp4"
from .costant import SITE_NAME


def search(string_to_search: str = None, get_onylDatabase: bool = False):
    """
    Main function of the application for film and series.
    """
    if string_to_search is None:

        # Ask the user for the term to search on the site
        string_to_search = msg.ask(f"\n[purple]Insert word to search in [red]{SITE_NAME}").strip()

    # Search on database
    len_database = title_search(unidecode(string_to_search))

    # Return only the list of elements
    if get_onylDatabase:
        return media_search_manager

    if len_database > 0:

        # Select title from list
        select_title = run_get_select_title()

        # Download only series
        if "Serie TV" in str(select_title.type):
            download_thread(select_title)

        else:
            logging.error(f"Not supported: {select_title.type}")

    else:
        console.print(f"\n[red]Nothing matching was found for[white]: [purple]{string_to_search}")

        # Retry
        search()
@ -1,16 +0,0 @@
# 09.06.24

import os


# Internal utilities
from StreamingCommunity.Util._jsonConfig import config_manager


SITE_NAME = os.path.basename(os.path.dirname(os.path.abspath(__file__)))
ROOT_PATH = config_manager.get('DEFAULT', 'root_path')
DOMAIN_NOW = config_manager.get_dict('SITE', SITE_NAME)['domain']
COOKIE = config_manager.get_dict('SITE', SITE_NAME)['extra']

SERIES_FOLDER = config_manager.get('DEFAULT', 'serie_folder_name')
MOVIE_FOLDER = config_manager.get('DEFAULT', 'movie_folder_name')
@ -1,141 +0,0 @@
# 13.06.24

import os
import sys
from urllib.parse import urlparse


# Internal utilities
from StreamingCommunity.Util.console import console
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Util.os import os_manager
from StreamingCommunity.Util.table import TVShowManager
from StreamingCommunity.Lib.Downloader import MP4_downloader


# Logic class
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem
from StreamingCommunity.Api.Template.Util import manage_selection, map_episode_title, validate_episode_selection


# Player
from .util.ScrapeSerie import GetSerieInfo
from StreamingCommunity.Api.Player.ddl import VideoSource


# Variable
from .costant import ROOT_PATH, SITE_NAME, SERIES_FOLDER
table_show_manager = TVShowManager()


def download_video(index_episode_selected: int, scape_info_serie: GetSerieInfo, video_source: VideoSource) -> None:
    """
    Download a single episode video.

    Parameters:
        - index_episode_selected (int): One-based index of the selected episode.
        - scape_info_serie (GetSerieInfo): Scraper holding the series information.
        - video_source (VideoSource): Video source used to resolve the playlist.
    """

    start_message()

    # Get info about the episode
    obj_episode = scape_info_serie.list_episodes[index_episode_selected - 1]
    console.print(f"[yellow]Download: [red]{obj_episode.get('name')}")
    print()

    # Define filename and path for the downloaded video
    title_name = os_manager.get_sanitize_file(
        f"{map_episode_title(scape_info_serie.tv_name, None, index_episode_selected, obj_episode.get('name'))}.mp4"
    )
    mp4_path = os.path.join(ROOT_PATH, SITE_NAME, SERIES_FOLDER, scape_info_serie.tv_name)

    # Create output folder
    os_manager.create_path(mp4_path)

    # Set up video source
    video_source.setup(obj_episode.get('url'))

    # Get m3u8 master playlist
    master_playlist = video_source.get_playlist()

    # Parse start page url
    parsed_url = urlparse(obj_episode.get('url'))

    # Start download
    r_proc = MP4_downloader(
        url=master_playlist,
        path=os.path.join(mp4_path, title_name),
        referer=f"{parsed_url.scheme}://{parsed_url.netloc}/",
    )

    if r_proc is not None:
        console.print("[green]Result: ")
        console.print(r_proc)


def download_thread(dict_serie: MediaItem):
    """
    Download all the episodes of a series.
    """

    # Start message and set up video source
    start_message()

    # Init class
    scape_info_serie = GetSerieInfo(dict_serie)
    video_source = VideoSource()

    # Collect information about the series
    list_dict_episode = scape_info_serie.get_episode_number()
    episodes_count = len(list_dict_episode)

    # Display the episodes list and manage user selection
    last_command = display_episodes_list(scape_info_serie.list_episodes)
    list_episode_select = manage_selection(last_command, episodes_count)

    try:
        list_episode_select = validate_episode_selection(list_episode_select, episodes_count)
    except ValueError as e:
        console.print(f"[red]{str(e)}")
        return

    # Download the selected episodes
    for i_episode in list_episode_select:
        download_video(i_episode, scape_info_serie, video_source)


def display_episodes_list(obj_episode_manager) -> str:
    """
    Display the episodes list and handle user input.

    Returns:
        last_command (str): Last command entered by the user.
    """

    # Set up table for displaying episodes
    table_show_manager.set_slice_end(10)

    # Add columns to the table
    column_info = {
        "Index": {'color': 'red'},
        "Name": {'color': 'magenta'},
    }
    table_show_manager.add_column(column_info)

    # Populate the table with episode information
    for i, media in enumerate(obj_episode_manager):
        table_show_manager.add_tv_show({
            'Index': str(i + 1),
            'Name': media.get('name'),
        })

    # Run the table and handle user input
    last_command = table_show_manager.run()

    if last_command == "q":
        console.print("\n[red]Quit [white]...")
        sys.exit(0)

    return last_command
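`manage_selection` (from the Template utilities) turns the command returned by `display_episodes_list` into a list of one-based episode indices. A self-contained sketch of the kind of parsing it performs, assuming a hypothetical grammar of a single index, an inclusive `start-end` range, or `*` for everything:

```python
def parse_selection(command: str, episodes_count: int) -> list:
    """Turn a selection command into one-based episode indices.

    Assumed grammar (hypothetical, mirroring manage_selection's role):
    "*" selects all episodes, "3" a single one, "2-5" an inclusive range.
    """
    command = command.strip()
    if command == "*":
        return list(range(1, episodes_count + 1))
    if "-" in command:
        start, end = (int(part) for part in command.split("-", 1))
        return list(range(start, end + 1))
    return [int(command)]


print(parse_selection("2-4", 10))  # [2, 3, 4]
```

Whatever the exact grammar, validating the resulting indices against `episodes_count` (as `validate_episode_selection` does above) keeps an out-of-range entry from indexing past `list_episodes`.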
@ -1,93 +0,0 @@
# 09.06.24

import logging


# External libraries
import httpx
from bs4 import BeautifulSoup


# Internal utilities
from StreamingCommunity.Util.console import console
from StreamingCommunity.Util._jsonConfig import config_manager
from StreamingCommunity.Util.headers import get_headers
from StreamingCommunity.Util.table import TVShowManager


# Logic class
from StreamingCommunity.Api.Template import get_select_title
from StreamingCommunity.Api.Template.Util import search_domain
from StreamingCommunity.Api.Template.Class.SearchType import MediaManager


# Variable
from .costant import SITE_NAME
media_search_manager = MediaManager()
table_show_manager = TVShowManager()


def title_search(word_to_search: str) -> int:
    """
    Search for titles based on a search query.

    Parameters:
        - word_to_search (str): The title to search for.

    Returns:
        - int: The number of titles found, or -999 if no results table is present.
    """

    # Find a new domain if the previous one no longer works
    max_timeout = config_manager.get_int("REQUESTS", "timeout")
    domain_to_use, _ = search_domain(SITE_NAME, f"https://{SITE_NAME}")

    # Send request to search for titles
    try:
        response = httpx.get(
            url=f"https://{SITE_NAME}.{domain_to_use}/search/?&q={word_to_search}&quick=1&type=videobox_video&nodes=11",
            headers={'user-agent': get_headers()},
            timeout=max_timeout
        )
        response.raise_for_status()

    except Exception as e:
        console.print(f"Site: {SITE_NAME}, request search error: {e}")
        return 0

    # Create soup and find the results table
    soup = BeautifulSoup(response.text, "html.parser")
    table_content = soup.find('ol', class_="ipsStream")

    if table_content:
        for title_div in table_content.find_all('li', class_='ipsStreamItem'):
            try:
                title_type = title_div.find("p", class_="ipsType_reset").find_all("a")[-1].get_text(strip=True)
                name = title_div.find("span", class_="ipsContained").find("a").get_text(strip=True)
                link = title_div.find("span", class_="ipsContained").find("a").get("href")

                title_info = {
                    'name': name,
                    'url': link,
                    'type': title_type
                }

                media_search_manager.add_media(title_info)

            except Exception as e:
                logging.error(f"Error processing title div: {e}")

        return media_search_manager.get_length()

    else:
        logging.error("No table content found.")
        return -999


def run_get_select_title():
    """
    Display a selection of titles and prompt the user to choose one.
    """
    return get_select_title(table_show_manager, media_search_manager)
@ -1,85 +0,0 @@
# 13.06.24

import sys
import logging
from typing import List, Dict


# External libraries
import httpx
from bs4 import BeautifulSoup


# Internal utilities
from StreamingCommunity.Util._jsonConfig import config_manager
from StreamingCommunity.Util.headers import get_headers


# Logic class
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem


# Variable
from ..costant import COOKIE
max_timeout = config_manager.get_int("REQUESTS", "timeout")


class GetSerieInfo:
    def __init__(self, dict_serie: MediaItem) -> None:
        """
        Initialize the GetSerieInfo object with default values.

        Parameters:
            - dict_serie (MediaItem): Media item containing the series information.
        """
        self.headers = {'user-agent': get_headers()}
        self.cookies = COOKIE
        self.url = dict_serie.url
        self.tv_name = None
        self.list_episodes = None

    def get_episode_number(self) -> List[Dict[str, str]]:
        """
        Retrieve the list of episodes for the series.

        Returns:
            List[Dict[str, str]]: List of dictionaries containing episode information.
        """
        try:
            response = httpx.get(f"{self.url}?area=online", cookies=self.cookies, headers=self.headers, timeout=max_timeout)
            response.raise_for_status()

        except Exception as e:
            logging.error(f"Insert values for [ips4_device_key, ips4_member_id, ips4_login_key] in the config.json file under SITE \\ ddlstreamitaly \\ cookie. Use the browser debugger and a cookie request from a valid account, filtered by DOC. Error: {e}")
            sys.exit(0)

        # Parse HTML content of the page
        soup = BeautifulSoup(response.text, "html.parser")

        # Get TV name
        self.tv_name = soup.find("span", class_="ipsType_break").get_text(strip=True)

        # Find the container of episodes
        table_content = soup.find('div', class_='ipsMargin_bottom:half')
        list_dict_episode = []

        for episode_div in table_content.find_all('a', href=True):

            # Get the text of the episode link
            part_name = episode_div.get_text(strip=True)

            if part_name:
                obj_episode = {
                    'name': part_name,
                    'url': episode_div['href']
                }
                list_dict_episode.append(obj_episode)

        self.list_episodes = list_dict_episode
        return list_dict_episode
@ -1,53 +0,0 @@
# 09.06.24

from unidecode import unidecode


# Internal utilities
from StreamingCommunity.Util.console import console, msg


# Logic class
from .site import title_search, run_get_select_title, media_search_manager
from .series import download_series


# Variable
indice = 4
_useFor = "serie"
_deprecate = False
_priority = 2
_engineDownload = "hls"
from .costant import SITE_NAME


def search(string_to_search: str = None, get_onylDatabase: bool = False):
    """
    Main function of the application for film and series.
    """

    if string_to_search is None:

        # Ask the user for a search term to look up on the site
        string_to_search = msg.ask(f"\n[purple]Insert word to search in [red]{SITE_NAME}").strip()

    # Search on database
    len_database = title_search(unidecode(string_to_search))

    # Return list of elements
    if get_onylDatabase:
        return media_search_manager

    if len_database > 0:

        # Select title from list
        select_title = run_get_select_title()

        # Download series
        download_series(select_title)

    else:
        console.print(f"\n[red]Nothing matching was found for[white]: [purple]{string_to_search}")

        # Retry
        search()
@ -1,15 +0,0 @@
# 09.06.24

import os


# Internal utilities
from StreamingCommunity.Util._jsonConfig import config_manager


SITE_NAME = os.path.basename(os.path.dirname(os.path.abspath(__file__)))
ROOT_PATH = config_manager.get('DEFAULT', 'root_path')
DOMAIN_NOW = config_manager.get_dict('SITE', SITE_NAME)['domain']

SERIES_FOLDER = config_manager.get('DEFAULT', 'serie_folder_name')
MOVIE_FOLDER = config_manager.get('DEFAULT', 'movie_folder_name')
@ -1,195 +0,0 @@
# 13.06.24

import os
import sys
import time


# Internal utilities
from StreamingCommunity.Util.console import console, msg
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Util.call_stack import get_call_stack
from StreamingCommunity.Util.table import TVShowManager
from StreamingCommunity.Lib.Downloader import HLS_Downloader


# Logic class
from StreamingCommunity.Api.Template.Util import manage_selection, map_episode_title, validate_selection, validate_episode_selection, execute_search
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem


# Player
from .util.ScrapeSerie import GetSerieInfo
from StreamingCommunity.Api.Player.supervideo import VideoSource


# Variable
from .costant import ROOT_PATH, SITE_NAME, SERIES_FOLDER
table_show_manager = TVShowManager()


def download_video(index_season_selected: int, index_episode_selected: int, scape_info_serie: GetSerieInfo) -> None:
    """
    Download a single episode video.

    Parameters:
        - index_season_selected (int): Index of the selected season.
        - index_episode_selected (int): Index of the selected episode.
        - scape_info_serie (GetSerieInfo): Scraper holding series and episode information.
    """
    start_message()

    # Get info about episode
    obj_episode = scape_info_serie.list_episodes[index_episode_selected - 1]
    console.print(f"[yellow]Download: [red]{index_season_selected}:{index_episode_selected} {obj_episode.get('name')}")
    print()

    # Define filename and path for the downloaded video
    mp4_name = f"{map_episode_title(scape_info_serie.tv_name, index_season_selected, index_episode_selected, obj_episode.get('name'))}.mp4"
    mp4_path = os.path.join(ROOT_PATH, SITE_NAME, SERIES_FOLDER, scape_info_serie.tv_name, f"S{index_season_selected}")

    # Setup video source
    video_source = VideoSource(obj_episode.get('url'))

    # Get m3u8 master playlist
    master_playlist = video_source.get_playlist()

    # Download the episode using the m3u8 playlist and output filename
    r_proc = HLS_Downloader(
        m3u8_playlist=master_playlist,
        output_filename=os.path.join(mp4_path, mp4_name)
    ).start()

    if r_proc == 404:
        time.sleep(2)

        # Re-call the search function
        if msg.ask("[green]Do you want to continue [white]([red]y[white])[green] or return home[white]([red]n[white]) ", choices=['y', 'n'], default='y', show_choices=True) == "n":
            frames = get_call_stack()
            execute_search(frames[-4])

    if r_proc is not None:
        console.print("[green]Result: ")
        console.print(r_proc)


def download_episode(scape_info_serie: GetSerieInfo, index_season_selected: int, download_all: bool = False) -> None:
    """
    Download all episodes of a season.

    Parameters:
        - scape_info_serie (GetSerieInfo): Scraper holding series and episode information.
        - index_season_selected (int): Index of the selected season.
        - download_all (bool): Download all episodes of the season.
    """

    # Start message and collect information about episodes
    start_message()
    list_dict_episode = scape_info_serie.get_episode_number(index_season_selected)
    episodes_count = len(list_dict_episode)

    if download_all:

        # Download all episodes without asking
        for i_episode in range(1, episodes_count + 1):
            download_video(index_season_selected, i_episode, scape_info_serie)
        console.print(f"\n[red]Finished downloading [yellow]season: [red]{index_season_selected}.")

    else:

        # Display episodes list and manage user selection
        last_command = display_episodes_list(scape_info_serie.list_episodes)
        list_episode_select = manage_selection(last_command, episodes_count)

        try:
            list_episode_select = validate_episode_selection(list_episode_select, episodes_count)
        except ValueError as e:
            console.print(f"[red]{str(e)}")
            return

        # Download selected episodes
        for i_episode in list_episode_select:
            download_video(index_season_selected, i_episode, scape_info_serie)


def download_series(dict_serie: MediaItem) -> None:
    """
    Download all episodes of a TV series.

    Parameters:
        - dict_serie (MediaItem): Object with url, name, type and score.
    """

    # Start message and set up video source
    start_message()

    # Init class
    scape_info_serie = GetSerieInfo(dict_serie)

    # Collect information about seasons
    seasons_count = scape_info_serie.get_seasons_number()

    # Prompt user for season selection and download episodes
    console.print(f"\n[green]Seasons found: [red]{seasons_count}")
    index_season_selected = msg.ask(
        "\n[cyan]Insert season number [yellow](e.g., 1), [red]* [cyan]to download all seasons, "
        "[yellow](e.g., 1-2) [cyan]for a range of seasons, or [yellow](e.g., 3-*) [cyan]to download from a specific season to the end"
    )

    # Manage and validate the selection
    list_season_select = manage_selection(index_season_selected, seasons_count)

    try:
        list_season_select = validate_selection(list_season_select, seasons_count)
    except ValueError as e:
        console.print(f"[red]{str(e)}")
        return

    # Loop through the selected seasons and download episodes
    for i_season in list_season_select:
        if len(list_season_select) > 1 or index_season_selected == "*":

            # Download all episodes if multiple seasons are selected or if '*' is used
            download_episode(scape_info_serie, i_season, download_all=True)
        else:

            # Otherwise, let the user select specific episodes for the single season
            download_episode(scape_info_serie, i_season, download_all=False)


def display_episodes_list(obj_episode_manager) -> str:
    """
    Display episodes list and handle user input.

    Returns:
        last_command (str): Last command entered by the user.
    """

    # Set up table for displaying episodes
    table_show_manager.set_slice_end(10)

    # Add columns to the table
    column_info = {
        "Index": {'color': 'red'},
        "Name": {'color': 'magenta'},
    }
    table_show_manager.add_column(column_info)

    # Populate the table with episodes information
    for media in obj_episode_manager:
        table_show_manager.add_tv_show({
            'Index': str(media.get('number')),
            'Name': media.get('name'),
        })

    # Run the table and handle user input
    last_command = table_show_manager.run()

    if last_command == "q":
        console.print("\n[red]Quit [white]...")
        sys.exit(0)

    return last_command
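The season prompt above accepts strings such as `1`, `1-2`, `3-*` or `*`, which `manage_selection` and `validate_selection` turn into concrete indices. Those helpers are not part of this diff, so the following is only a minimal sketch, under the assumption that the selection grammar works as the prompt describes; the function name `expand_selection` is hypothetical.

```python
# Hypothetical sketch of how a season/episode selection string such as
# "1", "1-3", "3-*" or "*" could be expanded into concrete 1-based
# indices. The real manage_selection/validate_selection implementations
# are not shown in this diff.
def expand_selection(selection: str, total: int) -> list:
    selection = selection.strip()

    # "*" selects every season/episode
    if selection == "*":
        return list(range(1, total + 1))

    # "a-b" is a range; "a-*" runs from a to the last one
    if "-" in selection:
        start_str, end_str = selection.split("-", 1)
        start = int(start_str)
        end = total if end_str == "*" else int(end_str)
        return list(range(start, end + 1))

    # Otherwise a single index
    return [int(selection)]
```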
@ -1,84 +0,0 @@
# 09.06.24


# External libraries
import httpx
from bs4 import BeautifulSoup


# Internal utilities
from StreamingCommunity.Util.console import console
from StreamingCommunity.Util._jsonConfig import config_manager
from StreamingCommunity.Util.headers import get_headers
from StreamingCommunity.Util.table import TVShowManager


# Logic class
from StreamingCommunity.Api.Template import get_select_title
from StreamingCommunity.Api.Template.Util import search_domain
from StreamingCommunity.Api.Template.Class.SearchType import MediaManager


# Variable
from .costant import SITE_NAME
media_search_manager = MediaManager()
table_show_manager = TVShowManager()


def title_search(word_to_search: str) -> int:
    """
    Search for titles based on a search query.

    Parameters:
        - word_to_search (str): The title to search for.

    Returns:
        - int: The number of titles found.
    """

    # Find a new domain if the previous one doesn't work
    max_timeout = config_manager.get_int("REQUESTS", "timeout")
    domain_to_use, _ = search_domain(SITE_NAME, f"https://{SITE_NAME}")

    # Send request to search for titles
    try:
        response = httpx.get(
            url=f"https://guardaserie.{domain_to_use}/?story={word_to_search}&do=search&subaction=search",
            headers={'user-agent': get_headers()},
            timeout=max_timeout
        )
        response.raise_for_status()

    except Exception as e:
        console.print(f"Site: {SITE_NAME}, request search error: {e}")

    # Create soup and find table
    soup = BeautifulSoup(response.text, "html.parser")
    table_content = soup.find('div', class_="mlnew-list")

    for serie_div in table_content.find_all('div', class_='mlnew'):

        try:
            title = serie_div.find('div', class_='mlnh-2').find("h2").get_text(strip=True)
            link = serie_div.find('div', class_='mlnh-2').find('a')['href']
            imdb_rating = serie_div.find('span', class_='mlnh-imdb').get_text(strip=True)

            serie_info = {
                'name': title,
                'url': link,
                'score': imdb_rating
            }

            media_search_manager.add_media(serie_info)

        except Exception:
            pass

    # Return the number of titles found
    return media_search_manager.get_length()


def run_get_select_title():
    """
    Display a selection of titles and prompt the user to choose one.
    """
    return get_select_title(table_show_manager, media_search_manager)
@ -1,110 +0,0 @@
# 13.06.24

import logging
from typing import List, Dict


# External libraries
import httpx
from bs4 import BeautifulSoup


# Internal utilities
from StreamingCommunity.Util.headers import get_headers


# Logic class
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem


class GetSerieInfo:
    def __init__(self, dict_serie: MediaItem) -> None:
        """
        Initializes the GetSerieInfo object with default values.

        Parameters:
            dict_serie (MediaItem): Dictionary containing series information (optional).
        """
        self.headers = {'user-agent': get_headers()}
        self.url = dict_serie.url
        self.tv_name = None
        self.list_episodes = None

    def get_seasons_number(self) -> int:
        """
        Retrieves the number of seasons of a TV series.

        Returns:
            int: Number of seasons of the TV series.
        """
        try:

            # Make an HTTP request to the series URL
            response = httpx.get(self.url, headers=self.headers, timeout=15)
            response.raise_for_status()

            # Parse HTML content of the page
            soup = BeautifulSoup(response.text, "html.parser")

            # Find the container of seasons
            table_content = soup.find('div', class_="tt_season")

            # Count the number of seasons
            seasons_number = len(table_content.find_all("li"))

            # Extract the name of the series
            self.tv_name = soup.find("h1", class_="front_title").get_text(strip=True)

            return seasons_number

        except Exception as e:
            logging.error(f"Error parsing HTML page: {e}")
            return -999

    def get_episode_number(self, n_season: int) -> List[Dict[str, str]]:
        """
        Retrieves episode information for a specific season.

        Parameters:
            n_season (int): The season number.

        Returns:
            List[Dict[str, str]]: List of dictionaries containing episode information.
        """
        try:

            # Make an HTTP request to the series URL
            response = httpx.get(self.url, headers=self.headers, timeout=15)
            response.raise_for_status()

            # Parse HTML content of the page
            soup = BeautifulSoup(response.text, "html.parser")

            # Find the container of episodes for the specified season
            table_content = soup.find('div', class_="tab-pane", id=f"season-{n_season}")

            # Extract episode information
            episode_content = table_content.find_all("li")
            list_dict_episode = []

            for episode_div in episode_content:
                index = episode_div.find("a").get("data-num")
                link = episode_div.find("a").get("data-link")
                name = episode_div.find("a").get("data-title")

                obj_episode = {
                    'number': index,
                    'name': name,
                    'url': link
                }

                list_dict_episode.append(obj_episode)

            self.list_episodes = list_dict_episode
            return list_dict_episode

        except Exception as e:
            logging.error(f"Error parsing HTML page: {e}")
            return []
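The scraper above reads each episode's `data-num`, `data-title` and `data-link` attributes from the `<a>` tags of a season pane. The same extraction can be sketched with the standard library's `html.parser`, without BeautifulSoup; the sample HTML below is a fabricated stand-in modeled on what the scraper expects, not a real page.

```python
from html.parser import HTMLParser

# Stdlib sketch of the attribute extraction done above with
# BeautifulSoup: collect data-num / data-title / data-link from the
# <a> tags of a (fabricated) season pane.
class EpisodeParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.episodes = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            a = dict(attrs)
            if "data-link" in a:
                self.episodes.append({
                    'number': a.get("data-num"),
                    'name': a.get("data-title"),
                    'url': a.get("data-link"),
                })

# Hypothetical markup shaped like the season container the scraper parses
sample = (
    '<div class="tab-pane" id="season-1"><ul>'
    '<li><a data-num="1x1" data-title="Pilot" data-link="//host/ep1"></a></li>'
    '<li><a data-num="1x2" data-title="Second" data-link="//host/ep2"></a></li>'
    '</ul></div>'
)

parser = EpisodeParser()
parser.feed(sample)
```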
@ -1,49 +0,0 @@
# 26.05.24

from unidecode import unidecode


# Internal utilities
from StreamingCommunity.Util.console import console, msg


# Logic class
from StreamingCommunity.Lib.TMBD import tmdb, Json_film
from .film import download_film


# Variable
indice = 9
_useFor = "film"
_deprecate = False
_priority = 2
_engineDownload = "hls"
from .costant import SITE_NAME


def search(string_to_search: str = None, get_onylDatabase: bool = False):
    """
    Main function of the application for films.
    """

    if string_to_search is None:
        string_to_search = msg.ask(f"\n[purple]Insert word to search in [red]{SITE_NAME}").strip()

    # Not available for the moment
    if get_onylDatabase:
        return 0

    # Search on database
    movie_id = tmdb.search_movie(unidecode(string_to_search))

    if movie_id is not None:
        movie_details: Json_film = tmdb.get_movie_details(tmdb_id=movie_id)

        # Download only film
        download_film(movie_details)

    else:
        console.print(f"\n[red]Nothing matching was found for[white]: [purple]{string_to_search}")

        # Retry
        search()
@ -1,15 +0,0 @@
# 26.05.24

import os


# Internal utilities
from StreamingCommunity.Util._jsonConfig import config_manager


SITE_NAME = os.path.basename(os.path.dirname(os.path.abspath(__file__)))
ROOT_PATH = config_manager.get('DEFAULT', 'root_path')
DOMAIN_NOW = config_manager.get_dict('SITE', SITE_NAME)['domain']

SERIES_FOLDER = config_manager.get('DEFAULT', 'serie_folder_name')
MOVIE_FOLDER = config_manager.get('DEFAULT', 'movie_folder_name')
@ -1,94 +0,0 @@
# 17.09.24

import os
import sys
import time
import logging


# External libraries
import httpx
from bs4 import BeautifulSoup


# Internal utilities
from StreamingCommunity.Util.console import console, msg
from StreamingCommunity.Util.os import os_manager
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Util.call_stack import get_call_stack
from StreamingCommunity.Util.headers import get_headers
from StreamingCommunity.Lib.Downloader import HLS_Downloader


# Logic class
from StreamingCommunity.Api.Template.Util import execute_search


# Player
from StreamingCommunity.Api.Player.supervideo import VideoSource


# TMDB
from StreamingCommunity.Lib.TMBD import Json_film


# Config
from .costant import ROOT_PATH, SITE_NAME, DOMAIN_NOW, MOVIE_FOLDER


def download_film(movie_details: Json_film):
    """
    Downloads a film using the provided TMDB id.

    Parameters:
        - movie_details (Json_film): Class with info about the film title.
    """

    # Start message and display film information
    start_message()
    console.print(f"[yellow]Download: [red]{movie_details.title} \n")

    # Make request to main site
    try:
        url = f"https://{SITE_NAME}.{DOMAIN_NOW}/set-movie-a/{movie_details.imdb_id}"
        response = httpx.get(url, headers={'User-Agent': get_headers()})
        response.raise_for_status()

    except Exception:
        logging.error(f"Not found in the server. Dict: {movie_details}")
        raise

    # Extract supervideo url
    soup = BeautifulSoup(response.text, "html.parser")
    player_links = soup.find("ul", class_="_player-mirrors").find_all("li")
    supervideo_url = "https:" + player_links[0].get("data-link")

    # Set domain and media ID for the video source
    video_source = VideoSource()
    video_source.setup(supervideo_url)

    # Define output path
    title_name = os_manager.get_sanitize_file(movie_details.title) + ".mp4"
    mp4_path = os.path.join(ROOT_PATH, SITE_NAME, MOVIE_FOLDER, title_name.replace(".mp4", ""))

    # Get m3u8 master playlist
    master_playlist = video_source.get_playlist()

    # Download the film using the m3u8 playlist and output filename
    r_proc = HLS_Downloader(
        m3u8_playlist=master_playlist,
        output_filename=os.path.join(mp4_path, title_name)
    ).start()

    if r_proc == 404:
        time.sleep(2)

        # Re-call the search function
        if msg.ask("[green]Do you want to continue [white]([red]y[white])[green] or return home[white]([red]n[white]) ", choices=['y', 'n'], default='y', show_choices=True) == "n":
            frames = get_call_stack()
            execute_search(frames[-4])

    if r_proc is not None:
        console.print("[green]Result: ")
        console.print(r_proc)
@ -1,51 +0,0 @@
# 02.07.24

from unidecode import unidecode


# Internal utilities
from StreamingCommunity.Util.console import console, msg


# Logic class
from .site import title_search, run_get_select_title, media_search_manager
from .title import download_title


# Variable
indice = 8
_useFor = "film_serie"
_deprecate = False
_priority = 2
_engineDownload = "tor"
from .costant import SITE_NAME


def search(string_to_search: str = None, get_onylDatabase: bool = False):
    """
    Main function of the application for film and series.
    """

    if string_to_search is None:
        string_to_search = msg.ask(f"\n[purple]Insert word to search in [red]{SITE_NAME}").strip()

    # Search on database
    len_database = title_search(unidecode(string_to_search))

    # Return list of elements
    if get_onylDatabase:
        return media_search_manager

    if len_database > 0:

        # Select title from list
        select_title = run_get_select_title()

        # Download title
        download_title(select_title)

    else:
        console.print(f"\n[red]Nothing matching was found for[white]: [purple]{string_to_search}")

        # Retry
        search()
@ -1,15 +0,0 @@
# 09.06.24

import os


# Internal utilities
from StreamingCommunity.Util._jsonConfig import config_manager


SITE_NAME = os.path.basename(os.path.dirname(os.path.abspath(__file__)))
ROOT_PATH = config_manager.get('DEFAULT', 'root_path')
DOMAIN_NOW = config_manager.get_dict('SITE', SITE_NAME)['domain']

SERIES_FOLDER = config_manager.get('DEFAULT', 'serie_folder_name')
MOVIE_FOLDER = config_manager.get('DEFAULT', 'movie_folder_name')
@ -1,89 +0,0 @@
# 02.07.24


# External libraries
import httpx
from bs4 import BeautifulSoup


# Internal utilities
from StreamingCommunity.Util.console import console
from StreamingCommunity.Util._jsonConfig import config_manager
from StreamingCommunity.Util.headers import get_headers
from StreamingCommunity.Util.table import TVShowManager


# Logic class
from StreamingCommunity.Api.Template import get_select_title
from StreamingCommunity.Api.Template.Class.SearchType import MediaManager


# Variable
from .costant import SITE_NAME, DOMAIN_NOW
media_search_manager = MediaManager()
table_show_manager = TVShowManager()


def title_search(word_to_search: str) -> int:
    """
    Search for titles based on a search query.

    Parameters:
        - word_to_search (str): The title to search for.

    Returns:
        - int: The number of titles found.
    """

    max_timeout = config_manager.get_int("REQUESTS", "timeout")

    # Construct the full site URL and load the search page
    try:
        response = httpx.get(
            url=f"https://1.{SITE_NAME}.{DOMAIN_NOW}/s/?q={word_to_search}&video=on",
            headers={
                'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
                'accept-language': 'it-IT,it;q=0.9,en-US;q=0.8,en;q=0.7',
                'referer': 'https://wwv.thepiratebay3.co/',
                'user-agent': get_headers()
            },
            follow_redirects=True,
            timeout=max_timeout
        )
        response.raise_for_status()

    except Exception as e:
        console.print(f"Site: {SITE_NAME}, request search error: {e}")

    # Create soup and find table
    soup = BeautifulSoup(response.text, "html.parser")
    table = soup.find("tbody")

    # Scrape div film in table on single page
    for tr in table.find_all('tr'):
        try:

            title_info = {
                'name': tr.find_all("a")[1].get_text(strip=True),
                'url': tr.find_all("td")[3].find("a").get("href"),
                'upload': tr.find_all("td")[2].get_text(strip=True),
                'size': tr.find_all("td")[4].get_text(strip=True),
                'seader': tr.find_all("td")[5].get_text(strip=True),
                'leacher': tr.find_all("td")[6].get_text(strip=True),
                'by': tr.find_all("td")[7].get_text(strip=True),
            }

            media_search_manager.add_media(title_info)

        except Exception:
            continue

    # Return the number of titles found
    return media_search_manager.get_length()


def run_get_select_title():
    """
    Display a selection of titles and prompt the user to choose one.
    """
    return get_select_title(table_show_manager, media_search_manager)
@ -1,45 +0,0 @@
# 02.07.24

import os
import sys


# Internal utilities
from StreamingCommunity.Util.console import console
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Util.os import os_manager
from StreamingCommunity.Lib.Downloader import TOR_downloader


# Logic class
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem


# Config
from .costant import ROOT_PATH, DOMAIN_NOW, SITE_NAME, MOVIE_FOLDER


def download_title(select_title: MediaItem):
    """
    Downloads a media item and saves it as an MP4 file.

    Parameters:
        - select_title (MediaItem): The media item to be downloaded. This should be an instance of the MediaItem class, containing attributes like `name` and `url`.
    """

    start_message()
    console.print(f"[yellow]Download: [red]{select_title.name} \n")
    print()

    # Define output path
    title_name = os_manager.get_sanitize_file(select_title.name.replace("-", "_") + ".mp4")
    mp4_path = os.path.join(ROOT_PATH, SITE_NAME, MOVIE_FOLDER, title_name.replace(".mp4", ""))

    # Create output folder
    os_manager.create_path(mp4_path)

    # Tor manager
    manager = TOR_downloader()
    manager.add_magnet_link(select_title.url)
    manager.start_download()
    manager.move_downloaded_files(mp4_path)
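`os_manager.get_sanitize_file` is used above to turn a torrent name into a safe filename, but its implementation is not part of this diff. A minimal sketch of what such a sanitizer typically does, under that assumption (the function name `sanitize_filename` is hypothetical):

```python
import re

# Hypothetical sketch of a filename sanitizer like
# os_manager.get_sanitize_file; the real implementation is not shown
# in this diff.
def sanitize_filename(name: str, replacement: str = "_") -> str:
    # Replace characters illegal on Windows/most filesystems,
    # plus ASCII control characters
    cleaned = re.sub(r'[<>:"/\\|?*\x00-\x1f]', replacement, name)
    # Trim trailing dots/spaces, which Windows rejects
    return cleaned.strip(" .")
```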
@ -19,7 +19,6 @@ _useFor = "film_serie"
_deprecate = False
_priority = 1
_engineDownload = "hls"
from .costant import SITE_NAME


def search(string_to_search: str = None, get_onylDatabase: bool = False):
@ -28,7 +27,7 @@ def search(string_to_search: str = None, get_onylDatabase: bool = False):
    """

    if string_to_search is None:
        string_to_search = msg.ask(f"\n[purple]Insert word to search in [red]{SITE_NAME}").strip()
        string_to_search = msg.ask("\n[purple]Insert word to search in all site").strip()

    # Get site domain and version and get result of the search
    site_version, domain = get_version_and_domain()
159
StreamingCommunity/Api/Site/streamingcommunity/api.py
Normal file
@ -0,0 +1,159 @@
|
||||
# 02.12.24
|
||||
|
||||
from datetime import datetime
|
||||
from typing import List, Dict
|
||||
|
||||
|
||||
# External
|
||||
import httpx
|
||||
|
||||
|
||||
# Util
|
||||
from StreamingCommunity.Util.headers import get_headers
|
||||
from StreamingCommunity.Util.console import console, msg
|
||||
from StreamingCommunity.Util._jsonConfig import config_manager
|
||||
|
||||
|
||||
# Internal
|
||||
from StreamingCommunity.Api.Site.streamingcommunity.costant import SITE_NAME
|
||||
from StreamingCommunity.Api.Site.streamingcommunity.site import get_version_and_domain
|
||||
|
||||
|
||||
# Variable
|
||||
max_timeout = 10
|
||||
|
||||
|
||||
def search_titles(title_search: str, domain: str) -> List[Dict]:
|
||||
"""
|
||||
Searches for content using an API based on a title and domain.
|
||||
|
||||
Args:
|
||||
title_search (str): The title to search for.
|
||||
domain (str): The domain of the API site to query.
|
||||
|
||||
Returns:
|
||||
List[Dict[str, str | int]]: A list of dictionaries containing information about the found content.
|
||||
"""
|
||||
titles = []
|
||||
|
||||
try:
|
||||
url = f"https://{SITE_NAME}.{domain}/api/search?q={title_search.replace(' ', '+')}"
|
||||
|
||||
response = httpx.get(
|
||||
url=url,
|
||||
headers={'user-agent': get_headers()},
|
||||
timeout=max_timeout
|
||||
)
|
||||
|
||||
response.raise_for_status()
|
||||
|
||||
except:
|
||||
console.print(f"[red]Error: {response.status_code}")
|
||||
return []
|
||||
|
||||
for dict_title in response.json().get('data', []):
|
||||
if dict_title.get('last_air_date'):
|
||||
release_year = datetime.strptime(dict_title['last_air_date'], '%Y-%m-%d').year
|
||||
else:
|
||||
release_year = ''
|
||||
|
||||
images = {}
|
||||
        for dict_image in dict_title.get('images', []):
            images[dict_image.get('type')] = f"https://cdn.{SITE_NAME}.{domain}/images/{dict_image.get('filename')}"

        titles.append({
            'id': dict_title.get("id", ""),
            'slug': dict_title.get("slug", ""),
            'name': dict_title.get("name", ""),
            'type': dict_title.get("type", ""),
            'seasons_count': dict_title.get("seasons_count", 0),
            'year': release_year,
            'images': images,
            'url': f"https://{SITE_NAME}.{domain}/titles/{dict_title.get('id')}-{dict_title.get('slug')}"
        })

    return titles


def get_infoSelectTitle(url_title: str, domain: str, version: str):

    headers = {
        'user-agent': get_headers(),
        'x-inertia': 'true',
        'x-inertia-version': version
    }

    response = httpx.get(url_title, headers=headers, timeout=10)

    if response.status_code == 200:
        json_response = response.json()['props']

        generes = []
        for g in json_response["genres"]:
            generes.append(g["name"])

        trailer = None
        if len(json_response['title']['trailers']) > 0:
            trailer = f"https://www.youtube.com/watch?v={json_response['title']['trailers'][0]['youtube_id']}"

        images = {}
        for dict_image in json_response['title'].get('images', []):
            images[dict_image.get('type')] = f"https://cdn.{SITE_NAME}.{domain}/images/{dict_image.get('filename')}"

        rsp = {
            'id': json_response['title']['id'],
            'name': json_response['title']['name'],
            'slug': json_response['title']['slug'],
            'plot': json_response['title']['plot'],
            'type': json_response['title']['type'],
            'season_count': json_response['title']['seasons_count'],
            'generes': generes,
            'trailer': trailer,
            'image': images
        }

        if json_response['title']['type'] == 'tv':
            season = json_response["loadedSeason"]["episodes"]
            episodes = []

            for e in season:
                episode = {
                    "id": e["id"],
                    "number": e["number"],
                    "name": e["name"],
                    "plot": e["plot"],
                    "duration": e["duration"],
                    "image": f"https://cdn.{SITE_NAME}.{domain}/images/{e['images'][0]['filename']}"
                }
                episodes.append(episode)

            rsp["episodes"] = episodes

        return rsp

    else:
        return []


def get_infoSelectSeason(url_title: str, number_season: int, domain: str, version: str):

    headers = {
        'user-agent': get_headers(),
        'x-inertia': 'true',
        'x-inertia-version': version
    }

    response = httpx.get(f"{url_title}/stagione-{number_season}", headers=headers, timeout=10)

    json_response = response.json().get('props').get('loadedSeason').get('episodes')
    json_episodes = []

    for json_ep in json_response:
        json_episodes.append({
            'id': json_ep.get('id'),
            'number': json_ep.get('number'),
            'name': json_ep.get('name'),
            'plot': json_ep.get('plot'),
            'image': f"https://cdn.{SITE_NAME}.{domain}/images/{json_ep.get('images')[0]['filename']}"
        })

    return json_episodes
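The season endpoint above maps Inertia's `loadedSeason` payload onto plain episode dicts. That mapping can be exercised without any network call; a minimal sketch with a made-up payload (the `SITE_NAME` and `domain` values here are placeholders, not the real site configuration):

```python
# Sketch of the episode mapping in get_infoSelectSeason, run against a
# made-up 'loadedSeason' payload; SITE_NAME and domain are placeholders.
SITE_NAME = "streamingcommunity"
domain = "example"

props = {
    'loadedSeason': {
        'episodes': [
            {'id': 1, 'number': 1, 'name': 'Pilot', 'plot': '...', 'images': [{'filename': 'ep1.webp'}]},
            {'id': 2, 'number': 2, 'name': 'Part Two', 'plot': '...', 'images': [{'filename': 'ep2.webp'}]},
        ]
    }
}

json_episodes = []
for json_ep in props.get('loadedSeason', {}).get('episodes', []):
    json_episodes.append({
        'id': json_ep.get('id'),
        'number': json_ep.get('number'),
        'name': json_ep.get('name'),
        'plot': json_ep.get('plot'),
        'image': f"https://cdn.{SITE_NAME}.{domain}/images/{json_ep.get('images')[0]['filename']}"
    })
```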

@@ -57,14 +57,8 @@ def download_film(select_title: MediaItem):
        output_filename=os.path.join(mp4_path, title_name)
    ).start()

    if r_proc == 404:
        time.sleep(2)

        # Re-call the search function
        if msg.ask("[green]Do you want to continue [white]([red]y[white])[green] or return home [white]([red]n[white]) ", choices=['y', 'n'], default='y', show_choices=True) == "n":
            frames = get_call_stack()
            execute_search(frames[-4])

    if r_proc is not None:
        console.print("[green]Result: ")
        console.print(r_proc)

    return os.path.join(mp4_path, title_name)
@@ -42,13 +42,13 @@ def download_video(tv_name: str, index_season_selected: int, index_episode_selec
    start_message()

    # Get info about the episode
    obj_episode = scrape_serie.episode_manager.get(index_episode_selected - 1)
    obj_episode = scrape_serie.obj_episode_manager.episodes[index_episode_selected - 1]
    console.print(f"[yellow]Download: [red]{index_season_selected}:{index_episode_selected} {obj_episode.name}")
    print()

    # Define filename and path for the downloaded video
    mp4_name = f"{map_episode_title(tv_name, index_season_selected, index_episode_selected, obj_episode.name)}.mp4"
    mp4_path = os.path.join(ROOT_PATH, SITE_NAME, SERIES_FOLDER, tv_name, f"S{index_season_selected}")
    mp4_path = os.path.join(ROOT_PATH, SITE_NAME, SERIES_FOLDER, tv_name, f"S{index_season_selected}")

    # Retrieve scws and, if available, the master playlist
    video_source.get_iframe(obj_episode.id)

@@ -61,18 +61,12 @@ def download_video(tv_name: str, index_season_selected: int, index_episode_selec
        output_filename=os.path.join(mp4_path, mp4_name)
    ).start()

    if r_proc == 404:
        time.sleep(2)

        # Re-call the search function
        if msg.ask("[green]Do you want to continue [white]([red]y[white])[green] or return home [white]([red]n[white]) ", choices=['y', 'n'], default='y', show_choices=True) == "n":
            frames = get_call_stack()
            execute_search(frames[-4])

    if r_proc is not None:
        console.print("[green]Result: ")
        console.print(r_proc)

    return os.path.join(mp4_path, mp4_name)


def download_episode(tv_name: str, index_season_selected: int, scrape_serie: ScrapeSerie, video_source: VideoSource, download_all: bool = False) -> None:
    """
    Download episodes of a selected season.

@@ -84,12 +78,13 @@ def download_episode(tv_name: str, index_season_selected: int, scrape_serie: Scr
    """

    # Clean memory of all episodes and get the number of the season
    scrape_serie.episode_manager.clear()
    scrape_serie.obj_episode_manager.clear()
    season_number = scrape_serie.obj_season_manager.seasons[index_season_selected - 1].number

    # Start message and collect information about episodes
    start_message()
    scrape_serie.collect_info_season(index_season_selected)
    episodes_count = scrape_serie.episode_manager.length()
    scrape_serie.collect_title_season(season_number)
    episodes_count = scrape_serie.obj_episode_manager.get_length()

    if download_all:

@@ -136,8 +131,8 @@ def download_series(select_season: MediaItem, version: str) -> None:
    video_source.setup(select_season.id)

    # Collect information about seasons
    scrape_serie.collect_info_title()
    seasons_count = scrape_serie.season_manager.seasons_count
    scrape_serie.collect_info_seasons()
    seasons_count = scrape_serie.obj_season_manager.get_length()

    # Prompt user for season selection and download episodes
    console.print(f"\n[green]Seasons found: [red]{seasons_count}")

@@ -187,7 +182,7 @@ def display_episodes_list(scrape_serie) -> str:
    table_show_manager.add_column(column_info)

    # Populate the table with episodes information
    for i, media in enumerate(scrape_serie.episode_manager.episodes):
    for i, media in enumerate(scrape_serie.obj_episode_manager.episodes):
        table_show_manager.add_tv_show({
            'Index': str(media.number),
            'Name': media.name,

@@ -3,7 +3,6 @@
import sys
import json
import logging
import secrets


# External libraries
@@ -32,7 +31,7 @@ from .costant import SITE_NAME
# Variable
media_search_manager = MediaManager()
table_show_manager = TVShowManager()
max_timeout = config_manager.get_int("REQUESTS", "timeout")


def get_version(text: str):
@@ -53,7 +52,7 @@ def get_version(text: str):

    # Extract version
    version = json.loads(soup.find("div", {"id": "app"}).get("data-page"))['version']
    console.print(f"[cyan]Get version [white]=> [red]{version} \n")
    #console.print(f"[cyan]Get version [white]=> [red]{version} \n")

    return version
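The `data-page` attribute read by `get_version` is just JSON that Inertia embeds in the page's `#app` div. A stdlib-only sketch of the same extraction, using a regex in place of BeautifulSoup and a made-up HTML snippet:

```python
import json
import re

# Sketch of the Inertia version extraction above, using a regex instead of
# BeautifulSoup to stay stdlib-only. The HTML snippet is made up.
html = '<div id="app" data-page=\'{"version": "65e52dcf34d64173542cd2dc6b8bb38f"}\'></div>'

match = re.search(r"data-page='([^']+)'", html)
version = json.loads(match.group(1))['version']
```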

@@ -75,17 +74,7 @@ def get_version_and_domain():
    domain_to_use, base_url = search_domain(SITE_NAME, f"https://{SITE_NAME}")

    # Extract version from the response
    try:
        version = get_version(httpx.get(
            url=base_url,
            headers={
                'user-agent': get_headers()
            },
            timeout=max_timeout
        ).text)
    except:
        console.print("[green]Auto generate version ...")
        version = secrets.token_hex(32 // 2)
    version = get_version(httpx.get(base_url, headers={'user-agent': get_headers()}).text)

    return version, domain_to_use

@@ -101,6 +90,10 @@ def title_search(title_search: str, domain: str) -> int:
    Returns:
        int: The number of titles found.
    """

    max_timeout = config_manager.get_int("REQUESTS", "timeout")

    # Send request to search for titles (replace "à" with "a" and spaces with "+")
    try:
        response = httpx.get(
            url=f"https://{SITE_NAME}.{domain}/api/search?q={title_search.replace(' ', '+')}",
@@ -119,7 +112,6 @@ def title_search(title_search: str, domain: str) -> int:
            'slug': dict_title.get('slug'),
            'name': dict_title.get('name'),
            'type': dict_title.get('type'),
            'date': dict_title.get('last_air_date'),
            'score': dict_title.get('score')
        })

@@ -10,7 +10,7 @@ import httpx
# Internal utilities
from StreamingCommunity.Util.headers import get_headers
from StreamingCommunity.Util._jsonConfig import config_manager
from StreamingCommunity.Api.Player.Helper.Vixcloud.util import Season, EpisodeManager
from StreamingCommunity.Api.Player.Helper.Vixcloud.util import SeasonManager, EpisodeManager


# Variable
@@ -26,7 +26,7 @@ class ScrapeSerie:
            site_name (str): Name of the streaming site to scrape from
        """
        self.is_series = False
        self.headers = {'user-agent': get_headers()}
        self.headers = {}
        self.base_name = site_name
        self.domain = config_manager.get_dict('SITE', self.base_name)['domain']

@@ -46,22 +46,23 @@ class ScrapeSerie:
        if series_name is not None:
            self.is_series = True
            self.series_name = series_name
            self.season_manager = None
            self.episode_manager: EpisodeManager = EpisodeManager()

    def collect_info_title(self) -> None:
        """
        Retrieve season information for a TV series from the streaming site.

        Raises:
            Exception: If there's an error fetching season information
        """
            self.obj_season_manager: SeasonManager = SeasonManager()
            self.obj_episode_manager: EpisodeManager = EpisodeManager()

            # Create headers
            self.headers = {
                'user-agent': get_headers(),
                'x-inertia': 'true',
                'x-inertia-version': self.version,
            }

    def collect_info_seasons(self) -> None:
        """
        Retrieve season information for a TV series from the streaming site.

        Raises:
            Exception: If there's an error fetching season information
        """
        try:

            response = httpx.get(
@@ -72,22 +73,17 @@ class ScrapeSerie:
            response.raise_for_status()

            # Extract seasons from the JSON response
            json_response = response.json().get('props')
            json_response = response.json().get('props', {}).get('title', {}).get('seasons', [])

            # Add each season to the season manager
            for dict_season in json_response:
                self.obj_season_manager.add_season(dict_season)

            # Collect info about the season
            self.season_manager = Season(json_response.get('title'))
            self.season_manager.collect_images(self.base_name, self.domain)

            # Collect first-episode info
            for i, ep in enumerate(json_response.get('loadedSeason').get('episodes')):
                self.season_manager.episodes.add(ep)
                self.season_manager.episodes.get(i).collect_image(self.base_name, self.domain)

        except Exception as e:
            logging.error(f"Error collecting season info: {e}")
            raise

    def collect_info_season(self, number_season: int) -> None:
    def collect_title_season(self, number_season: int) -> None:
        """
        Retrieve episode information for a specific season.

@@ -97,12 +93,6 @@ class ScrapeSerie:
        Raises:
            Exception: If there's an error fetching episode information
        """
        self.headers = {
            'user-agent': get_headers(),
            'x-inertia': 'true',
            'x-inertia-version': self.version,
        }

        try:
            response = httpx.get(
                url=f'https://{self.base_name}.{self.domain}/titles/{self.media_id}-{self.series_name}/stagione-{number_season}',
@@ -112,11 +102,11 @@ class ScrapeSerie:
            response.raise_for_status()

            # Extract episodes from the JSON response
            json_response = response.json().get('props').get('loadedSeason').get('episodes')
            json_response = response.json().get('props', {}).get('loadedSeason', {}).get('episodes', [])

            # Add each episode to the episode manager
            for dict_episode in json_response:
                self.episode_manager.add(dict_episode)
                self.obj_episode_manager.add_episode(dict_episode)

        except Exception as e:
            logging.error(f"Error collecting title season info: {e}")

@@ -49,16 +49,7 @@ def get_final_redirect_url(initial_url, max_timeout):

    # Create a client with redirects enabled
    try:
        with httpx.Client(
            headers={
                'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
                'accept-language': 'it-IT,it;q=0.9,en-US;q=0.8,en;q=0.7',
                'User-Agent': get_headers()
            },
            follow_redirects=True,
            timeout=max_timeout

        ) as client:
        with httpx.Client(follow_redirects=True, timeout=max_timeout, headers={'user-agent': get_headers()}) as client:
            response = client.get(initial_url)
            response.raise_for_status()

@@ -68,7 +59,7 @@ def get_final_redirect_url(initial_url, max_timeout):
        return final_url

    except Exception as e:
        console.print(f"\n[cyan]Test url[white]: [red]{initial_url}, [cyan]error[white]: [red]{e}")
        console.print(f"[cyan]Test url[white]: [red]{initial_url}, [cyan]error[white]: [red]{e}")
        return None

def search_domain(site_name: str, base_url: str):
@@ -78,6 +69,7 @@ def search_domain(site_name: str, base_url: str):

    Parameters:
        - site_name (str): The name of the site to search the domain for.
        - base_url (str): The base URL used to construct complete URLs.
        - follow_redirects (bool): Whether to follow redirect URLs.

    Returns:
        tuple: The found domain and the complete URL.

@@ -88,67 +80,47 @@ def search_domain(site_name: str, base_url: str):
    domain = str(config_manager.get_dict("SITE", site_name)['domain'])

    try:
        # Test the current domain
        with httpx.Client(
            headers={
                'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
                'accept-language': 'it-IT,it;q=0.9,en-US;q=0.8,en;q=0.7',
                'User-Agent': get_headers()
            },
            follow_redirects=True,
            timeout=max_timeout

        ) as client:
            response_follow = client.get(f"{base_url}.{domain}")
            response_follow.raise_for_status()
        # Test the current domain
        response_follow = httpx.get(f"{base_url}.{domain}", headers={'user-agent': get_headers()}, timeout=max_timeout, follow_redirects=True)
        response_follow.raise_for_status()

    except Exception as e:

        query = base_url.split("/")[-1]

        # Perform a Google search with multiple results
        search_results = list(search(query, num_results=5))
        #console.print(f"[green]Google search results[white]: {search_results}")
        first_url = google_search(query)
        console.print(f"[green]First url from google search[white]: [red]{first_url}")

        # Iterate through search results
        for first_url in search_results:
            console.print(f"[green]Checking url[white]: [red]{first_url}")

            # Check if the base URL matches the Google search result
            parsed_first_url = urlparse(first_url)
        if first_url:
            final_url = get_final_redirect_url(first_url, max_timeout)

            # Compare the base url from the Google search with the base url from config.json
            if parsed_first_url.netloc.split(".")[0] == base_url:
                console.print(f"[red]URL does not match base URL. Skipping.[/red]")
                continue
            if final_url is not None:
                console.print(f"\n[bold yellow]Suggestion:[/bold yellow] [white](Experimental)\n"
                              f"[cyan]New final URL[white]: [green]{final_url}")

                def extract_domain(url):
                    parsed_url = urlparse(url)
                    domain = parsed_url.netloc
                    return domain.split(".")[-1]

            try:
                final_url = get_final_redirect_url(first_url, max_timeout)
                new_domain_extract = extract_domain(str(final_url))

                if final_url is not None:
                    if msg.ask(f"[red]Do you want to auto update config.json - '[green]{site_name}[red]' with domain: [green]{new_domain_extract}", choices=["y", "n"], default="y").lower() == "y":

                        def extract_domain(url):
                            parsed_url = urlparse(url)
                            domain = parsed_url.netloc
                            return domain.split(".")[-1]
                        # Update domain in config.json
                        config_manager.config['SITE'][site_name]['domain'] = new_domain_extract
                        config_manager.write_config()

                        new_domain_extract = extract_domain(str(final_url))

                        if msg.ask(f"[cyan]\nDo you want to auto update site[white]: [red]{site_name}[cyan] with domain[white]: [red]{new_domain_extract}", choices=["y", "n"], default="y").lower() == "y":

                            # Update domain in config.json
                            config_manager.config['SITE'][site_name]['domain'] = new_domain_extract
                            config_manager.write_config()

                            # Return config domain
                            return new_domain_extract, f"{base_url}.{new_domain_extract}"
                        # Return config domain
                        #console.print(f"[cyan]Return domain: [red]{new_domain_extract} \n")
                        return new_domain_extract, f"{base_url}.{new_domain_extract}"

            except Exception as redirect_error:
                console.print(f"[red]Error following redirect for {first_url}: {redirect_error}")
                continue
                else:
                    console.print("[bold red]\nManually change the domain in the JSON file.[/bold red]")
                    raise

        # If no matching URL is found
        console.print("[bold red]No valid URL found matching the base URL.[/bold red]")
        raise Exception("No matching domain found")
        else:
            console.print("[bold red]No valid URL to follow redirects.[/bold red]")

    # Ensure the URL is in string format before parsing
    parsed_url = urlparse(str(response_follow.url))
@@ -156,9 +128,10 @@ def search_domain(site_name: str, base_url: str):
    tld = parse_domain.split('.')[-1]

    if tld is not None:

        # Update domain in config.json
        config_manager.config['SITE'][site_name]['domain'] = tld
        config_manager.write_config()

    # Return config domain
    return tld, f"{base_url}.{tld}"
    return tld, f"{base_url}.{tld}"
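The inline `extract_domain` helper keeps only the last dotted label of the host, which is the TLD stored in config.json. A standalone sketch of that logic:

```python
from urllib.parse import urlparse

def extract_domain(url):
    # Keep only the last dotted label of the host, e.g. "site.to" -> "to"
    parsed_url = urlparse(url)
    return parsed_url.netloc.split(".")[-1]
```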

@@ -117,29 +117,16 @@ def validate_selection(list_season_select: List[int], seasons_count: int) -> Lis
    Returns:
        - List[int]: Adjusted list of valid season numbers.
    """
    while True:
        try:

            # Remove any seasons greater than the available seasons
            valid_seasons = [season for season in list_season_select if 1 <= season <= seasons_count]

            # If the list is empty, the input was completely invalid
            if not valid_seasons:
                logging.error(f"Invalid selection: The selected seasons are outside the available range (1-{seasons_count}). Please try again.")
    # Remove any seasons greater than the available seasons
    valid_seasons = [season for season in list_season_select if 1 <= season <= seasons_count]

                # Re-prompt for valid input
                input_seasons = input(f"Enter valid season numbers (1-{seasons_count}): ")
                list_season_select = list(map(int, input_seasons.split(',')))
                continue  # Re-prompt the user if the selection is invalid

            return valid_seasons  # Return the valid seasons if the input is correct

        except ValueError:
            logging.error("Error: Please enter valid integers separated by commas.")
    # If the list is empty, the input was completely invalid
    if not valid_seasons:
        print()
        raise ValueError(f"Invalid selection: The selected seasons are outside the available range (1-{seasons_count}).")

            # Prompt the user for valid input again
            input_seasons = input(f"Enter valid season numbers (1-{seasons_count}): ")
            list_season_select = list(map(int, input_seasons.split(',')))
    return valid_seasons


# --> for episode
@@ -154,26 +141,13 @@ def validate_episode_selection(list_episode_select: List[int], episodes_count: i
    Returns:
        - List[int]: Adjusted list of valid episode numbers.
    """
    while True:
        try:

            # Remove any episodes greater than the available episodes
            valid_episodes = [episode for episode in list_episode_select if 1 <= episode <= episodes_count]
    # Remove any episodes greater than the available episodes
    valid_episodes = [episode for episode in list_episode_select if 1 <= episode <= episodes_count]

            # If the list is empty, the input was completely invalid
            if not valid_episodes:
                logging.error(f"Invalid selection: The selected episodes are outside the available range (1-{episodes_count}). Please try again.")
    # If the list is empty, the input was completely invalid
    if not valid_episodes:
        print()
        raise ValueError(f"Invalid selection: The selected episodes are outside the available range (1-{episodes_count}).")

            # Re-prompt for valid input
            input_episodes = input(f"Enter valid episode numbers (1-{episodes_count}): ")
            list_episode_select = list(map(int, input_episodes.split(',')))
            continue  # Re-prompt the user if the selection is invalid

            return valid_episodes

        except ValueError:
            logging.error("Error: Please enter valid integers separated by commas.")

            # Prompt the user for valid input again
            input_episodes = input(f"Enter valid episode numbers (1-{episodes_count}): ")
            list_episode_select = list(map(int, input_episodes.split(',')))
    return valid_episodes
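After the rewrite, both validators reduce to a range filter plus an empty check; a minimal sketch of that shared shape (the `filter_valid` name is hypothetical, not part of the codebase):

```python
def filter_valid(selection, count):
    # Hypothetical helper mirroring the validators above: keep only
    # numbers inside the available range 1..count, raise if none remain.
    valid = [n for n in selection if 1 <= n <= count]
    if not valid:
        raise ValueError(f"Invalid selection: outside the available range (1-{count}).")
    return valid
```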

@@ -1,156 +0,0 @@
# 09.06.24

import os
import sys
import ssl
import certifi
import logging


# External libraries
import httpx
from tqdm import tqdm


# Internal utilities
from StreamingCommunity.Util.headers import get_headers
from StreamingCommunity.Util.color import Colors
from StreamingCommunity.Util.console import console, Panel
from StreamingCommunity.Util._jsonConfig import config_manager
from StreamingCommunity.Util.os import internet_manager


# Logic class
from ...FFmpeg import print_duration_table


# Suppress SSL warnings
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)


# Config
GET_ONLY_LINK = config_manager.get_bool('M3U8_PARSER', 'get_only_link')
TQDM_USE_LARGE_BAR = config_manager.get_int('M3U8_DOWNLOAD', 'tqdm_use_large_bar')
REQUEST_VERIFY = config_manager.get_float('REQUESTS', 'verify_ssl')
REQUEST_TIMEOUT = config_manager.get_float('REQUESTS', 'timeout')


def MP4_downloader(url: str, path: str, referer: str = None, headers_: dict = None):
    """
    Downloads an MP4 video from a given URL with robust error handling and SSL bypass.

    Parameters:
        - url (str): The URL of the MP4 video to download.
        - path (str): The local path where the downloaded MP4 file will be saved.
        - referer (str, optional): The referer header value.
        - headers_ (dict, optional): Custom headers for the request.
    """
    # Early return for link-only mode
    if GET_ONLY_LINK:
        return {'path': path, 'url': url}

    # Validate URL
    if not (url.lower().startswith('http://') or url.lower().startswith('https://')):
        logging.error(f"Invalid URL: {url}")
        console.print(f"[bold red]Invalid URL: {url}[/bold red]")
        return None

    # Prepare headers
    try:
        headers = {}
        if referer:
            headers['Referer'] = referer

        # Use custom headers if provided, otherwise use the default user agent
        if headers_:
            headers.update(headers_)
        else:
            headers['User-Agent'] = get_headers()

    except Exception as header_err:
        logging.error(f"Error preparing headers: {header_err}")
        console.print(f"[bold red]Error preparing headers: {header_err}[/bold red]")
        return None

    try:
        # Create a custom transport that bypasses SSL verification
        transport = httpx.HTTPTransport(
            verify=False,  # Disable SSL certificate verification
            http2=True     # Optional: enable HTTP/2 support
        )

        # Download with streaming and progress tracking
        with httpx.Client(transport=transport, timeout=httpx.Timeout(60.0)) as client:
            with client.stream("GET", url, headers=headers, timeout=REQUEST_TIMEOUT) as response:
                response.raise_for_status()

                # Get total file size
                total = int(response.headers.get('content-length', 0))

                # Handle empty streams
                if total == 0:
                    console.print("[bold red]No video stream found.[/bold red]")
                    return None

                # Create progress bar
                progress_bar = tqdm(
                    total=total,
                    ascii='░▒█',
                    bar_format=f"{Colors.YELLOW}[MP4] {Colors.WHITE}({Colors.CYAN}video{Colors.WHITE}): "
                               f"{Colors.RED}{{percentage:.2f}}% {Colors.MAGENTA}{{bar}} {Colors.WHITE}[ "
                               f"{Colors.YELLOW}{{n_fmt}}{Colors.WHITE} / {Colors.RED}{{total_fmt}} {Colors.WHITE}] "
                               f"{Colors.YELLOW}{{elapsed}} {Colors.WHITE}< {Colors.CYAN}{{remaining}} {Colors.WHITE}| "
                               f"{Colors.YELLOW}{{rate_fmt}}{{postfix}} {Colors.WHITE}]",
                    unit='iB',
                    unit_scale=True,
                    desc='Downloading',
                    mininterval=0.05
                )

                # Ensure the directory exists
                os.makedirs(os.path.dirname(path), exist_ok=True)

                # Download the file
                with open(path, 'wb') as file, progress_bar as bar:
                    downloaded = 0
                    for chunk in response.iter_bytes(chunk_size=1024):
                        if chunk:
                            size = file.write(chunk)
                            downloaded += size
                            bar.update(size)

                            # Optional: Add a check to stop the download if needed
                            # if downloaded > MAX_DOWNLOAD_SIZE:
                            #     break

        # Post-download processing
        if os.path.exists(path) and os.path.getsize(path) > 0:
            console.print(Panel(
                f"[bold green]Download completed![/bold green]\n"
                f"[cyan]File size: [bold red]{internet_manager.format_file_size(os.path.getsize(path))}[/bold red]\n"
                f"[cyan]Duration: [bold]{print_duration_table(path, description=False, return_string=True)}[/bold]",
                title=f"{os.path.basename(path.replace('.mp4', ''))}",
                border_style="green"
            ))
            return path

        else:
            console.print("[bold red]Download failed or file is empty.[/bold red]")
            return None

    except httpx.HTTPStatusError as http_err:
        logging.error(f"HTTP error occurred: {http_err}")
        console.print(f"[bold red]HTTP Error: {http_err}[/bold red]")
        return None

    except httpx.RequestError as req_err:
        logging.error(f"Request error: {req_err}")
        console.print(f"[bold red]Request Error: {req_err}[/bold red]")
        return None

    except Exception as e:
        logging.error(f"Unexpected error during download: {e}")
        console.print(f"[bold red]Unexpected Error: {e}[/bold red]")
        return None
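The download loop in `MP4_downloader` is a standard chunked-stream copy with byte accounting. A self-contained sketch of just that loop, using in-memory buffers in place of the httpx response and output file:

```python
import io

# The response stream is simulated with an in-memory buffer; the loop
# mirrors MP4_downloader's chunked copy and byte accounting.
source = io.BytesIO(b"x" * 4096)
sink = io.BytesIO()

downloaded = 0
while True:
    chunk = source.read(1024)  # same chunk_size as the real download loop
    if not chunk:
        break
    downloaded += sink.write(chunk)
```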

@@ -1,222 +0,0 @@
# 23.06.24

import os
import sys
import time
import shutil
import logging


# Internal utilities
from StreamingCommunity.Util.color import Colors
from StreamingCommunity.Util.os import internet_manager
from StreamingCommunity.Util._jsonConfig import config_manager


# External libraries
from tqdm import tqdm
from qbittorrent import Client


# Tor config
HOST = str(config_manager.get_dict('DEFAULT', 'config_qbit_tor')['host'])
PORT = str(config_manager.get_dict('DEFAULT', 'config_qbit_tor')['port'])
USERNAME = str(config_manager.get_dict('DEFAULT', 'config_qbit_tor')['user'])
PASSWORD = str(config_manager.get_dict('DEFAULT', 'config_qbit_tor')['pass'])

# Config
TQDM_USE_LARGE_BAR = config_manager.get_int('M3U8_DOWNLOAD', 'tqdm_use_large_bar')
REQUEST_VERIFY = config_manager.get_float('REQUESTS', 'verify_ssl')
REQUEST_TIMEOUT = config_manager.get_float('REQUESTS', 'timeout')


class TOR_downloader:
    def __init__(self):
        """
        Initializes the TorrentManager instance.

        Parameters:
            - host (str): IP address or hostname of the qBittorrent Web UI.
            - port (int): Port number of the qBittorrent Web UI.
            - username (str): Username for logging into qBittorrent.
            - password (str): Password for logging into qBittorrent.
        """
        try:
            self.qb = Client(f'http://{HOST}:{PORT}/')
        except:
            logging.error("Start qBittorrent first.")

        self.username = USERNAME
        self.password = PASSWORD
        self.logged_in = False
        self.save_path = None
        self.torrent_name = None

        self.login()

    def login(self):
        """
        Logs into the qBittorrent Web UI.
        """
        try:
            self.qb.login(self.username, self.password)
            self.logged_in = True
            logging.info("Successfully logged in to qBittorrent.")

        except Exception as e:
            logging.error(f"Failed to log in: {str(e)}")
            self.logged_in = False

    def add_magnet_link(self, magnet_link):
        """
        Adds a torrent via magnet link to qBittorrent.

        Parameters:
            - magnet_link (str): Magnet link of the torrent to be added.
        """
        try:
            self.qb.download_from_link(magnet_link)
            logging.info("Added magnet link to qBittorrent.")

            # Get the hash of the latest added torrent
            torrents = self.qb.torrents()
            if torrents:
                self.latest_torrent_hash = torrents[-1]['hash']
                logging.info(f"Latest torrent hash: {self.latest_torrent_hash}")

        except Exception as e:
            logging.error(f"Failed to add magnet link: {str(e)}")

    def start_download(self):
        """
        Starts downloading the latest added torrent and monitors its progress.
        """
        try:

            torrents = self.qb.torrents()
            if not torrents:
                logging.error("No torrents found.")
                return

            # Sleep to let qBittorrent load the magnet
            time.sleep(10)
            latest_torrent = torrents[-1]
            torrent_hash = latest_torrent['hash']

            # Custom bar for mobile and PC
            if TQDM_USE_LARGE_BAR:
                bar_format = (
                    f"{Colors.YELLOW}[TOR] {Colors.WHITE}({Colors.CYAN}video{Colors.WHITE}): "
                    f"{Colors.RED}{{percentage:.2f}}% {Colors.MAGENTA}{{bar}} {Colors.WHITE}[ "
                    f"{Colors.YELLOW}{{elapsed}} {Colors.WHITE}< {Colors.CYAN}{{remaining}}{{postfix}} {Colors.WHITE}]"
                )
            else:
                bar_format = (
                    f"{Colors.YELLOW}Proc{Colors.WHITE}: "
                    f"{Colors.RED}{{percentage:.2f}}% {Colors.WHITE}| "
                    f"{Colors.CYAN}{{remaining}}{{postfix}} {Colors.WHITE}]"
                )

            progress_bar = tqdm(
                total=100,
                ascii='░▒█',
                bar_format=bar_format,
                unit_scale=True,
                unit_divisor=1024,
                mininterval=0.05
            )

            with progress_bar as pbar:
                while True:

                    # Get torrent state from qBittorrent
                    torrent_info = self.qb.get_torrent(torrent_hash)
                    self.save_path = torrent_info['save_path']
                    self.torrent_name = torrent_info['name']

                    # Fetch the important variables
                    pieces_have = torrent_info['pieces_have']
                    pieces_num = torrent_info['pieces_num']
                    progress = (pieces_have / pieces_num) * 100 if pieces_num else 0
                    pbar.n = progress

                    download_speed = torrent_info['dl_speed']
                    total_size = torrent_info['total_size']
                    downloaded_size = torrent_info['total_downloaded']

                    # Format the variables
                    downloaded_size_str = internet_manager.format_file_size(downloaded_size)
                    downloaded_size = downloaded_size_str.split(' ')[0]

                    total_size_str = internet_manager.format_file_size(total_size)
                    total_size = total_size_str.split(' ')[0]
                    total_size_unit = total_size_str.split(' ')[1]

                    average_internet_str = internet_manager.format_transfer_speed(download_speed)
                    average_internet = average_internet_str.split(' ')[0]
                    average_internet_unit = average_internet_str.split(' ')[1]

                    # Update the progress bar's postfix
                    if TQDM_USE_LARGE_BAR:
                        pbar.set_postfix_str(
                            f"{Colors.WHITE}[ {Colors.GREEN}{downloaded_size} {Colors.WHITE}< {Colors.GREEN}{total_size} {Colors.RED}{total_size_unit} "
                            f"{Colors.WHITE}| {Colors.CYAN}{average_internet} {Colors.RED}{average_internet_unit}"
                        )
                    else:
                        pbar.set_postfix_str(
                            f"{Colors.WHITE}[ {Colors.GREEN}{downloaded_size}{Colors.RED} {total_size} "
                            f"{Colors.WHITE}| {Colors.CYAN}{average_internet} {Colors.RED}{average_internet_unit}"
                        )

                    pbar.refresh()
                    time.sleep(0.2)

                    # Break at the end
                    if int(progress) == 100:
                        break

        except KeyboardInterrupt:
            logging.info("Download process interrupted.")

        except Exception as e:
            logging.error(f"Download error: {str(e)}")
            sys.exit(0)

    def move_downloaded_files(self, destination=None):
        """
        Moves the downloaded files of the latest torrent to another location.

        Parameters:
            - save_path (str): Current save path (output directory) of the torrent.
            - destination (str, optional): Destination directory to move files to. If None, moves to the current directory.

        Returns:
            - bool: True if the files are moved successfully, False otherwise.
        """
        video_extensions = {'.mp4', '.mkv', '.avi'}
        time.sleep(2)

        # List directories in the save path
        dirs = [d for d in os.listdir(self.save_path) if os.path.isdir(os.path.join(self.save_path, d))]

        for dir_name in dirs:
            if self.torrent_name.split(" ")[0] in dir_name:
|
||||
dir_path = os.path.join(self.save_path, dir_name)
|
||||
|
||||
# Ensure destination is set; if not, use current directory
|
||||
destination = destination or os.getcwd()
|
||||
|
||||
# Move only video files
|
||||
for file_name in os.listdir(dir_path):
|
||||
file_path = os.path.join(dir_path, file_name)
|
||||
|
||||
# Check if it's a file and if it has a video extension
|
||||
if os.path.isfile(file_path) and os.path.splitext(file_name)[1] in video_extensions:
|
||||
shutil.move(file_path, os.path.join(destination, file_name))
|
||||
logging.info(f"Moved file {file_name} to {destination}")
|
||||
|
||||
time.sleep(2)
|
||||
self.qb.delete_permanently(self.qb.torrents()[-1]['hash'])
|
||||
return True
|
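The progress figure above is derived from qBittorrent's piece counts rather than from byte totals. A minimal standalone sketch of that computation, including the same division-by-zero guard used when piece information is not yet available:

```python
def torrent_progress(pieces_have: int, pieces_num: int) -> float:
    # Same guard as start_download: report 0 until piece info is known,
    # otherwise the share of downloaded pieces as a percentage.
    return (pieces_have / pieces_num) * 100 if pieces_num else 0.0

print(torrent_progress(512, 1024))  # 50.0
```

`int(progress) == 100` is then enough to detect completion, since `pieces_have` can never exceed `pieces_num`.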
@ -1,5 +1,3 @@
# 23.06.24

from .HLS.downloader import HLS_Downloader
from .MP4.downloader import MP4_downloader
from .TOR.downloader import TOR_downloader
@ -1,76 +0,0 @@
# 29.06.24

import tempfile
import logging


# External library
from bs4 import BeautifulSoup
from seleniumbase import Driver


# Internal utilities
from StreamingCommunity.Util._jsonConfig import config_manager


# Config
USE_HEADLESS = config_manager.get_bool("BROWSER", "headless")


class WebAutomation:
    """
    A class for automating web interactions using SeleniumBase and BeautifulSoup.
    """

    def __init__(self):
        """
        Initializes the WebAutomation instance with a SeleniumBase Driver.
        Headless mode is controlled by the "BROWSER.headless" config value.
        """
        logging.getLogger('seleniumbase').setLevel(logging.ERROR)

        self.driver = Driver(
            uc=True,
            uc_cdp_events=True,
            headless=USE_HEADLESS,
            user_data_dir=tempfile.mkdtemp(),
            chromium_arg="--disable-search-engine-choice-screen"
        )

    def quit(self):
        """
        Quits the WebDriver instance.
        """
        self.driver.quit()

    def get_page(self, url):
        """
        Navigates the browser to the specified URL.

        Parameters:
            url (str): The URL to navigate to.
        """
        self.driver.get(url)

    def retrieve_soup(self):
        """
        Retrieves the BeautifulSoup object for the current page's HTML content.

        Returns:
            BeautifulSoup: Parsed HTML content of the current page.
        """
        html_content = self.driver.page_source
        soup = BeautifulSoup(html_content, 'html.parser')
        return soup

    def get_content(self):
        """
        Returns the HTML content of the current page.

        Returns:
            str: The HTML content of the current page.
        """
        return self.driver.page_source
@ -1,68 +0,0 @@
# 01.03.2023

import os
import sys
import time


# Internal utilities
from .version import __version__, __author__, __title__
from StreamingCommunity.Util.console import console


# External library
import httpx


# Variable
if getattr(sys, 'frozen', False):  # PyInstaller mode
    base_path = os.path.join(sys._MEIPASS, "StreamingCommunity")
else:
    base_path = os.path.dirname(__file__)


def update():
    """
    Check for updates on GitHub and display relevant information.
    """
    console.print("[green]Checking GitHub version [white]...")

    # Make the GitHub API requests and handle potential errors
    try:
        response_repository = httpx.get(f"https://api.github.com/repos/{__author__}/{__title__}").json()
        response_releases = httpx.get(f"https://api.github.com/repos/{__author__}/{__title__}/releases").json()

    except Exception as e:
        console.print(f"[red]Error accessing GitHub API: {e}")
        return

    # Get stargazers count from the repository
    stargazers_count = response_repository.get('stargazers_count', 0)

    # Calculate total download count from all releases
    total_download_count = sum(asset['download_count'] for release in response_releases for asset in release.get('assets', []))

    # Get latest version name
    if response_releases:
        last_version = response_releases[0].get('name', 'Unknown')
    else:
        last_version = 'Unknown'

    # Calculate the percentage of stars relative to the download count
    if total_download_count > 0 and stargazers_count > 0:
        percentual_stars = round(stargazers_count / total_download_count * 100, 2)
    else:
        percentual_stars = 0

    # Check installed version
    if str(__version__).replace('v', '') != str(last_version).replace('v', ''):
        console.print(f"[red]New version available: [yellow]{last_version}")
    else:
        console.print("[green]Everything is up to date")

    console.print("\n")
    console.print(f"[red]{__title__} has been downloaded [yellow]{total_download_count} [red]times, but only [yellow]{percentual_stars}% [red]of users have starred it.\n\
[cyan]Help the repository grow today by leaving a [yellow]star [cyan]and [yellow]sharing [cyan]it with others online!")

    time.sleep(3)
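The "percentage of stars" shown by `update()` is a simple ratio of stargazers to release-asset downloads, guarded against empty counts. A minimal sketch of just that computation:

```python
def stars_percentage(stargazers_count: int, total_download_count: int) -> float:
    # Mirrors update(): share of downloaders who starred the repo,
    # returning 0 when either count is missing.
    if total_download_count > 0 and stargazers_count > 0:
        return round(stargazers_count / total_download_count * 100, 2)
    return 0.0

print(stars_percentage(333, 10000))  # 3.33
```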
@ -1,5 +0,0 @@
__title__ = 'StreamingCommunity'
__version__ = '1.9.2'
__author__ = 'Lovi-0'
__description__ = 'A command-line program to download films'
__copyright__ = 'Copyright 2024'
@ -1,12 +1,5 @@
# 03.03.24

import os
import sys
import logging
import importlib


# External library
from rich.console import Console
from rich.table import Table
from rich.prompt import Prompt
@ -16,13 +9,15 @@ from typing import Dict, List, Any

# Internal utilities
from .message import start_message
from .call_stack import get_call_stack


class TVShowManager:
    def __init__(self):
        """
        Initialize TVShowManager and the column layout used to render results.
        """
        self.console = Console()
        self.tv_shows: List[Dict[str, Any]] = []  # List to store TV show data as dictionaries
@ -85,6 +80,7 @@ class TVShowManager:

        self.console.print(table)  # Use self.console.print instead of print


    def run(self, force_int_input: bool = False, max_int_input: int = 0) -> str:
        """
        Run the TV show manager application.
@ -105,16 +101,9 @@ class TVShowManager:
            # Display table
            self.display_data(self.tv_shows[self.slice_start:self.slice_end])

            # Find the research function from the call stack
            research_func = None
            for reverse_fun in get_call_stack():
                if reverse_fun['function'] == 'search' and reverse_fun['script'] == '__init__.py':
                    research_func = reverse_fun
                    logging.info(f"Found research_func: {research_func}")

            # Handle user input for loading more items or quitting
            if self.slice_end < total_items:
                self.console.print(f"\n\n[yellow][INFO] [green]Press [red]Enter [green]for next page, [red]'q' [green]to quit, or [red]'back' [green]to search.")
                self.console.print(f"\n\n[yellow][INFO] [green]Press [red]Enter [green]for next page, or [red]'q' [green]to quit.")

                if not force_int_input:
                    key = Prompt.ask(
@ -124,7 +113,7 @@ class TVShowManager:

                else:
                    choices = [str(i) for i in range(0, max_int_input)]
                    choices.extend(["q", "", "back"])
                    choices.extend(["q", ""])

                    key = Prompt.ask("[cyan]Insert media [red]index", choices=choices, show_choices=False)
                    last_command = key
@ -138,62 +127,22 @@ class TVShowManager:
                    if self.slice_end > total_items:
                        self.slice_end = total_items

                elif key.lower() == "back" and research_func:
                    try:
                        # Find the project root directory
                        current_path = research_func['folder']
                        while not os.path.exists(os.path.join(current_path, 'StreamingCommunity')):
                            current_path = os.path.dirname(current_path)

                        # Add the project root to the Python path
                        project_root = current_path

                        if project_root not in sys.path:
                            sys.path.insert(0, project_root)

                        # Import using a full absolute import
                        module_path = 'StreamingCommunity.Api.Site.streamingcommunity'

                        # Import the module
                        module = importlib.import_module(module_path)

                        # Get the search function
                        search_func = getattr(module, 'media_search_manager')

                        # Ask for the search string
                        string_to_search = Prompt.ask(f"\n[purple]Insert word to search in [red]{research_func['folder_base']}").strip()

                        # Call the search function with the search string
                        search_func(string_to_search)

                    except Exception as e:
                        self.console.print(f"[red]Error during search: {e}")

                        # Print a detailed traceback
                        import traceback
                        traceback.print_exc()

                    # Optionally remove the path to clean up
                    if project_root in sys.path:
                        sys.path.remove(project_root)

                else:
                    break

            else:
                # Last slice; ensure all remaining items are shown
                self.console.print(f"\n\n[yellow][INFO] [green]You've reached the end. [red]Enter [green]for first page, [red]'q' [green]to quit, or [red]'back' [green]to search.")
                self.console.print(f"\n\n[yellow][INFO] [red]You've reached the end. [green]Press [red]Enter [green]for next page, or [red]'q' [green]to quit.")
                if not force_int_input:
                    key = Prompt.ask(
                        "\n[cyan]Insert media index [yellow](e.g., 1), [red]* [cyan]to download all media, "
                        "[yellow](e.g., 1-2) [cyan]for a range of media, or [yellow](e.g., 3-*) [cyan]to download from a specific index to the end"
                    )

                else:
                    choices = [str(i) for i in range(0, max_int_input)]
                    choices.extend(["q", "", "back"])
                    choices.extend(["q", ""])

                    key = Prompt.ask("[cyan]Insert media [red]index", choices=choices, show_choices=False)
                    last_command = key
@ -205,51 +154,10 @@ class TVShowManager:
                self.slice_start = 0
                self.slice_end = self.step

                elif key.lower() == "back" and research_func:
                    try:
                        # Find the project root directory
                        current_path = research_func['folder']
                        while not os.path.exists(os.path.join(current_path, 'StreamingCommunity')):
                            current_path = os.path.dirname(current_path)

                        # Add the project root to the Python path
                        project_root = current_path

                        if project_root not in sys.path:
                            sys.path.insert(0, project_root)

                        # Import using a full absolute import
                        module_path = 'StreamingCommunity.Api.Site.streamingcommunity'

                        # Import the module
                        module = importlib.import_module(module_path)

                        # Get the search function
                        search_func = getattr(module, 'media_search_manager')

                        # Ask for the search string
                        string_to_search = Prompt.ask(f"\n[purple]Insert word to search in [red]{research_func['folder_base']}").strip()

                        # Call the search function with the search string
                        search_func(string_to_search)

                    except Exception as e:
                        self.console.print(f"[red]Error during search: {e}")

                        # Print a detailed traceback
                        import traceback
                        traceback.print_exc()

                    # Optionally remove the path to clean up
                    if project_root in sys.path:
                        sys.path.remove(project_root)

                else:
                    break

        return last_command

    def clear(self):
        self.tv_shows = []
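TVShowManager pages through results by advancing `slice_start`/`slice_end` by a fixed `step` each time the user presses Enter. A minimal generator sketch of that slicing pattern (not part of the repo, just an illustration):

```python
def paginate(items, step=5):
    # Yield successive slices of `items`, the same way TVShowManager
    # advances slice_start/slice_end by `step` on each Enter press.
    start = 0
    while start < len(items):
        yield items[start:start + step]
        start += step

pages = list(paginate(list(range(12)), step=5))
print(pages)  # [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9], [10, 11]]
```

The final page is simply shorter, which is why the class clamps `slice_end` to `total_items` on the last slice.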
@ -1,201 +0,0 @@
# 10.12.23

import os
import sys
import time
import glob
import logging
import platform
import argparse
import importlib
from typing import Callable


# Internal utilities
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Util.console import console, msg
from StreamingCommunity.Util._jsonConfig import config_manager
from StreamingCommunity.Upload.update import update as git_update
from StreamingCommunity.Util.os import os_summary
from StreamingCommunity.Lib.TMBD import tmdb
from StreamingCommunity.Util.logger import Logger


# Config
CLOSE_CONSOLE = config_manager.get_bool('DEFAULT', 'not_close')
SHOW_TRENDING = config_manager.get_bool('DEFAULT', 'show_trending')


def run_function(func: Callable[..., None], close_console: bool = False) -> None:
    """
    Run a given function once, or repeatedly so the console stays open.

    Parameters:
        func (Callable[..., None]): The function to run.
        close_console (bool, optional): When True, re-run the function in an endless
            loop so the console does not close after a single run. Defaults to False.
    """
    if close_console:
        while True:
            func()
    else:
        func()


def load_search_functions():
    modules = []
    loaded_functions = {}

    # Find the api home directory
    if getattr(sys, 'frozen', False):  # PyInstaller mode
        base_path = os.path.join(sys._MEIPASS, "StreamingCommunity")
    else:
        base_path = os.path.dirname(__file__)

    api_dir = os.path.join(base_path, 'Api', 'Site')
    init_files = glob.glob(os.path.join(api_dir, '*', '__init__.py'))

    # Retrieve modules and their indices
    for init_file in init_files:

        # Get folder name as module name
        module_name = os.path.basename(os.path.dirname(init_file))
        logging.info(f"Load module name: {module_name}")

        try:
            # Dynamically import the module
            mod = importlib.import_module(f'StreamingCommunity.Api.Site.{module_name}')

            # Get 'indice' from the module
            indice = getattr(mod, 'indice', 0)
            is_deprecate = bool(getattr(mod, '_deprecate', True))
            use_for = getattr(mod, '_useFor', 'other')

            if not is_deprecate:
                modules.append((module_name, indice, use_for))

        except Exception as e:
            console.print(f"[red]Failed to import module {module_name}: {str(e)}")

    # Sort modules by 'indice'
    modules.sort(key=lambda x: x[1])

    # Load search functions in the sorted order
    for module_name, _, use_for in modules:

        # Construct a unique alias for the module
        module_alias = f'{module_name}_search'

        try:
            # Dynamically import the module
            mod = importlib.import_module(f'StreamingCommunity.Api.Site.{module_name}')

            # Get the search function from the module (assumed to be named 'search' in __init__.py)
            search_function = getattr(mod, 'search')

            # Add the function to the loaded functions dictionary
            loaded_functions[module_alias] = (search_function, use_for)

        except Exception as e:
            console.print(f"[red]Failed to load search function from module {module_name}: {str(e)}")

    return loaded_functions


def initialize():

    # Show the start message
    start_message()

    # Get system info
    os_summary.get_system_summary()

    # Set terminal size for Windows 7
    if platform.system() == "Windows" and "7" in platform.version():
        os.system('mode 120, 40')

    # Check Python version
    if sys.version_info < (3, 7):
        console.log("[red]Install Python 3.7 or newer.")
        sys.exit(0)

    """# Attempting GitHub update
    try:
        git_update()
        print()
    except:
        console.log("[red]Error with loading github.")"""

    # Show trending films and series
    if SHOW_TRENDING:
        tmdb.display_trending_films()
        print()
        tmdb.display_trending_tv_shows()
        print()


def main():

    start = time.time()

    # Create logger
    log_not = Logger()
    initialize()

    # Load search functions
    search_functions = load_search_functions()
    logging.info(f"Load module in: {time.time() - start} s")

    # Create dynamic argument parser
    parser = argparse.ArgumentParser(description='Script to download films and series from the internet.')

    color_map = {
        "anime": "red",
        "film_serie": "yellow",
        "film": "blue",
        "serie": "green",
        "other": "white"
    }

    # Add dynamic arguments based on loaded search modules
    for alias, (_, use_for) in search_functions.items():
        short_option = alias[:3].upper()
        long_option = alias
        parser.add_argument(f'-{short_option}', f'--{long_option}', action='store_true', help=f'Search for {alias.split("_")[0]} on streaming platforms.')

    # Parse command line arguments
    args = parser.parse_args()

    # Map command-line arguments to functions
    arg_to_function = {alias: func for alias, (func, _) in search_functions.items()}

    # Check which argument is provided and run the corresponding function
    for arg, func in arg_to_function.items():
        if getattr(args, arg):
            run_function(func)
            return

    # Map user input to functions
    input_to_function = {str(i): func for i, (alias, (func, _)) in enumerate(search_functions.items())}

    # Create dynamic prompt message and choices
    choice_labels = {str(i): (alias.split("_")[0].capitalize(), use_for) for i, (alias, (_, use_for)) in enumerate(search_functions.items())}

    # Display the category legend in a single line
    legend_text = " | ".join([f"[{color}]{category.capitalize()}[/{color}]" for category, color in color_map.items()])
    console.print(f"\n[bold green]Category Legend:[/bold green] {legend_text}")

    # Construct the prompt message with color-coded site names
    prompt_message = "[green]Insert category [white](" + ", ".join(
        [f"{key}: [{color_map[label[1]]}]{label[0]}[/{color_map[label[1]]}]" for key, label in choice_labels.items()]
    ) + "[white])"

    # Ask the user for input
    category = msg.ask(prompt_message, choices=list(choice_labels.keys()), default="0", show_choices=False, show_default=False)

    # Run the corresponding function based on user input
    if category in input_to_function:
        run_function(input_to_function[category])
    else:
        console.print("[red]Invalid category.")
        sys.exit(0)
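`load_search_functions` discovers site modules at runtime with `importlib.import_module` and pulls a `search` callable out of each with `getattr`. A minimal, self-contained sketch of that dynamic-loading pattern, demonstrated against a stdlib module rather than a site package:

```python
import importlib

def load_function(module_name: str, func_name: str):
    # Same import_module + getattr pattern load_search_functions uses
    # to obtain each site's `search` callable.
    mod = importlib.import_module(module_name)
    return getattr(mod, func_name)

sqrt = load_function("math", "sqrt")
print(sqrt(9))  # 3.0
```

Wrapping the lookup in try/except, as the real loader does, lets one broken site module fail without taking down the others.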
@ -1,23 +0,0 @@
# 23.06.24

# Fix import
import sys
import os
src_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..'))
sys.path.append(src_path)


# Import
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Util.logger import Logger
from StreamingCommunity.Lib.Downloader import HLS_Downloader


# Test
start_message()
logger = Logger()
print("Return: ", HLS_Downloader(
    output_filename="test.mp4",
    m3u8_index=""
).start())
@ -1,23 +0,0 @@
# 23.06.24

# Fix import
import sys
import os
src_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..'))
sys.path.append(src_path)


# Import
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Util.logger import Logger
from StreamingCommunity.Lib.Downloader import MP4_downloader


# Test
start_message()
logger = Logger()
print("Return: ", MP4_downloader(
    url="",
    path=r".\Video\undefined.mp4"
))
@ -1,25 +0,0 @@
# 23.06.24

# Fix import
import sys
import os
src_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..'))
sys.path.append(src_path)


# Import
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Util.logger import Logger
from StreamingCommunity.Lib.Downloader import TOR_downloader


# Test
start_message()
logger = Logger()
manager = TOR_downloader()

magnet_link = "magnet:?x"
manager.add_magnet_link(magnet_link)
manager.start_download()
manager.move_downloaded_files()
@ -1,40 +0,0 @@
# Fix import
import sys
import os
src_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..', '..'))
sys.path.append(src_path)


# Import
import json
from StreamingCommunity.Api.Player.Helper.Vixcloud.js_parser import JavaScriptParser
from StreamingCommunity.Api.Player.Helper.Vixcloud.util import WindowVideo, WindowParameter, StreamsCollection


# Data
script_text = '''
window.video = {"id":271977,"name":"Smile 2","filename":"Smile.2.2024.1080p.WEB-DL.DDP5.1.H.264-FHC.mkv","size":10779891,"quality":1080,"duration":7758,"views":0,"is_viewable":1,"status":"public","fps":24,"legacy":0,"folder_id":"301e469a-786f-493a-ad2b-302248aa2d23","created_at_diff":"4 giorni fa"};
window.streams = [{"name":"Server1","active":false,"url":"https:\\/\\/vixcloud.co\\/playlist\\/271977?b=1\\u0026ub=1"},{"name":"Server2","active":1,"url":"https:\\/\\/vixcloud.co\\/playlist\\/271977?b=1\\u0026ab=1"}];
window.masterPlaylist = {
    params: {
        'token': '890a3e7db7f1c8213a11007947362b21',
        'expires': '1737812156',
    },
    url: 'https://vixcloud.co/playlist/271977?b=1',
}
window.canPlayFHD = true
'''


# Test
converter = JavaScriptParser.parse(js_string=str(script_text))
json_string = json.dumps(converter, indent=2)
print("Converted json: ", json_string, "\n")

window_video = WindowVideo(converter.get('video'))
window_streams = StreamsCollection(converter.get('streams'))
window_parameter = WindowParameter(converter.get('masterPlaylist'))

print(window_video)
print(window_streams)
print(window_parameter)
@ -1,22 +0,0 @@
# 23.11.24

# Fix import
import sys
import os
src_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..'))
sys.path.append(src_path)


# Import
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Util.logger import Logger
from StreamingCommunity.Api.Player.maxstream import VideoSource


# Test
start_message()
logger = Logger()
video_source = VideoSource("https://cb01new.biz/what-the-waters-left-behind-scars-hd-2023")
master_playlist = video_source.get_playlist()
print(master_playlist)
@ -1,22 +0,0 @@
# 23.11.24

# Fix import
import sys
import os
src_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..'))
sys.path.append(src_path)


# Import
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Util.logger import Logger
from StreamingCommunity.Api.Player.supervideo import VideoSource


# Test
start_message()
logger = Logger()
video_source = VideoSource("https://supervideo.tv/78np7kfiyklu")
master_playlist = video_source.get_playlist()
print(master_playlist)
@ -1,26 +0,0 @@
# 23.11.24

# Fix import
import sys
import os
src_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..'))
sys.path.append(src_path)


# Import
from StreamingCommunity.Util.message import start_message
from StreamingCommunity.Util.logger import Logger
from StreamingCommunity.Api.Player.vixcloud import VideoSource


# Test
start_message()
logger = Logger()
video_source = VideoSource("streamingcommunity")
video_source.setup("1171b9202c71489193f5fed2bc7b43bb", "computer", 778)
video_source.get_iframe()
video_source.get_content()
master_playlist = video_source.get_playlist()

print(master_playlist)
@ -1,116 +0,0 @@
# 12.11.24

# Fix import
import os
import sys
src_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
sys.path.append(src_path)


# Other
import glob
import logging
import importlib
from rich.console import Console


# Other import
from StreamingCommunity.Api.Template.Class.SearchType import MediaManager


# Variable
console = Console()


def load_search_functions():
    modules = []
    loaded_functions = {}

    # Traverse the Api directory
    api_dir = os.path.join(os.path.dirname(__file__), '..', 'StreamingCommunity', 'Api', 'Site')
    init_files = glob.glob(os.path.join(api_dir, '*', '__init__.py'))

    logging.info(f"Base folder path: {api_dir}")
    logging.info(f"Api module path: {init_files}")

    # Retrieve modules and their indices
    for init_file in init_files:

        # Get folder name as module name
        module_name = os.path.basename(os.path.dirname(init_file))
        logging.info(f"Load module name: {module_name}")

        try:
            # Dynamically import the module
            mod = importlib.import_module(f'StreamingCommunity.Api.Site.{module_name}')

            # Get 'indice' from the module
            indice = getattr(mod, 'indice', 0)
            is_deprecate = bool(getattr(mod, '_deprecate', True))
            use_for = getattr(mod, '_useFor', 'other')

            if not is_deprecate:
                modules.append((module_name, indice, use_for))

        except Exception as e:
            console.print(f"[red]Failed to import module {module_name}: {str(e)}")

    # Sort modules by 'indice'
    modules.sort(key=lambda x: x[1])

    # Load search functions in the sorted order
    for module_name, _, use_for in modules:

        # Construct a unique alias for the module
        module_alias = f'{module_name}_search'
        logging.info(f"Module alias: {module_alias}")

        try:
            # Dynamically import the module
            mod = importlib.import_module(f'StreamingCommunity.Api.Site.{module_name}')

            # Get the search function from the module (assumed to be named 'search' in __init__.py)
            search_function = getattr(mod, 'search')

            # Add the function to the loaded functions dictionary
            loaded_functions[module_alias] = (search_function, use_for)

        except Exception as e:
            console.print(f"[red]Failed to load search function from module {module_name}: {str(e)}")

    return loaded_functions


def search_all_sites(loaded_functions, search_string, max_sites=10):
    total_len_database = 0
    site_count = 0

    for module_alias, (search_function, use_for) in loaded_functions.items():
        if max_sites is not None and site_count >= max_sites:
            break

        console.print(f"\n[blue]Searching in module: {module_alias} [white](Use for: {use_for})")

        try:
            database: MediaManager = search_function(search_string, get_onylDatabase=True)
            len_database = len(database.media_list)

            for element in database.media_list:
                print(element.__dict__)

            console.print(f"[green]Database length for {module_alias}: {len_database}")
            total_len_database += len_database
            site_count += 1

        except Exception as e:
            console.print(f"[red]Error while executing search function for {module_alias}: {str(e)}")

    return total_len_database


# Main
search_string = "cars"
loaded_functions = load_search_functions()

total_len = search_all_sites(loaded_functions, search_string)
console.print(f"\n[cyan]Total number of results from all sites: {total_len}")
7 client/dashboard/.eslintrc.json Normal file
@ -0,0 +1,7 @@
{
  "extends": ["react-app", "plugin:react/recommended"],
  "rules": {
    // Add your custom rules here
  }
}
18 client/dashboard/Dockerfile Normal file
@ -0,0 +1,18 @@
# Use a Node.js base image
FROM node:14-alpine

# Set the working directory
WORKDIR /app

# Copy the required files
COPY . .

# Install dependencies and build the React frontend
RUN npm install && npm run build && npm install -g serve

# Expose the port the React server will listen on
EXPOSE 3000

# Command to start the React server
CMD ["serve", "-s", "build", "-l", "3000"]
19762  client/dashboard/package-lock.json  (generated)  Normal file
File diff suppressed because it is too large
57  client/dashboard/package.json  Normal file
@@ -0,0 +1,57 @@
{
  "name": "dashboard",
  "version": "0.1.0",
  "private": true,
  "dependencies": {
    "@fortawesome/fontawesome-svg-core": "^6.5.2",
    "@fortawesome/free-brands-svg-icons": "^6.5.2",
    "@fortawesome/free-regular-svg-icons": "^6.5.2",
    "@fortawesome/free-solid-svg-icons": "^6.5.2",
    "@fortawesome/react-fontawesome": "^0.2.0",
    "@testing-library/jest-dom": "^5.17.0",
    "@testing-library/react": "^13.4.0",
    "@testing-library/user-event": "^13.5.0",
    "axios": "^1.7.9",
    "bootstrap": "^5.3.3",
    "dashboard": "file:",
    "prop-types": "^15.8.1",
    "react": "^18.2.0",
    "react-bootstrap": "^2.10.6",
    "react-bootstrap-typeahead": "^6.3.2",
    "react-data-table-component": "^7.6.2",
    "react-dom": "^18.2.0",
    "react-icons": "^5.4.0",
    "react-router-dom": "^7.0.2",
    "react-scripts": "5.0.1",
    "react-toastify": "^10.0.4",
    "styled-components": "^6.1.8",
    "web-vitals": "^2.1.4"
  },
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test",
    "eject": "react-scripts eject"
  },
  "eslintConfig": {
    "extends": [
      "react-app",
      "react-app/jest"
    ]
  },
  "browserslist": {
    "production": [
      ">0.2%",
      "not dead",
      "not op_mini all"
    ],
    "development": [
      "last 1 chrome version",
      "last 1 firefox version",
      "last 1 safari version"
    ]
  },
  "devDependencies": {
    "@babel/plugin-proposal-private-property-in-object": "^7.21.11"
  }
}
BIN  client/dashboard/public/favicon.ico  Normal file
Binary file not shown (size: 3.8 KiB).
43  client/dashboard/public/index.html  Normal file
@@ -0,0 +1,43 @@
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <link rel="icon" href="%PUBLIC_URL%/favicon.ico" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <meta name="theme-color" content="#000000" />
    <meta
      name="description"
      content="Web site created using create-react-app"
    />
    <link rel="apple-touch-icon" href="%PUBLIC_URL%/logo192.png" />
    <!--
      manifest.json provides metadata used when your web app is installed on a
      user's mobile device or desktop. See https://developers.google.com/web/fundamentals/web-app-manifest/
    -->
    <link rel="manifest" href="%PUBLIC_URL%/manifest.json" />
    <!--
      Notice the use of %PUBLIC_URL% in the tags above.
      It will be replaced with the URL of the `public` folder during the build.
      Only files inside the `public` folder can be referenced from the HTML.

      Unlike "/favicon.ico" or "favicon.ico", "%PUBLIC_URL%/favicon.ico" will
      work correctly both with client-side routing and a non-root public URL.
      Learn how to configure a non-root public URL by running `npm run build`.
    -->
    <title>React App</title>
  </head>
  <body>
    <noscript>You need to enable JavaScript to run this app.</noscript>
    <div id="root"></div>
    <!--
      This HTML file is a template.
      If you open it directly in the browser, you will see an empty page.

      You can add webfonts, meta tags, or analytics to this file.
      The build step will place the bundled scripts into the <body> tag.

      To begin the development, run `npm start` or `yarn start`.
      To create a production bundle, use `npm run build` or `yarn build`.
    -->
  </body>
</html>
BIN  client/dashboard/public/logo192.png  Normal file
Binary file not shown (size: 5.2 KiB).
BIN  client/dashboard/public/logo512.png  Normal file
Binary file not shown (size: 9.4 KiB).
25  client/dashboard/public/manifest.json  Normal file
@@ -0,0 +1,25 @@
{
  "short_name": "React App",
  "name": "Create React App Sample",
  "icons": [
    {
      "src": "favicon.ico",
      "sizes": "64x64 32x32 24x24 16x16",
      "type": "image/x-icon"
    },
    {
      "src": "logo192.png",
      "type": "image/png",
      "sizes": "192x192"
    },
    {
      "src": "logo512.png",
      "type": "image/png",
      "sizes": "512x512"
    }
  ],
  "start_url": ".",
  "display": "standalone",
  "theme_color": "#000000",
  "background_color": "#ffffff"
}
3  client/dashboard/public/robots.txt  Normal file
@@ -0,0 +1,3 @@
# https://www.robotstxt.org/robotstxt.html
User-agent: *
Disallow:
37  client/dashboard/src/App.css  Normal file
@@ -0,0 +1,37 @@
.search-results-container {
  padding: 20px;
}

.results-grid {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
  gap: 20px;
}

.search-result-item {
  cursor: pointer;
  transition: transform 0.3s ease;
  border-radius: 10px;
  overflow: hidden;
  box-shadow: 0 4px 6px rgba(0,0,0,0.1);
}

.search-result-item:hover {
  transform: scale(1.05);
}

.result-image {
  width: 100%;
  height: 300px;
  object-fit: cover;
}

.result-info {
  padding: 10px;
  background-color: #f4f4f4;
}

.result-info h2 {
  margin: 0 0 10px 0;
  font-size: 1.2rem;
}
48  client/dashboard/src/App.js  Normal file
@@ -0,0 +1,48 @@
import React, { useState, useEffect } from 'react';
import { BrowserRouter as Router, Routes, Route } from 'react-router-dom';
import 'bootstrap/dist/css/bootstrap.min.css';

import Navbar from './components/Navbar.js';
import Dashboard from './components/Dashboard.js';
import SearchResults from './components/SearchResult.js';
import TitleDetail from './components/TitleDetail.js';
import Watchlist from './components/Watchlist.js';
import Downloads from './components/Downloads.js';

function App() {
  const [theme, setTheme] = useState('dark'); // Default to dark mode

  // Toggle the theme
  const toggleTheme = () => {
    const newTheme = theme === 'light' ? 'dark' : 'light';
    setTheme(newTheme);
    localStorage.setItem('app-theme', newTheme); // Save user preference
  };

  // Load the saved theme on mount
  useEffect(() => {
    const savedTheme = localStorage.getItem('app-theme') || 'dark'; // Default to dark if no saved theme
    setTheme(savedTheme);
    document.documentElement.setAttribute('data-theme', savedTheme); // Apply theme globally
  }, []);

  // Update the theme dynamically
  useEffect(() => {
    document.documentElement.setAttribute('data-theme', theme); // Apply theme when changed
  }, [theme]);

  return (
    <Router>
      <Navbar toggleTheme={toggleTheme} theme={theme} />
      <Routes>
        <Route path="/" element={<Dashboard />} />
        <Route path="/search" element={<SearchResults />} />
        <Route path="/title/:id" element={<TitleDetail />} />
        <Route path="/watchlist" element={<Watchlist />} />
        <Route path="/downloads" element={<Downloads />} />
      </Routes>
    </Router>
  );
}

export default App;
6  client/dashboard/src/components/ApiUrl.js  Normal file
@@ -0,0 +1,6 @@
export const API_BASE_URL = "http://127.0.0.1:1234";

export const API_URL = `${API_BASE_URL}/api`;
export const SERVER_WATCHLIST_URL = `${API_BASE_URL}/server/watchlist`;
export const SERVER_PATH_URL = `${API_BASE_URL}/server/path`;
export const SERVER_DELETE_URL = `${API_BASE_URL}/server/delete`;
41  client/dashboard/src/components/Dashboard.js  Normal file
@@ -0,0 +1,41 @@
import React, { useState, useEffect } from 'react';
import axios from 'axios';
import { Container, Button, Form, InputGroup } from 'react-bootstrap';

import SearchBar from './SearchBar.js';

const API_BASE_URL = "http://127.0.0.1:1234";

const Dashboard = () => {
  const [items, setItems] = useState([]);

  useEffect(() => {
    fetchItems();
  }, []);

  const fetchItems = async (filter = '') => {
    try {
      const response = await axios.get(`${API_BASE_URL}/api/items?filter=${filter}`);
      setItems(response.data);
    } catch (error) {
      console.error("Error fetching items:", error);
    }
  };

  const handleSearch = (query) => {
    fetchItems(query);
  };

  return (
    <Container fluid className="p-4">
      <h1 className="mb-4">Dashboard</h1>

      <div className="d-flex justify-content-between align-items-center mb-4">
        <SearchBar onSearch={handleSearch} />
      </div>

    </Container>
  );
};

export default Dashboard;
195  client/dashboard/src/components/Downloads.js  Normal file
@@ -0,0 +1,195 @@
import React, { useState, useEffect } from 'react';
import axios from 'axios';
import { Container, Row, Col, Card, Button, Badge, Modal } from 'react-bootstrap';
import { FaTrash, FaPlay } from 'react-icons/fa';
import { Link } from 'react-router-dom';

const API_BASE_URL = "http://127.0.0.1:1234";

const Downloads = () => {
  const [downloads, setDownloads] = useState([]);
  const [loading, setLoading] = useState(true);
  const [showPlayer, setShowPlayer] = useState(false);
  const [currentVideo, setCurrentVideo] = useState("");

  // Fetch all downloads
  const fetchDownloads = async () => {
    try {
      const response = await axios.get(`${API_BASE_URL}/downloads`);
      setDownloads(response.data);
      setLoading(false);
    } catch (error) {
      console.error("Error fetching downloads:", error);
      setLoading(false);
    }
  };

  // Delete a TV episode
  const handleDeleteEpisode = async (id, season, episode) => {
    try {
      await axios.delete(`${API_BASE_URL}/deleteEpisode`, {
        params: { id, season, episode }
      });
      fetchDownloads(); // Refresh the list
    } catch (error) {
      console.error("Error deleting episode:", error);
    }
  };

  // Delete a movie
  const handleDeleteMovie = async (id) => {
    try {
      await axios.delete(`${API_BASE_URL}/deleteMovie`, {
        params: { id }
      });
      fetchDownloads(); // Refresh the list
    } catch (error) {
      console.error("Error deleting movie:", error);
    }
  };

  // Watch video
  const handleWatchVideo = (videoPath) => {
    setCurrentVideo(videoPath);
    setShowPlayer(true);
  };

  // Initial fetch of downloads
  useEffect(() => {
    fetchDownloads();
  }, []);

  if (loading) {
    return <div className="text-center mt-5">Loading...</div>;
  }

  // Separate movies and TV shows
  const movies = downloads.filter(item => item.type === 'movie');
  const tvShows = downloads.filter(item => item.type === 'tv');

  // Group TV shows by slug
  const groupedTvShows = tvShows.reduce((acc, show) => {
    if (!acc[show.slug]) {
      acc[show.slug] = [];
    }
    acc[show.slug].push(show);
    return acc;
  }, {});

  return (
    <Container fluid className="p-0">
      <Container className="mt-4">
        <h2 className="mb-4">My Downloads</h2>

        {/* Movies Section */}
        <h3 className="mt-4 mb-3">Movies</h3>
        {movies.length === 0 ? (
          <p>No movies downloaded.</p>
        ) : (
          <Row xs={1} md={3} className="g-4">
            {movies.map((movie) => (
              <Col key={movie.id}>
                <Card>
                  <Card.Body>
                    <div className="d-flex justify-content-between align-items-start">
                      <Card.Title>{movie.slug.replace(/-/g, ' ')}</Card.Title>
                      <Button
                        variant="outline-danger"
                        size="sm"
                        onClick={() => handleDeleteMovie(movie.id)}
                      >
                        <FaTrash />
                      </Button>
                    </div>
                    <Card.Text>
                      <small>Downloaded on: {new Date(movie.timestamp).toLocaleString()}</small>
                    </Card.Text>
                    <Button
                      variant="primary"
                      size="sm"
                      onClick={() => handleWatchVideo(movie.path)}
                    >
                      <FaPlay className="me-2" /> Watch
                    </Button>
                    <Link
                      to={`/title/${movie.slug}`}
                      state={{ url: movie.slug }}
                      className="btn btn-secondary btn-sm ms-2"
                    >
                      View Details
                    </Link>
                  </Card.Body>
                </Card>
              </Col>
            ))}
          </Row>
        )}

        {/* TV Shows Section */}
        <h3 className="mt-4 mb-3">TV Shows</h3>
        {Object.keys(groupedTvShows).length === 0 ? (
          <p>No TV shows downloaded.</p>
        ) : (
          Object.entries(groupedTvShows).map(([slug, episodes]) => (
            <div key={slug} className="mb-4">
              <h4>{slug.replace(/-/g, ' ')}</h4>
              <Row xs={1} md={3} className="g-4">
                {episodes.map((episode) => (
                  <Col key={`${episode.n_s}-${episode.n_ep}`}>
                    <Card>
                      <Card.Body>
                        <div className="d-flex justify-content-between align-items-start">
                          <Card.Title>
                            S{episode.n_s} E{episode.n_ep}
                          </Card.Title>
                          <Button
                            variant="outline-danger"
                            size="sm"
                            onClick={() => handleDeleteEpisode(episode.id, episode.n_s, episode.n_ep)}
                          >
                            <FaTrash />
                          </Button>
                        </div>
                        <Card.Text>
                          <small>Downloaded on: {new Date(episode.timestamp).toLocaleString()}</small>
                        </Card.Text>
                        <Button
                          variant="primary"
                          size="sm"
                          onClick={() => handleWatchVideo(episode.path)}
                        >
                          <FaPlay className="me-2" /> Watch
                        </Button>
                        <Link
                          to={`/title/${slug}`}
                          state={{ url: slug }}
                          className="btn btn-secondary btn-sm ms-2"
                        >
                          View Details
                        </Link>
                      </Card.Body>
                    </Card>
                  </Col>
                ))}
              </Row>
            </div>
          ))
        )}
      </Container>

      {/* Modal Video Player */}
      <Modal show={showPlayer} onHide={() => setShowPlayer(false)} size="lg" centered>
        <Modal.Body>
          <video
            src={`http://127.0.0.1:1234/downloaded/${currentVideo}`}
            controls
            autoPlay
            style={{ width: '100%' }}
          />
        </Modal.Body>
      </Modal>
    </Container>
  );
};

export default Downloads;
41  client/dashboard/src/components/Navbar.js  Normal file
@@ -0,0 +1,41 @@
import React from 'react';
import PropTypes from 'prop-types';
import { Link } from 'react-router-dom';
import 'bootstrap/dist/css/bootstrap.min.css';

const Navbar = ({ toggleTheme, theme }) => {
  return (
    <nav className={`navbar navbar-expand-lg ${theme === 'dark' ? 'navbar-dark bg-dark' : 'navbar-light bg-light'}`}>
      <div className="container-fluid">
        <Link className="navbar-brand" to="/">Home</Link>
        <button className="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#navbarNav" aria-controls="navbarNav" aria-expanded="false" aria-label="Toggle navigation">
          <span className="navbar-toggler-icon"></span>
        </button>
        <div className="collapse navbar-collapse" id="navbarNav">
          <ul className="navbar-nav">
            <li className="nav-item">
              <Link className="nav-link" to="/watchlist">Watchlist</Link>
            </li>
            <li className="nav-item">
              <Link className="nav-link" to="/downloads">Downloads</Link>
            </li>
          </ul>
          <button
            className="btn btn-outline-secondary ms-auto"
            onClick={toggleTheme}
          >
            {theme === 'dark' ? '☀️ Light Mode' : '🌙 Dark Mode'}
          </button>
        </div>
      </div>
    </nav>
  );
};

// Prop validation
Navbar.propTypes = {
  toggleTheme: PropTypes.func.isRequired,
  theme: PropTypes.oneOf(['light', 'dark']).isRequired,
};

export default Navbar;
48  client/dashboard/src/components/SearchBar.js  Normal file
@@ -0,0 +1,48 @@
import React, { useState } from 'react';
import PropTypes from 'prop-types'; // Add this import
import { useNavigate } from 'react-router-dom';
import { Form, InputGroup, Button } from 'react-bootstrap';
import { FaSearch } from 'react-icons/fa';

const SearchBar = ({ onSearch }) => {
  const [searchQuery, setSearchQuery] = useState('');
  const navigate = useNavigate();

  const handleSearch = (e) => {
    e.preventDefault();
    if (searchQuery.trim()) {
      // If onSearch prop is provided, call it
      if (onSearch) {
        onSearch(searchQuery);
      }

      // Navigate to search results page
      navigate(`/search?q=${encodeURIComponent(searchQuery)}`);
    }
  };

  return (
    <Form onSubmit={handleSearch} className="w-100">
      <InputGroup>
        <Form.Control
          type="text"
          placeholder="Search movies or TV shows..."
          value={searchQuery}
          onChange={(e) => setSearchQuery(e.target.value)}
        />
        <Button type="submit" variant="primary">
          <FaSearch />
        </Button>
      </InputGroup>
    </Form>
  );
};

// Add PropTypes validation
SearchBar.propTypes = {
  onSearch: PropTypes.func // If onSearch is optional
  // or
  // onSearch: PropTypes.func.isRequired // If onSearch is required
};

export default SearchBar;
95  client/dashboard/src/components/SearchResult.js  Normal file
@@ -0,0 +1,95 @@
import React, { useState, useEffect } from 'react';
import { useLocation, useNavigate } from 'react-router-dom';
import axios from 'axios';
import { Container, Row, Col, Card, Spinner } from 'react-bootstrap';

import SearchBar from './SearchBar.js';

const API_BASE_URL = "http://127.0.0.1:1234";

const SearchResults = () => {
  const [results, setResults] = useState([]);
  const [loading, setLoading] = useState(true);
  const location = useLocation();
  const navigate = useNavigate();

  useEffect(() => {
    const searchParams = new URLSearchParams(location.search);
    const query = searchParams.get('q');

    const fetchSearchResults = async () => {
      try {
        setLoading(true);
        const response = await axios.get(`${API_BASE_URL}/api/search`, {
          params: { q: query }
        });
        setResults(response.data);
        setLoading(false);
      } catch (error) {
        console.error("Error fetching search results:", error);
        setLoading(false);
      }
    };

    if (query) {
      fetchSearchResults();
    }
  }, [location.search]);

  const handleItemClick = (item) => {
    navigate(`/title/${item.id}-${item.slug}`, {
      state: {
        url: item.url // Pass the full URL to the TitleDetail component
      }
    });
  };

  return (
    <Container fluid className="p-4">
      <div className="mb-4">
        <SearchBar />
      </div>

      <h2 className="mb-4">Search Results</h2>

      {loading ? (
        <div className="text-center">
          <Spinner animation="border" role="status">
            <span className="visually-hidden">Loading...</span>
          </Spinner>
        </div>
      ) : (
        <Row xs={2} md={4} lg={6} className="g-4">
          {results.map((item) => (
            <Col key={item.id}>
              <Card
                className="h-100 hover-zoom"
                onClick={() => handleItemClick(item)}
                style={{ cursor: 'pointer' }}
              >
                <Card.Img
                  variant="top"
                  src={item.images.poster || item.images.cover}
                  alt={item.name}
                  style={{
                    height: '300px',
                    objectFit: 'cover'
                  }}
                />
                <Card.Body>
                  <Card.Title>{item.name}</Card.Title>
                  <Card.Text>
                    {item.year} • {item.type === 'tv' ? 'TV Series' : 'Movie'}
                    {item.type === 'tv' && ` • ${item.seasons_count} Seasons`}
                  </Card.Text>
                </Card.Body>
              </Card>
            </Col>
          ))}
        </Row>
      )}
    </Container>
  );
};

export default SearchResults;
407  client/dashboard/src/components/TitleDetail.js  Normal file
@@ -0,0 +1,407 @@
import React, { useState, useEffect } from 'react';
import { useLocation } from 'react-router-dom';
import axios from 'axios';
import { Container, Row, Col, Image, Button, Dropdown, Modal, Alert } from 'react-bootstrap';
import { FaDownload, FaPlay, FaPlus, FaTrash } from 'react-icons/fa';

import SearchBar from './SearchBar.js';

const API_BASE_URL = "http://127.0.0.1:1234";

const TitleDetail = () => {
  const [titleDetails, setTitleDetails] = useState(null);
  const [loading, setLoading] = useState(true);
  const [selectedSeason, setSelectedSeason] = useState(1);
  const [episodes, setEpisodes] = useState([]);
  const [hoveredEpisode, setHoveredEpisode] = useState(null);
  const [isInWatchlist, setIsInWatchlist] = useState(false);
  const [downloadStatus, setDownloadStatus] = useState({});
  const [showPlayer, setShowPlayer] = useState(false);
  const [currentVideo, setCurrentVideo] = useState("");
  const location = useLocation();

  useEffect(() => {
    const fetchTitleDetails = async () => {
      try {
        setLoading(true);
        const titleUrl = location.state?.url || location.pathname.split('/title/')[1];

        // Fetch title information
        const response = await axios.get(`${API_BASE_URL}/api/getInfo`, {
          params: { url: titleUrl }
        });

        const titleData = response.data;
        setTitleDetails(titleData);

        // Check download status
        await checkDownloadStatus(titleData);

        // Check watchlist status
        await checkWatchlistStatus(titleData.slug);

        // For TV shows, fetch first season episodes directly
        if (titleData.type === 'tv') {
          setEpisodes(titleData.episodes || []);
        }

        setLoading(false);
      } catch (error) {
        console.error("Error fetching title details:", error);
        setLoading(false);
      }
    };

    fetchTitleDetails();
  }, [location]);

  // Check if the movie/series is already downloaded
  const checkDownloadStatus = async (titleData) => {
    try {
      if (titleData.type === 'movie') {
        const response = await axios.get(`${API_BASE_URL}/downloads`);
        const downloadedMovie = response.data.find(
          download => download.type === 'movie' && download.slug === titleData.slug
        );
        setDownloadStatus({
          movie: {
            downloaded: !!downloadedMovie,
            path: downloadedMovie ? downloadedMovie.path : null
          }
        });
      } else if (titleData.type === 'tv') {
        const response = await axios.get(`${API_BASE_URL}/downloads`);
        const downloadedEpisodes = response.data.filter(
          download => download.type === 'tv' && download.slug === titleData.slug
        );

        const episodeStatus = {};
        downloadedEpisodes.forEach(episode => {
          episodeStatus[`S${episode.n_s}E${episode.n_ep}`] = {
            downloaded: true,
            path: episode.path
          };
        });
        setDownloadStatus({ tv: episodeStatus });
      }
    } catch (error) {
      console.error("Error checking download status:", error);
    }
  };

  // Check watchlist status
  const checkWatchlistStatus = async (slug) => {
    try {
      const response = await axios.get(`${API_BASE_URL}/api/getWatchlist`);
      const inWatchlist = response.data.some(item => item.name === slug);
      setIsInWatchlist(inWatchlist);
    } catch (error) {
      console.error("Error checking watchlist status:", error);
    }
  };

  const handleSeasonSelect = async (seasonNumber) => {
    if (titleDetails.type === 'tv') {
      try {
        setLoading(true);
        const seasonResponse = await axios.get(`${API_BASE_URL}/api/getInfoSeason`, {
          params: {
            url: location.state?.url,
            n: seasonNumber
          }
        });

        setSelectedSeason(seasonNumber);
        setEpisodes(seasonResponse.data);
        setLoading(false);
      } catch (error) {
        console.error("Error fetching season details:", error);
        setLoading(false);
      }
    }
  };

  const handleDownloadFilm = async () => {
    try {
      const response = await axios.get(`${API_BASE_URL}/downloadFilm`, {
        params: {
          id: titleDetails.id,
          slug: titleDetails.slug
        }
      });
      const videoPath = response.data.path;

      // Update download status
      setDownloadStatus({
        movie: {
          downloaded: true,
          path: videoPath
        }
      });
    } catch (error) {
      console.error("Error downloading film:", error);
      alert("Error downloading film. Please try again.");
    }
  };

  const handleDownloadEpisode = async (seasonNumber, episodeNumber) => {
    try {
      const response = await axios.get(`${API_BASE_URL}/downloadEpisode`, {
        params: {
          n_s: seasonNumber,
          n_ep: episodeNumber
        }
      });
      const videoPath = response.data.path;

      // Update download status for this specific episode
      setDownloadStatus(prev => ({
        tv: {
          ...prev.tv,
          [`S${seasonNumber}E${episodeNumber}`]: {
            downloaded: true,
            path: videoPath
          }
        }
      }));
    } catch (error) {
      console.error("Error downloading episode:", error);
      alert("Error downloading episode. Please try again.");
    }
  };

  const handleWatchVideo = async (videoPath) => {
    if (!videoPath) {
      // If no path provided, attempt to get path from downloads
      try {
        let path;
        if (titleDetails.type === 'movie') {
          const response = await axios.get(`${API_BASE_URL}/moviePath`, {
            params: { id: titleDetails.id }
          });
          path = response.data.path;
        } else {
          alert("Please select a specific episode to watch.");
          return;
        }

        setCurrentVideo(path);
      } catch (error) {
        alert("Please download the content first.");
        return;
      }
    } else {
      setCurrentVideo(videoPath);
    }
    setShowPlayer(true);
  };

  const handleAddToWatchlist = async () => {
    try {
      await axios.post(`${API_BASE_URL}/api/addWatchlist`, {
        name: titleDetails.slug,
        url: location.state?.url || location.pathname.split('/title/')[1],
        season: titleDetails.season_count
      });
      setIsInWatchlist(true);
    } catch (error) {
      console.error("Error adding to watchlist:", error);
      alert("Error adding to watchlist. Please try again.");
    }
  };

  const handleRemoveFromWatchlist = async () => {
    try {
      await axios.post(`${API_BASE_URL}/api/removeWatchlist`, {
        name: titleDetails.slug
      });
      setIsInWatchlist(false);
    } catch (error) {
      console.error("Error removing from watchlist:", error);
      alert("Error removing from watchlist. Please try again.");
    }
  };

  if (loading) {
    return <div className="text-center mt-5">Loading...</div>;
  }

  if (!titleDetails) {
    return <Container>Title not found</Container>;
  }

  return (
    <Container fluid className="p-0">
      <SearchBar />

      {/* Background Image */}
      <div
        style={{
          backgroundImage: `url(${titleDetails.image.background})`,
          backgroundSize: 'cover',
          backgroundPosition: 'center',
          height: '50vh',
          position: 'relative'
        }}
      >
        <div
          style={{
            position: 'absolute',
            bottom: 0,
            left: 0,
            right: 0,
            background: 'linear-gradient(to top, rgba(0,0,0,0.8), transparent)',
            padding: '20px',
            display: 'flex',
            alignItems: 'center',
            justifyContent: 'space-between'
          }}
        >
          <h1 className="text-white">{titleDetails.name}</h1>

          {/* Watchlist Button */}
          {titleDetails.type === 'tv' && (
            <div>
              {isInWatchlist ? (
                <Button
                  variant="outline-light"
                  onClick={handleRemoveFromWatchlist}
                >
                  <FaTrash className="me-2" /> Remove from Watchlist
                </Button>
              ) : (
                <Button
                  variant="outline-light"
                  onClick={handleAddToWatchlist}
                >
                  <FaPlus className="me-2" /> Add to Watchlist
                </Button>
              )}
            </div>
          )}
        </div>
      </div>

      <Container className="mt-4">
        {/* Plot */}
        <Row className="mb-4">
          <Col>
            <p>{titleDetails.plot}</p>
          </Col>
        </Row>

        {/* Download/Watch Button for Movies */}
        {titleDetails.type === 'movie' && (
          <Row className="mb-4">
            <Col>
              {downloadStatus.movie?.downloaded ? (
                <Button
                  variant="success"
                  onClick={() => handleWatchVideo(downloadStatus.movie.path)}
                >
                  <FaPlay className="me-2" /> Watch
                </Button>
              ) : (
                <Button
                  variant="primary"
                  onClick={handleDownloadFilm}
                >
                  <FaDownload className="me-2" /> Download Film
                </Button>
              )}
            </Col>
          </Row>
        )}

        {/* TV Show Seasons and Episodes */}
        {titleDetails.type === 'tv' && (
          <>
            <Row className="mb-3">
              <Col>
                <Dropdown>
                  <Dropdown.Toggle variant="secondary">
                    Season {selectedSeason}
                  </Dropdown.Toggle>

                  <Dropdown.Menu>
                    {[...Array(titleDetails.season_count)].map((_, index) => (
                      <Dropdown.Item
                        key={index + 1}
                        onClick={() => handleSeasonSelect(index + 1)}
                      >
                        Season {index + 1}
                      </Dropdown.Item>
                    ))}
                  </Dropdown.Menu>
                </Dropdown>
              </Col>
            </Row>

            <Row xs={2} md={4} className="g-4">
              {episodes.map((episode) => {
                const episodeKey = `S${selectedSeason}E${episode.number}`;
                const isDownloaded = downloadStatus.tv?.[episodeKey]?.downloaded;

                return (
                  <Col key={episode.id}>
                    <div className="episode-thumbnail-wrapper position-relative">
                      <Image
                        src={episode.image}
                        alt={`Episode ${episode.number}`}
                        fluid
                        rounded
                        className="mb-2"
                      />
                      <div
                        className="episode-number position-absolute top-0 start-0 m-2 px-2 py-1"
                        style={{
                          backgroundColor: 'rgba(255, 255, 255, 0.7)',
                          color: '#333',
                          borderRadius: '4px',
                          fontSize: '0.8rem'
                        }}
                      >
                        Ep {episode.number}
                      </div>
                      <h6>{episode.name}</h6>

                      {isDownloaded ? (
                        <Button
                          variant="success"
                          onClick={() => handleWatchVideo(downloadStatus.tv[episodeKey].path)}
                        >
                          <FaPlay className="me-2" /> Watch
                        </Button>
                      ) : (
                        <Button
                          variant="primary"
                          onClick={() => handleDownloadEpisode(selectedSeason, episode.number)}
                        >
                          <FaDownload className="me-2" /> Download
                        </Button>
                      )}
                    </div>
                  </Col>
                );
              })}
            </Row>
          </>
        )}
      </Container>

      {/* Modal Video Player */}
      <Modal show={showPlayer} onHide={() => setShowPlayer(false)} size="lg" centered>
        <Modal.Body>
          <video
            src={`http://127.0.0.1:1234/downloaded/${currentVideo}`}
|
||||
controls
|
||||
autoPlay
|
||||
style={{ width: '100%' }}
|
||||
/>
|
||||
</Modal.Body>
|
||||
</Modal>
|
||||
</Container>
|
||||
);
|
||||
};
|
||||
|
||||
export default TitleDetail;
|
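The episode grid above keys per-episode download state with `S{season}E{episode}` strings (see `episodeKey`). As a hedged sketch, the same lookup scheme in Python looks like this; `download_status` is a hypothetical dict mirroring the shape of the component's `downloadStatus.tv`:

```python
# Hypothetical sketch of the "S{season}E{episode}" keying used by TitleDetail.
# The dict shape mirrors downloadStatus.tv in the component; names here are
# illustrative, not part of the project's API.

def episode_key(season: int, episode: int) -> str:
    """Build the key used to look up an episode's download state."""
    return f"S{season}E{episode}"

def is_downloaded(download_status: dict, season: int, episode: int) -> bool:
    """Return True when the episode entry exists and is marked downloaded."""
    entry = download_status.get(episode_key(season, episode))
    return bool(entry and entry.get("downloaded"))

download_status = {"S1E2": {"downloaded": True, "path": "TV/demo/S1E2.mp4"}}
print(is_downloaded(download_status, 1, 2))  # True
print(is_downloaded(download_status, 1, 3))  # False
```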
client/dashboard/src/components/Watchlist.js (new file)
@@ -0,0 +1,163 @@
import React, { useState, useEffect } from 'react';
import axios from 'axios';
import { Container, Row, Col, Card, Button, Badge, Alert } from 'react-bootstrap';
import { Link } from 'react-router-dom';
import { FaTrash } from 'react-icons/fa';

const API_BASE_URL = "http://127.0.0.1:1234";

const Watchlist = () => {
  const [watchlistItems, setWatchlistItems] = useState([]);
  const [newSeasons, setNewSeasons] = useState([]);
  const [loading, setLoading] = useState(true);
  const [newSeasonsMessage, setNewSeasonsMessage] = useState(""); // State for the new-seasons message

  // Fetch the watchlist data
  const fetchWatchlistData = async () => {
    try {
      const watchlistResponse = await axios.get(`${API_BASE_URL}/api/getWatchlist`);
      setWatchlistItems(watchlistResponse.data);
      setLoading(false);
    } catch (error) {
      console.error("Error fetching watchlist:", error);
      setLoading(false);
    }
  };

  // Check whether new seasons are available (triggered by the button)
  const checkNewSeasons = async () => {
    try {
      const newSeasonsResponse = await axios.get(`${API_BASE_URL}/api/checkWatchlist`);

      if (Array.isArray(newSeasonsResponse.data)) {
        setNewSeasons(newSeasonsResponse.data);

        // Build a message listing the titles that have new seasons
        const titlesWithNewSeasons = newSeasonsResponse.data.map(season => season.name);
        if (titlesWithNewSeasons.length > 0) {
          setNewSeasonsMessage(`New seasons available for: ${titlesWithNewSeasons.join(", ")}`);

          // After showing the message, update the titles that have new seasons
          updateTitlesWithNewSeasons(newSeasonsResponse.data);
        } else {
          setNewSeasonsMessage("No new seasons available.");
        }
      } else {
        setNewSeasons([]); // Otherwise there are no new seasons
        setNewSeasonsMessage("No new seasons available.");
      }
    } catch (error) {
      console.error("Error fetching new seasons:", error);
    }
  };

  // Send a POST request to update each title in the watchlist
  const updateTitlesWithNewSeasons = async (newSeasonsList) => {
    try {
      for (const season of newSeasonsList) {
        // Send a POST request for every title that has new seasons
        console.log(`Updated watchlist for ${season.name} with new season ${season.nNewSeason}, url: ${season.title_url}`);

        await axios.post(`${API_BASE_URL}/api/updateTitleWatchlist`, {
          url: season.title_url,
          season: season.season
        });
      }
    } catch (error) {
      console.error("Error updating title watchlist:", error);
    }
  };

  // Remove an item from the watchlist
  const handleRemoveFromWatchlist = async (serieName) => {
    try {
      await axios.post(`${API_BASE_URL}/api/removeWatchlist`, { name: serieName });

      // Update local state to drop the removed item
      setWatchlistItems((prev) => prev.filter((item) => item.name !== serieName));
    } catch (error) {
      console.error("Error removing from watchlist:", error);
    }
  };

  // Load the watchlist on mount
  useEffect(() => {
    fetchWatchlistData();
  }, []);

  if (loading) {
    return <div className="text-center mt-5">Loading...</div>;
  }

  return (
    <Container fluid className="p-0">
      <Container className="mt-4">
        <h2 className="mb-4">My Watchlist</h2>

        <Button onClick={checkNewSeasons} variant="primary" className="mb-4">
          Check for New Seasons
        </Button>

        {/* Show the new-seasons message */}
        {newSeasonsMessage && (
          <Alert variant={newSeasonsMessage.includes("New seasons") ? "success" : "info"}>
            {newSeasonsMessage}
          </Alert>
        )}

        {watchlistItems.length === 0 ? (
          <p>Your watchlist is empty.</p>
        ) : (
          <Row xs={1} md={3} className="g-4">
            {watchlistItems.map((item) => {
              const hasNewSeason = Array.isArray(newSeasons) && newSeasons.some(
                (season) => season.name === item.name
              );

              return (
                <Col key={item.name}>
                  <Card>
                    <Card.Body>
                      <div className="d-flex justify-content-between align-items-start">
                        <Card.Title>
                          {item.name.replace(/-/g, ' ')}
                          {hasNewSeason && (
                            <Badge bg="danger" className="ms-2">New Season</Badge>
                          )}
                        </Card.Title>
                        <Button
                          variant="outline-danger"
                          size="sm"
                          onClick={() => handleRemoveFromWatchlist(item.name)}
                        >
                          <FaTrash />
                        </Button>
                      </div>
                      <Card.Text>
                        <small>
                          Added on: {new Date(item.added_on).toLocaleDateString()}
                        </small>
                        <br />
                        <small>Seasons: {item.season}</small>
                      </Card.Text>
                      <Link
                        to={`/title/${item.name}`}
                        state={{ url: item.title_url }}
                        className="btn btn-primary btn-sm mt-2"
                      >
                        View Details
                      </Link>
                    </Card.Body>
                  </Card>
                </Col>
              );
            })}
          </Row>
        )}
      </Container>
    </Container>
  );
};

export default Watchlist;
client/dashboard/src/index.css (new file)
@@ -0,0 +1,25 @@
:root {
  --background-color: #121212;
  --text-color: #ffffff;
}

[data-theme='light'] {
  --background-color: #ffffff;
  --text-color: #000000;
}

body {
  background-color: var(--background-color);
  color: var(--text-color);
  transition: background-color 0.3s ease, color 0.3s ease;
}

code {
  font-family: source-code-pro, Menlo, Monaco, Consolas, 'Courier New',
    monospace;
}

.contenitore {
  text-align: center;
  justify-content: center;
}
client/dashboard/src/index.js (new file)
@@ -0,0 +1,15 @@
import React from 'react';
import ReactDOM from 'react-dom/client';
import './index.css';
import App from './App';
import reportWebVitals from './reportWebVitals';
import 'bootstrap/dist/css/bootstrap.min.css'; // Import Bootstrap CSS

const root = ReactDOM.createRoot(document.getElementById('root'));
root.render(
  <React.StrictMode>
    <App />
  </React.StrictMode>
);

reportWebVitals();
client/dashboard/src/reportWebVitals.js (new file)
@@ -0,0 +1,13 @@
const reportWebVitals = onPerfEntry => {
  if (onPerfEntry && onPerfEntry instanceof Function) {
    import('web-vitals').then(({ getCLS, getFID, getFCP, getLCP, getTTFB }) => {
      getCLS(onPerfEntry);
      getFID(onPerfEntry);
      getFCP(onPerfEntry);
      getLCP(onPerfEntry);
      getTTFB(onPerfEntry);
    });
  }
};

export default reportWebVitals;
client/package-lock.json (generated, new file)
@@ -0,0 +1,6 @@
{
  "name": "client",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {}
}
config.json
@@ -8,15 +8,7 @@
        "root_path": "Video",
        "movie_folder_name": "Movie",
        "serie_folder_name": "TV",
        "map_episode_name": "%(tv_name)_S%(season)E%(episode)_%(episode_name)",
        "config_qbit_tor": {
            "host": "192.168.1.59",
            "port": "8080",
            "user": "admin",
            "pass": "adminadmin"
        },
        "not_close": false,
        "show_trending": false
        "map_episode_name": "%(tv_name)_S%(season)E%(episode)_%(episode_name)"
    },
    "REQUESTS": {
        "timeout": 20,
@@ -26,9 +18,6 @@
        "proxy_start_max": 0.5,
        "user-agent": ""
    },
    "BROWSER": {
        "headless": false
    },
    "M3U8_DOWNLOAD": {
        "tqdm_delay": 0.01,
        "tqdm_use_large_bar": true,
@@ -63,38 +52,10 @@
    "SITE": {
        "streamingcommunity": {
            "domain": "family"
        },
        "altadefinizione": {
            "domain": "now"
        },
        "guardaserie": {
            "domain": "academy"
        },
        "mostraguarda": {
            "domain": "stream"
        },
        "ddlstreamitaly": {
            "domain": "co",
            "extra": {
                "ips4_device_key": "",
                "ips4_member_id": "",
                "ips4_login_key": ""
            }
        },
        "animeunity": {
            "domain": "to"
        },
        "cb01new": {
            "domain": "club"
        },
        "bitsearch": {
            "domain": "to"
        },
        "1337xx": {
            "domain": "to"
        },
        "piratebays": {
            "domain": "to"
        }
    },
    "EXTRA": {
        "mongodb": "mongodb+srv://..",
        "database": "StreamingCommunity"
    }
}
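The `map_episode_name` value above uses `%(key)` placeholders, but note this is not standard printf-style formatting (there is no trailing type character), so the downloader must perform its own substitution. As a hedged sketch of how such a template could be expanded (the function name and substitution strategy are assumptions, not the project's actual implementation):

```python
# Illustrative expansion of a "%(key)" template like map_episode_name.
# This is a plain substitution pass; the real downloader may differ.

def map_episode_name(template: str, values: dict) -> str:
    """Replace each %(key) placeholder with its corresponding value."""
    out = template
    for key, value in values.items():
        out = out.replace(f"%({key})", str(value))
    return out

template = "%(tv_name)_S%(season)E%(episode)_%(episode_name)"
print(map_episode_name(template, {
    "tv_name": "demo", "season": 1, "episode": 2, "episode_name": "pilot",
}))  # demo_S1E2_pilot
```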
dockerfile (deleted)
@@ -1,20 +0,0 @@
FROM python:3.11-slim

COPY . /app
WORKDIR /app

ENV TEMP /tmp
RUN mkdir -p $TEMP

RUN apt-get update && apt-get install -y \
    ffmpeg \
    build-essential \
    libssl-dev \
    libffi-dev \
    python3-dev \
    libxml2-dev \
    libxslt1-dev

RUN pip install --no-cache-dir -r requirements.txt

CMD ["python", "test_run.py"]
@@ -1,4 +1,4 @@
httpx
httpx
bs4
rich
tqdm
@@ -11,4 +11,7 @@ pycryptodome
fake-useragent==1.1.3
qbittorrent-api
python-qbittorrent
googlesearch-python
googlesearch-python
pymongo
fastapi
pyclean
server.py (new file)
@@ -0,0 +1,600 @@
import os
import logging
import datetime
from urllib.parse import urlparse
from urllib.parse import unquote


# External
from pymongo import MongoClient
from flask_cors import CORS
from flask import Flask, jsonify, request
from flask import send_from_directory


# Util
from StreamingCommunity.Util._jsonConfig import config_manager


# Internal
from StreamingCommunity.Api.Template.Class.SearchType import MediaItem
from StreamingCommunity.Api.Site.streamingcommunity.api import get_version_and_domain, search_titles, get_infoSelectTitle, get_infoSelectSeason
from StreamingCommunity.Api.Site.streamingcommunity.film import download_film
from StreamingCommunity.Api.Site.streamingcommunity.series import download_video
from StreamingCommunity.Api.Site.streamingcommunity.util.ScrapeSerie import ScrapeSerie

# Player
from StreamingCommunity.Api.Player.vixcloud import VideoSource

# Variable
app = Flask(__name__)
CORS(app)

# Site variable
version, domain = get_version_and_domain()
season_name = None
scrape_serie = ScrapeSerie("streamingcommunity")
video_source = VideoSource("streamingcommunity", True)
DOWNLOAD_DIRECTORY = os.getcwd()


# Mongo variable
client = MongoClient(config_manager.get("EXTRA", "mongodb"))
db = client[config_manager.get("EXTRA", "database")]
watchlist_collection = db['watchlist']
downloads_collection = db['downloads']


# ---------- SITE API ------------
@app.route('/')
def index():
    """
    Health check endpoint to confirm the server is operational.

    Returns:
        str: Operational status message
    """
    logging.info("Health check endpoint accessed")
    return 'Server is operational'


@app.route('/api/search', methods=['GET'])
def get_list_search():
    """
    Search for titles based on the query parameter.

    Returns:
        JSON response with search results or error message
    """
    try:
        query = request.args.get('q')

        if not query:
            logging.warning("Search request without query parameter")
            return jsonify({'error': 'Missing query parameter'}), 400

        result = search_titles(query, domain)
        logging.info(f"Search performed for query: {query}")
        return jsonify(result), 200

    except Exception as e:
        logging.error(f"Error in search: {str(e)}", exc_info=True)
        return jsonify({'error': 'Internal server error'}), 500


@app.route('/api/getInfo', methods=['GET'])
def get_info_title():
    """
    Retrieve information for a specific title.

    Returns:
        JSON response with title information or error message
    """
    try:
        title_url = request.args.get('url')

        if not title_url:
            logging.warning("GetInfo request without URL parameter")
            return jsonify({'error': 'Missing URL parameter'}), 400

        result = get_infoSelectTitle(title_url, domain, version)

        if result.get('type') == "tv":
            global season_name, scrape_serie, video_source
            season_name = result.get('slug')
            scrape_serie.setup(
                version=version,
                media_id=int(result.get('id')),
                series_name=result.get('slug')
            )
            video_source.setup(result.get('id'))

            logging.info(f"TV series info retrieved: {season_name}")

        return jsonify(result), 200

    except Exception as e:
        logging.error(f"Error retrieving title info: {str(e)}", exc_info=True)
        return jsonify({'error': 'Failed to retrieve title information'}), 500


@app.route('/api/getInfoSeason', methods=['GET'])
def get_info_season():
    """
    Retrieve season information for a specific title.

    Returns:
        JSON response with season information or error message
    """
    try:
        title_url = request.args.get('url')
        number_season = request.args.get('n')

        if not title_url or not number_season:
            logging.warning("GetInfoSeason request with missing parameters")
            return jsonify({'error': 'Missing URL or season number'}), 400

        result = get_infoSelectSeason(title_url, number_season, domain, version)
        logging.info(f"Season info retrieved for season {number_season}")
        return jsonify(result), 200

    except Exception as e:
        logging.error(f"Error retrieving season info: {str(e)}", exc_info=True)
        return jsonify({'error': 'Failed to retrieve season information'}), 500


@app.route('/api/getdomain', methods=['GET'])
def get_domain():
    """
    Retrieve the current domain and version.

    Returns:
        JSON response with domain and version
    """
    try:
        global version, domain
        version, domain = get_version_and_domain()
        logging.info(f"Domain retrieved: {domain}, Version: {version}")
        return jsonify({'domain': domain, 'version': version}), 200

    except Exception as e:
        logging.error(f"Error retrieving domain: {str(e)}", exc_info=True)
        return jsonify({'error': 'Failed to retrieve domain information'}), 500


# ---------- DOWNLOAD API ------------
@app.route('/downloadFilm', methods=['GET'])
def call_download_film():
    """
    Download a film by its ID and slug.

    Returns:
        JSON response with download path or error message
    """
    try:
        film_id = request.args.get('id')
        slug = request.args.get('slug')

        if not film_id or not slug:
            logging.warning("Download film request with missing parameters")
            return jsonify({'error': 'Missing film ID or slug'}), 400

        item_media = MediaItem(**{'id': film_id, 'slug': slug})
        path_download = download_film(item_media)

        download_data = {
            'type': 'movie',
            'id': film_id,
            'slug': slug,
            'path': path_download,
            'timestamp': datetime.datetime.now(datetime.timezone.utc)
        }
        downloads_collection.insert_one(download_data)

        logging.info(f"Film downloaded: {slug}")
        return jsonify({'path': path_download}), 200

    except Exception as e:
        logging.error(f"Error downloading film: {str(e)}", exc_info=True)
        return jsonify({'error': 'Failed to download film'}), 500


@app.route('/downloadEpisode', methods=['GET'])
def call_download_episode():
    """
    Download a specific TV series episode.

    Returns:
        JSON response with download path or error message
    """
    try:
        season_number = request.args.get('n_s')
        episode_number = request.args.get('n_ep')

        if not season_number or not episode_number:
            logging.warning("Download episode request with missing parameters")
            return jsonify({'error': 'Missing season or episode number'}), 400

        season_number = int(season_number)
        episode_number = int(episode_number)

        scrape_serie.collect_title_season(season_number)
        path_download = download_video(
            season_name,
            season_number,
            episode_number,
            scrape_serie,
            video_source
        )

        download_data = {
            'type': 'tv',
            'id': scrape_serie.media_id,
            'slug': scrape_serie.series_name,
            'n_s': season_number,
            'n_ep': episode_number,
            'path': path_download,
            'timestamp': datetime.datetime.now(datetime.timezone.utc)
        }
        downloads_collection.insert_one(download_data)

        logging.info(f"Episode downloaded: S{season_number}E{episode_number}")
        return jsonify({'path': path_download}), 200

    except ValueError:
        logging.error("Invalid season or episode number format")
        return jsonify({'error': 'Invalid season or episode number'}), 400

    except Exception as e:
        logging.error(f"Error downloading episode: {str(e)}", exc_info=True)
        return jsonify({'error': 'Failed to download episode'}), 500


@app.route('/downloaded/<path:filename>', methods=['GET'])
def serve_downloaded_file(filename):
    """
    Serve downloaded files with proper URL decoding and error handling.

    Returns:
        Downloaded file or error message
    """
    try:
        # URL decode the filename
        decoded_filename = unquote(filename)
        logging.debug(f"Requested file: {decoded_filename}")

        # Construct the full file path
        file_path = os.path.join(DOWNLOAD_DIRECTORY, decoded_filename)
        logging.debug(f"Full file path: {file_path}")

        # Verify the file exists
        if not os.path.isfile(file_path):
            logging.warning(f"File not found: {decoded_filename}")
            return jsonify({'error': 'File not found'}), 404

        # Serve the file
        return send_from_directory(DOWNLOAD_DIRECTORY, decoded_filename, as_attachment=False)

    except Exception as e:
        logging.error(f"Error serving file: {str(e)}", exc_info=True)
        return jsonify({'error': 'Internal server error'}), 500


# ---------- WATCHLIST MONGO ------------
@app.route('/api/addWatchlist', methods=['POST'])
def add_to_watchlist():
    title_name = request.json.get('name')
    title_url = request.json.get('url')
    season = request.json.get('season')

    if title_url and season:
        existing_item = watchlist_collection.find_one({'name': title_name, 'url': title_url, 'season': season})
        if existing_item:
            return jsonify({'message': 'Title is already in the watchlist'}), 400

        watchlist_collection.insert_one({
            'name': title_name,
            'title_url': title_url,
            'season': season,
            'added_on': datetime.datetime.now(datetime.timezone.utc)
        })
        return jsonify({'message': 'Title added to the watchlist'}), 200
    else:
        return jsonify({'message': 'Missing title_url or season'}), 400


@app.route('/api/updateTitleWatchlist', methods=['POST'])
def update_title_watchlist():
    logging.debug(f"updateTitleWatchlist payload: {request.json}")

    title_url = request.json.get('url')
    new_season = request.json.get('season')

    if title_url is not None and new_season is not None:
        result = watchlist_collection.update_one(
            {'title_url': title_url},
            {'$set': {'season': new_season}}
        )

        if result.matched_count == 0:
            return jsonify({'message': 'Title not found in the watchlist'}), 404

        if result.modified_count == 0:
            return jsonify({'message': 'The season has not changed'}), 200

        return jsonify({'message': 'Season updated successfully'}), 200

    else:
        return jsonify({'message': 'Missing title_url or season'}), 400


@app.route('/api/removeWatchlist', methods=['POST'])
def remove_from_watchlist():
    title_name = request.json.get('name')

    if title_name:
        result = watchlist_collection.delete_one({'name': title_name})

        if result.deleted_count == 1:
            return jsonify({'message': 'Title removed from the watchlist'}), 200
        else:
            return jsonify({'message': 'Title not found in the watchlist'}), 404
    else:
        return jsonify({'message': 'Missing name parameter'}), 400


@app.route('/api/getWatchlist', methods=['GET'])
def get_watchlist():
    watchlist_items = list(watchlist_collection.find({}, {'_id': 0}))

    if watchlist_items:
        return jsonify(watchlist_items), 200
    else:
        return jsonify({'message': 'The watchlist is empty'}), 200


@app.route('/api/checkWatchlist', methods=['GET'])
def get_newSeason():
    title_newSeasons = []
    watchlist_items = list(watchlist_collection.find({}, {'_id': 0}))

    if not watchlist_items:
        return jsonify({'message': 'The watchlist is empty'}), 200

    for item in watchlist_items:
        title_url = item.get('title_url')
        if not title_url:
            continue

        try:
            # Rewrite the stored URL to the currently active domain
            # (assumes a two-label hostname, e.g. "streamingcommunity.family")
            parsed_url = urlparse(title_url)
            hostname = parsed_url.hostname
            domain_part = hostname.split('.')[1]
            new_url = title_url.replace(domain_part, domain)

            result = get_infoSelectTitle(new_url, domain, version)

            if not result or 'season_count' not in result:
                continue

            number_season = result.get("season_count")

            if int(number_season) > int(item.get("season")):
                title_newSeasons.append({
                    'title_url': item.get('title_url'),
                    'name': item.get('name'),
                    'season': int(number_season),
                    'nNewSeason': int(number_season) - int(item.get("season"))
                })

        except Exception as e:
            logging.error(f"Error retrieving information for {item.get('title_url')}: {e}")

    if title_newSeasons:
        return jsonify(title_newSeasons), 200
    else:
        return jsonify({'message': 'No new seasons available'}), 200


# ---------- DOWNLOAD MONGO ------------
def ensure_collections_exist(db):
    """
    Ensures that the required collections exist in the database.
    If they do not exist, they are created.

    Args:
        db: The MongoDB database object.
    """
    required_collections = ['watchlist', 'downloads']
    existing_collections = db.list_collection_names()

    for collection_name in required_collections:
        if collection_name not in existing_collections:
            # Create the missing collection
            db.create_collection(collection_name)
            logging.info(f"Created missing collection: {collection_name}")
        else:
            logging.info(f"Collection already exists: {collection_name}")


@app.route('/downloads', methods=['GET'])
def fetch_all_downloads():
    """
    Endpoint to fetch all downloads.
    """
    try:
        downloads = list(downloads_collection.find({}, {'_id': 0}))
        return jsonify(downloads), 200

    except Exception as e:
        logging.error(f"Error fetching all downloads: {str(e)}")
        return jsonify({'error': 'Failed to fetch downloads'}), 500


@app.route('/deleteEpisode', methods=['DELETE'])
def remove_episode():
    """
    Endpoint to delete a specific episode and its file.
    """
    try:
        series_id = request.args.get('id')
        season_number = request.args.get('season')
        episode_number = request.args.get('episode')

        if not series_id or not season_number or not episode_number:
            return jsonify({'error': 'Missing parameters (id, season, episode)'}), 400

        try:
            series_id = int(series_id)
            season_number = int(season_number)
            episode_number = int(episode_number)
        except ValueError:
            return jsonify({'error': 'Invalid season or episode number'}), 400

        # Look up the file path
        episode = downloads_collection.find_one({
            'type': 'tv',
            'id': series_id,
            'n_s': season_number,
            'n_ep': episode_number
        }, {'_id': 0, 'path': 1})

        if not episode or 'path' not in episode:
            return jsonify({'error': 'Episode not found'}), 404

        file_path = episode['path']

        # Delete the physical file
        try:
            if os.path.exists(file_path):
                os.remove(file_path)
                logging.info(f"Deleted episode file: {file_path}")
            else:
                logging.warning(f"Episode file not found: {file_path}")
        except Exception as e:
            logging.error(f"Error deleting episode file: {str(e)}")

        # Remove the episode from the database
        result = downloads_collection.delete_one({
            'type': 'tv',
            'id': series_id,
            'n_s': season_number,
            'n_ep': episode_number
        })

        if result.deleted_count > 0:
            return jsonify({'success': True}), 200
        else:
            return jsonify({'error': 'Failed to delete episode from database'}), 500

    except Exception as e:
        logging.error(f"Error deleting episode: {str(e)}")
        return jsonify({'error': 'Failed to delete episode'}), 500


@app.route('/deleteMovie', methods=['DELETE'])
def remove_movie():
    """
    Endpoint to delete a specific movie, its file, and its parent folder if empty.
    """
    try:
        movie_id = request.args.get('id')

        if not movie_id:
            return jsonify({'error': 'Missing movie ID'}), 400

        # Look up the file path
        movie = downloads_collection.find_one({'type': 'movie', 'id': movie_id}, {'_id': 0, 'path': 1})

        if not movie or 'path' not in movie:
            return jsonify({'error': 'Movie not found'}), 404

        file_path = movie['path']
        parent_folder = os.path.dirname(file_path)

        # Delete the physical file
        try:
            if os.path.exists(file_path):
                os.remove(file_path)
                logging.info(f"Deleted movie file: {file_path}")
            else:
                logging.warning(f"Movie file not found: {file_path}")
        except Exception as e:
            logging.error(f"Error deleting movie file: {str(e)}")

        # Delete the parent folder if it is empty
        try:
            if os.path.exists(parent_folder) and not os.listdir(parent_folder):
                os.rmdir(parent_folder)
                logging.info(f"Deleted empty parent folder: {parent_folder}")
        except Exception as e:
            logging.error(f"Error deleting parent folder: {str(e)}")

        # Remove the movie from the database
        result = downloads_collection.delete_one({'type': 'movie', 'id': movie_id})

        if result.deleted_count > 0:
            return jsonify({'success': True}), 200
        else:
            return jsonify({'error': 'Failed to delete movie from database'}), 500

    except Exception as e:
        logging.error(f"Error deleting movie: {str(e)}")
        return jsonify({'error': 'Failed to delete movie'}), 500


@app.route('/moviePath', methods=['GET'])
def fetch_movie_path():
    """
    Endpoint to fetch the path of a specific movie.
    """
    try:
        movie_id = request.args.get('id')

        if not movie_id:
            return jsonify({'error': 'Missing movie ID'}), 400

        movie_id = int(movie_id)
        movie = downloads_collection.find_one({'type': 'movie', 'id': movie_id}, {'_id': 0, 'path': 1})

        if movie and 'path' in movie:
            return jsonify({'path': movie['path']}), 200
        else:
            return jsonify({'error': 'Movie not found'}), 404

    except ValueError:
        return jsonify({'error': 'Invalid movie ID'}), 400

    except Exception as e:
        logging.error(f"Error fetching movie path: {str(e)}")
        return jsonify({'error': 'Failed to fetch movie path'}), 500


@app.route('/episodePath', methods=['GET'])
def fetch_episode_path():
    """
    Endpoint to fetch the path of a specific episode.
    """
    try:
        series_id = request.args.get('id')
        season_number = request.args.get('season')
        episode_number = request.args.get('episode')

        if not series_id or not season_number or not episode_number:
            return jsonify({'error': 'Missing parameters (id, season, episode)'}), 400

        try:
            series_id = int(series_id)
            season_number = int(season_number)
            episode_number = int(episode_number)
        except ValueError:
            return jsonify({'error': 'Invalid season or episode number'}), 400

        episode = downloads_collection.find_one({
            'type': 'tv',
            'id': series_id,
            'n_s': season_number,
            'n_ep': episode_number
        }, {'_id': 0, 'path': 1})

        if episode and 'path' in episode:
            return jsonify({'path': episode['path']}), 200
        else:
            return jsonify({'error': 'Episode not found'}), 404

    except Exception as e:
        logging.error(f"Error fetching episode path: {str(e)}")
        return jsonify({'error': 'Failed to fetch episode path'}), 500


if __name__ == '__main__':
    ensure_collections_exist(db)
    app.run(debug=True, port=1234, threaded=True)
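The `checkWatchlist` handler above rewrites each stored title URL onto the currently active TLD by swapping the second hostname label. A standalone sketch of just that step (it assumes a two-label hostname such as `streamingcommunity.family`; a `www.` prefix or multi-label host would break this simple approach):

```python
# Hedged sketch of the URL rewriting done inside /api/checkWatchlist.
# Assumes a two-label hostname; the swap uses str.replace on the first
# occurrence of the old TLD label, as the handler does.
from urllib.parse import urlparse

def swap_domain(title_url: str, current_domain: str) -> str:
    """Replace the stored URL's TLD label with the current site domain."""
    hostname = urlparse(title_url).hostname
    old_tld = hostname.split('.')[1]
    return title_url.replace(old_tld, current_domain)

print(swap_domain("https://streamingcommunity.family/titles/1-demo", "best"))
# https://streamingcommunity.best/titles/1-demo
```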
Some files were not shown because too many files have changed in this diff.