• 0 Posts
  • 5 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • I think you’re referring to FlareSolverr. If so, I’m not aware of a direct replacement.

    Main issue is it’s heavy on resources (I have an rpi4b)

    FlareSolverr does add some memory overhead, but otherwise it’s fairly lightweight. On my system FlareSolverr has been up for 8 days and is using ~300MB:

    NAME           CPU %     MEM USAGE
    flaresolverr   0.01%     310.3MiB
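
    (That’s just the output of Docker’s built-in stats command; if you do end up running the container you can check your own numbers the same way.)

    docker stats --no-stream flaresolverr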
    

    Note that any CPU usage introduced by FlareSolverr is unavoidable, because that’s how Cloudflare’s protection works. Cloudflare creates a workload in the client browser that should be trivial if you’re making a single request, but brings your system to a crawl if you’re trying to send many requests, e.g. DDoSing or scraping. You need to execute that browser-based work somewhere to get past those Cloudflare checks.

    If hosting the FlareSolverr container on your rpi4b would put it under memory or CPU pressure, you could run the Docker container on a different system. When setting up FlareSolverr in Prowlarr you create an indexer proxy with a tag. Any indexer with that tag sends its requests through the proxy instead of sending them directly to the tracker site. When FlareSolverr is running in a local Docker container, the address for the proxy is localhost, e.g. http://localhost:8191.
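
    For reference, a minimal way to bring the container up locally looks something like this (image name and port are the defaults from the FlareSolverr docs; adjust to taste):

    docker run -d \
      --name=flaresolverr \
      -p 8191:8191 \
      -e LOG_LEVEL=info \
      --restart unless-stopped \
      ghcr.io/flaresolverr/flaresolverr:latest

    Prowlarr’s indexer proxy Host then points at http://localhost:8191, with the Tag field set to whatever tag you put on your Cloudflare-protected indexers.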

    If you run FlareSolverr’s Docker container on another system that’s accessible to your rpi4b, you could create an indexer proxy whose Host is “http://<other_system_IP>:8191”. Keep security in mind when doing this: if you’ve got a VPN connection on your rpi4b with split tunneling enabled (i.e. connections to local network resources are allowed when the tunnel is up), then this setup would allow requests to these indexers to escape the VPN tunnel.
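
    Before wiring the remote instance into Prowlarr, it’s worth confirming the rpi4b can actually reach it; hitting the FlareSolverr port directly should get you a short status response back rather than a connection error:

    curl http://<other_system_IP>:8191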

    On a side note, I’d strongly recommend trying out a Docker-based setup. FlareSolverr aside, I ran my servarr setup without containers for years and that was fine, but moving over to Docker made the configuration a lot easier. Before Docker I had a complex set of firewall rules to allow traffic to my local network and my VPN server, but drop any other traffic that wasn’t using the VPN tunnel. All that firewall complexity has now been replaced with a gluetun container, which is much easier to manage and probably more secure. You don’t have to switch to a Docker-based setup all in one go; you can run a hybrid if need be.
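
    As a rough sketch of what that pattern looks like (the gluetun environment variables depend on your VPN provider, so treat these as placeholders and check the gluetun wiki for your provider’s exact settings; qBittorrent here is just an example of a container you’d route through the tunnel):

    # gluetun owns the VPN tunnel and the network stack
    docker run -d --name gluetun \
      --cap-add NET_ADMIN \
      --device /dev/net/tun \
      -e VPN_SERVICE_PROVIDER=<your_provider> \
      -e VPN_TYPE=wireguard \
      -e WIREGUARD_PRIVATE_KEY=<your_key> \
      -p 8080:8080 \
      ghcr.io/qdm12/gluetun

    # example: a torrent client sharing gluetun's network stack, so all of its
    # traffic has to go through the tunnel; note that its ports get published
    # on the gluetun container above, not here
    docker run -d --name qbittorrent \
      --network container:gluetun \
      -e WEBUI_PORT=8080 \
      lscr.io/linuxserver/qbittorrent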

    If you really don’t want to use Docker then you could attempt to install from source on the rpi4b. Be advised that you’re absolutely going off-road if you do this, as it’s not officially supported by the FlareSolverr devs. It requires installing an ARM-based Chromium browser, then setting some environment variables so that FlareSolverr uses that browser instead of trying to download its own. Exact steps are documented in this GitHub comment. I haven’t tested these steps, so YMMV. Honestly, I think this is a bad idea because the full browser will almost certainly require more memory. The browser included in the FlareSolverr container is stripped down to the bare minimum required to pass the Cloudflare checks.
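
    Very roughly, the shape of it would be something like the below. Again, I haven’t tested this; the exact environment variable names are the ones given in that GitHub comment, not the placeholder shown here:

    # install an ARM build of Chromium from the distro repos
    # (package name varies by distro, e.g. chromium or chromium-browser)
    sudo apt install chromium-browser

    # grab FlareSolverr's source and its Python dependencies
    git clone https://github.com/FlareSolverr/FlareSolverr.git
    cd FlareSolverr
    pip install -r requirements.txt

    # point FlareSolverr at the system Chromium instead of letting it download
    # its own build -- <BROWSER_PATH_VARIABLE> is a placeholder, the real
    # variable name(s) are in the linked comment
    export <BROWSER_PATH_VARIABLE>=/usr/bin/chromium-browser
    python src/flaresolverr.py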

    If you’re just strongly opposed to Docker for whatever reason then I think your best bet would be to combine the two approaches above. Host the FlareSolverr proxy on an x86-based system so you can install from source using the officially supported steps.




  • The technical issues could probably be tackled, but realistically I doubt we’ll ever return to the screener copy glory days. By now everyone who receives a screener copy knows about the watermarking software, and release teams would have a hell of a time convincing them that the watermark can 100% for sure be removed. The person in possession of the screener copy has every incentive not to share it, since the costs of getting caught are so high (fired and/or sued and/or blacklisted in the industry).

    I don’t have any source for this, but screener copy leaks were so prevalent at one point that I have to imagine that money was changing hands. Release teams behind paid sites were probably bribing recipients of screener copies so their site could have the pirate copy first, and later it would spread to free sites. Given the number of people who receive screener copies, studios realistically had no way to figure out who was leaking them, so it was essentially free money for the leaker. The price paid to the leaker was probably not all that high since the risks were so low.

    As soon as the watermarks were in, the risks for the leaker went up dramatically, and so did their price. Watermarking was actually a very clever solution to the problem. Rather than adding DRM, which would bog down their workflows and piss off their customers, studios added watermarks that made it uneconomical for the leaks to continue.


    That sounds like a workprint. The linked wiki page has notable examples of workprints that made their way onto the Internet, sometimes before the movie was even in theaters. I don’t think this is typically a sought-after version for pirate groups; their existence is likely more a matter of convenience. Someone got their hands on the workprint, uploaded it online, and it spread from there.

    The holy grail for pirate groups used to be screener copies: finished versions of films that are sent to reviewers, promoters, etc. before release. I remember a (relatively brief) time when finished copies of movies were routinely popping up online even before they were in theaters. Such leaks have largely been stopped by difficult-to-remove watermarking of screener copies and workprints. Every copy that goes to an editor, VFX house, or film reviewer gets its own unique watermark embedded in it. If the studio finds that your copy was leaked online, they can fire / sue / blacklist you. It has massively curtailed such leaks.