Why use the curl executable and not fsockopen?
Using fsockopen removes the need for command-line executables (an OS dependency and a security risk) while still letting you start a "thread" and close the connection before the request finishes; the process keeps running in the background.
About security: if the script is executed via URL, is there any security mechanism to protect it from being called directly from a browser?
Another consideration is server resources: it may be beneficial to run the tasks on a separate server, so the front-end server that answers your website requests doesn't get starved by many long-running processes (whether it's Apache, nginx, or something else, there's a CPU cap and a connection cap at some point).
Does the package support that?
Joseluis Laso - 2015-09-29 18:16:17 - In reply to message 1 from gonen radai
First of all, thank you very much for your feedback. I have it in mind to solve the curl command-line limitation, and I will take your advice into account and use the fsockopen function. I would also prefer not to use a URL to invoke the task, but my tests didn't work correctly without it; I'll take another look at it.
Regarding the separation of servers, you are right. The first version of the repo had this possibility, but in the end I decided to keep everything together to simplify things. Being able to start the task by URL could be one advantage of that approach.
Regarding security, you are right again. The best option for me, keeping the current system, is to use a token to prevent this problem. Let me think about how to solve it.
I invite you to contribute your solution to the repo. I consider myself a student, not a teacher, and it would be perfect to count on your collaboration.
gonen radai - 2015-09-30 06:14:14 - In reply to message 2 from Joseluis Laso
Your idea of using a token is good. Perhaps even better would be to make it an encrypted one that has no dependency other than a shared secret (for example, using the mcrypt lib).
It could even be a token that encapsulates the entire job data.
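The token idea above could be sketched like this. Note that mcrypt has since been deprecated in PHP, so this sketch signs the job data with an HMAC over a shared secret instead of encrypting it; the function names, the `SHARED_SECRET` constant, and the payload format are all illustrative assumptions, not part of the package.

```php
<?php
// Sketch: a signed token that encapsulates the entire job data, assuming
// a shared secret configured on both the "starter" script and the job
// script. hash_hmac() replaces the deprecated mcrypt lib; this signs the
// payload (authenticity) rather than encrypting it (secrecy).

const SHARED_SECRET = 'change-me'; // assumption: same value on both ends

function makeJobToken(array $jobData): string
{
    $payload = base64_encode(json_encode($jobData));
    $sig     = hash_hmac('sha256', $payload, SHARED_SECRET);
    return $payload . '.' . $sig;
}

function parseJobToken(string $token): ?array
{
    $parts = explode('.', $token, 2);
    if (count($parts) !== 2) {
        return null;
    }
    [$payload, $sig] = $parts;
    $expected = hash_hmac('sha256', $payload, SHARED_SECRET);
    // hash_equals() compares in constant time to avoid timing attacks
    if (!hash_equals($expected, $sig)) {
        return null; // forged or corrupted token: reject the request
    }
    return json_decode(base64_decode($payload), true);
}
```

The job script would then refuse any request whose token fails to parse, which also answers the browser-access concern raised earlier in the thread.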
As for fsockopen: it would still go via URL (i.e. opening a socket to Apache on port 80 or 443), but the benefit is that once you have sent the request you can fclose() the resource, and Apache will keep running the script even after your "starter" script has ended.
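The fire-and-forget pattern just described might look like the following. This is a sketch under assumptions: the job path `/job.php`, the `token` query parameter, and the helper names are hypothetical, and the job script itself should call `ignore_user_abort(true)` so PHP keeps running after the client disconnects.

```php
<?php
// Sketch: open a raw socket to the web server, write a complete HTTP
// request, and close immediately. The server keeps executing the job
// script after we disconnect.

function buildJobRequest(string $host, string $path, string $token): string
{
    $query = '?token=' . urlencode($token);
    return "GET {$path}{$query} HTTP/1.1\r\n"
         . "Host: {$host}\r\n"
         . "Connection: Close\r\n\r\n";
}

function fireAndForget(string $host, string $path, string $token, int $port = 80): bool
{
    // 2-second connect timeout so the starter script never blocks for long
    $fp = @fsockopen($host, $port, $errno, $errstr, 2);
    if ($fp === false) {
        return false; // could not reach the job server
    }
    fwrite($fp, buildJobRequest($host, $path, $token));
    // Close without reading the response: the job continues server-side.
    fclose($fp);
    return true;
}

// Hypothetical usage, matching the thread's example hosts:
// fireAndForget('127.0.0.1', '/job.php', $token);
```

For HTTPS you would connect to `ssl://` on port 443 instead of plain port 80.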
If you do continue with the URL approach, there's no reason you can't still do server separation.
Instead of hitting http://127.0.0.1/job.php, you can hit http://internal-job-server/job.php. Consider this "internal" server one that does not serve end-user requests, so its processing would not affect the speed of the main site.