Frederick Gotham
2020-09-24 09:50:07 UTC
I've recently started doing web GUI programming.
On the web server, I have a PHP script that uses the "exec" function to run my C++ program.
My C++ program performs two HTTPS requests, and depending on the data it gets back, it might perform 2 or 3 more HTTPS requests. My program then prints HTML code to stdout. The PHP script takes this HTML and throws it up on the end user's screen as a webpage.
My C++ program could fall down in several ways. Any of the HTTPS requests could fail, or return partial (or corrupt) data. There could be an uncaught exception from the networking code, or a segfault in a 3rd party library. It could fail in lots of ways.
My C++ code at the moment is quite clean, and I don't want to litter it with error-handling code.
One thing I could do is throw an "std::runtime_error" whenever anything goes wrong, then let these exceptions propagate up to 'main', and then in 'main' just restart the whole program.
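Just to make that first option concrete, here's a rough sketch of what 'main' could look like. RunOnce() is a hypothetical stand-in for the code that performs the HTTPS requests and prints the HTML; the only assumption is that it throws something derived from std::exception when anything goes wrong:
#include <cstdlib>
#include <exception>
#include <iostream>
#include <stdexcept>

// Hypothetical stand-in for the real work: in the real program this would
// perform the HTTPS requests and print the HTML to stdout, throwing on any
// failure. Here it just throws so the retry loop has something to catch.
void RunOnce()
{
    throw std::runtime_error("HTTPS request failed");
}

int main()
{
    for ( int attempt = 1; attempt <= 5; ++attempt )
    {
        try
        {
            RunOnce();
            return EXIT_SUCCESS;   // everything worked, HTML already printed
        }
        catch ( std::exception const &e )
        {
            std::cerr << "Attempt " << attempt << " failed: " << e.what() << '\n';
        }
    }

    // All 5 attempts failed: print the error page and report failure
    std::cout << "<h2>Error</h2>";
    return EXIT_FAILURE;
}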
Another option would be to kill my program with "std::exit(EXIT_FAILURE)" when anything goes wrong. Then I would have a Linux shell script that restarts my program. The rationale of the Linux shell script would be:
"Run the C++ program and check that it returns EXIT_SUCCESS. If it doesn't return EXIT_SUCCESS, then try to restart it. If it fails 5 times in a row, stop trying."
I would also make it a little more complicated:
"Put a time limit of 4 seconds on the C++ program -- if it runs into 5 seconds then kill it and start it again (up to a max of 5 times)".
A simple Linux script to constantly restart a program if it fails looks like this:
#!/bin/sh
until my_program; do
echo "Program 'my_program' crashed with exit code $?. Respawning.." >&2
sleep 1
done
So next, to try up to 5 times, I could do this (bash rather than plain sh here, because the {1..5} range is a bashism):
#!/bin/bash
succeeded=0
for i in {1..5}
do
    output=$(./myprogram)
    status=$?
    if [ ${status} -eq 0 ]; then
        echo -n "${output}"   # This prints the HTML to stdout
        succeeded=1
        break
    fi
    sleep 1
done
if [ ${succeeded} -eq 0 ]; then
    echo -n "<h2>Error</h2>"
    exit 1
fi
And then finally, to give it a max time of 4 seconds, use the program "timeout", which kills the process and exits with a non-zero status if the time limit is exceeded (124 by default, or 137 when SIGKILL is the signal used, as it is here -- either way the script only checks for a zero status):
#!/bin/bash
succeeded=0
for i in {1..5}
do
    output=$(timeout --signal SIGKILL 4 ./myprogram)
    status=$?
    if [ ${status} -eq 0 ]; then
        echo -n "${output}"   # This prints the HTML to stdout
        succeeded=1
        break
    fi
    sleep 1
done
if [ ${succeeded} -eq 0 ]; then
    echo -n "<h2>Error</h2>"
    exit 1
fi
And so then in my C++ program, I'd have:
#include <cstdlib>   // for std::exit and EXIT_FAILURE

[[noreturn]] inline void exitfail() { std::exit(EXIT_FAILURE); }
And then in my C++ program, if I'm parsing the HTML I get back and something's wrong:
string const html = PerformHTTPSrequest(. . .);
size_t const i = html.rfind("<diameter>");
if ( string::npos == i ) exitfail();   // marker missing: give up and let the wrapper script retry
So this way, if my C++ program fails in any way, an entire new process is spawned to try again (which might be the right thing to do if, for example, the failure is a runtime error to do with loading a shared library).
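A related option with exitfail(): it could also be installed as the terminate handler, so that an uncaught exception from the networking code turns into a plain EXIT_FAILURE rather than an abort, and the wrapper script retries as usual. A rough sketch along those lines (same exitfail as above):
#include <cstdlib>
#include <exception>

// Same exitfail as above: turn any failure into a plain EXIT_FAILURE.
[[noreturn]] inline void exitfail() { std::exit(EXIT_FAILURE); }

int main()
{
    // Any exception that escapes to the top (e.g. from the networking code)
    // now goes through exitfail() instead of std::abort, so the wrapper
    // script sees an ordinary non-zero exit status and retries as usual.
    std::set_terminate(exitfail);

    // . . . perform the HTTPS requests and print the HTML . . .
}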
Any thoughts or advice on this?