
mass hosting + cgi [WAS: Technical committee acting in gross violation of the Debian constitution]



On 04.12.2014 22:23, Christoph Anton Mitterer wrote:

> Apart from that, when you speak of "non-trivial" quantities - I'd
> probably say that running gazillion websites from different entities on
> one host is generally a really bad idea.

No, it's not, and it's pretty cheap, if done right.

Several years ago I was working for a large ISP (probably the
largest in Germany), hosting more than 1000 sites per box and
several million in total (yes, most of them were pretty small and
low-traffic).

IIRC, at that time they were using cgiexec. I just don't recall
why they didn't use my muxmpm (maybe because apache upstream was
too lazy to pick it up, even though it had been shipped by several
large distros).

A few years earlier I had developed muxmpm for exactly that purpose:
a derivative of worker/perchild that runs individual sites under
their own UIDs, spawning processes on demand. This approach worked
not just for CGI, but also for built-in content processors like
mod_php, mod_perl, etc.
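The core mechanism is simple enough to sketch here. This is not
muxmpm's actual code, just the idea: the parent accepts a
connection, looks up the site's uid/gid, forks, and the child drops
privileges before handling the request. handle_request() is a
placeholder for the real dispatch:

  #include <grp.h>
  #include <stdio.h>
  #include <sys/types.h>
  #include <unistd.h>

  extern void handle_request(int client_fd);   /* placeholder */

  static pid_t spawn_site_worker(int client_fd, uid_t uid, gid_t gid)
  {
      pid_t pid = fork();
      if (pid != 0)
          return pid;                  /* parent, or -1 on error */

      /* child: drop supplementary groups and gid *before* uid,
       * otherwise setgroups()/setgid() would no longer be allowed */
      if (setgroups(0, NULL) != 0 ||
          setgid(gid) != 0 || setuid(uid) != 0) {
          perror("dropping privileges");
          _exit(1);
      }
      handle_request(client_fd);
      _exit(0);
  }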

>>> FastCGI is just a slightly more fancy way of doing this.
>> FastCGI is another thing that almost nobody can afford when hosting 
>> a significant number of web sites.
> Why not?

It adds complexity, especially when you're going to manage a
_large_ number (several k) of users per box. In such scenarios
you wanna be careful about system resources like sockets, fds, etc.

I'm not up to date on whether there is meanwhile an efficient
solution for fully on-demand startup (and auto-cleanup) of fcgi
slaves with arbitrary UIDs, or on how much overhead copying data
between processes (compared to socket passing) produces on modern
systems (back when I wrote muxmpm, it was still quite significant).
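To clarify what I mean by socket passing: the front end hands the
accepted client fd itself to the slave over a unix domain socket
(SCM_RIGHTS), so it never shuffles payload bytes at all. Roughly
like this (slave_sock is assumed to be a connected AF_UNIX socket;
error handling trimmed):

  #include <string.h>
  #include <sys/socket.h>
  #include <sys/uio.h>

  static int send_client_fd(int slave_sock, int client_fd)
  {
      char dummy = 'x';             /* must send at least 1 byte */
      struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
      union {                       /* properly aligned cmsg buffer */
          struct cmsghdr hdr;
          char buf[CMSG_SPACE(sizeof(int))];
      } u;
      struct msghdr msg = {
          .msg_iov = &iov, .msg_iovlen = 1,
          .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
      };

      struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
      cmsg->cmsg_level = SOL_SOCKET;
      cmsg->cmsg_type = SCM_RIGHTS; /* pass an open descriptor */
      cmsg->cmsg_len = CMSG_LEN(sizeof(int));
      memcpy(CMSG_DATA(cmsg), &client_fd, sizeof(int));

      return sendmsg(slave_sock, &msg, 0) == 1 ? 0 : -1;
  }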

OTOH, for high-volume scenarios, apache might not be the first choice.


cu
--
Enrico Weigelt,
metux IT consulting
+49-151-27565287

