
The Blog of Darryl E. Clarke

  Random musings from a jaded coder who just needs a hug.

Posts Tagged ‘nginx’

Allowing the Facebook Debugger through nginx’s auth_basic

Friday, March 29th, 2013

In my prior post, Allowing the Facebook Debugger through .htaccess, I showed how you could do just that. But, as time goes on, I spend more and more time with nginx and I need to adapt my rules.

So, today, I decided I should do the exact same thing with nginx. All of the dev sites I work on are generally password protected with a standard auth_basic setup. This is great: it keeps the robots out and prying eyes away. But it’s always an issue when you need to test sharing and other external scrapers. As it turns out, doing so with nginx is just as simple as it was with Apache.

My initial ‘location’ block was a simple configuration:

location  /  {
  auth_basic            "Restricted";
  auth_basic_user_file  htpasswd;

  if (!-e $request_filename) {
    rewrite ^(.+)$ /index.php last;
  }
}
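
(Aside: the htpasswd file referenced by auth_basic_user_file isn’t anything nginx-specific. I generate mine with the htpasswd tool from apache2-utils; the path and username below are just placeholders, and a relative path in the config resolves against nginx’s conf directory.)

htpasswd -c /etc/nginx/htpasswd someuser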

Allowing the Facebook debugger through the simple auth_basic setup was as easy as adding an if check and a secondary ‘location’ rule.

location  /  {
  error_page 418 = @allowed;

  if ($http_user_agent ~* facebookexternalhit) {
    # bypass httpauth.
    return 418;
  }

  auth_basic            "Restricted";
  auth_basic_user_file  htpasswd;

  if (!-e $request_filename) {
    rewrite ^(.+)$ /index.php last;
  }
}

location @allowed {
  if (!-e $request_filename) {
    rewrite ^(.+)$ /index.php last;
  }
}

The first thing added is the error_page rule, which tells nginx what I mean when I say ‘return 418’ (418 is the HTTP response code for “I’m a teapot”). The if block simply checks whether the request comes from a known Facebook agent, and the third block is a custom named location that strips out the authentication requirements.
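
A quick way to sanity-check the bypass (the URL here is a stand-in for one of my dev hosts) is to hit the site twice with curl, once as yourself and once pretending to be the Facebook scraper:

# without the user agent you should get a 401 Unauthorized
curl -I http://dev.example.com/

# spoofing the Facebook user agent should skip the password prompt
curl -I -A "facebookexternalhit/1.1" http://dev.example.com/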

The concept is fairly simple and can be applied to any other external scrapers you may need to let through.
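
For example, to let a few of the other usual suspects through as well, just widen the regex in the if block (the extra user agent substrings below are the ones I’d expect to see; check your access logs for the exact strings):

if ($http_user_agent ~* "(facebookexternalhit|twitterbot|linkedinbot)") {
  # bypass httpauth for known scrapers.
  return 418;
}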

Posted in Security

The Switch: Apache + Mod_PHP to Nginx + PHP-FPM

Thursday, December 22nd, 2011

File this under “another thing I should’ve done ages ago.”

I decided that I should explore the world of Nginx as a web server since many people have been telling me it’s good. And all I can say is holy shit, it’s good. The setup was simple and after a few idiotic mistakes on my part, it was up and running.
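
For the curious, the nginx side of things boils down to a server block along these lines. Treat it as a rough sketch rather than my exact config; the root, server_name and the PHP-FPM address are placeholders for whatever your install uses:

server {
  listen 80;
  server_name example.com;
  root /var/www/example.com/public;
  index index.php;

  location / {
    # send anything that isn't a real file to the front controller
    try_files $uri $uri/ /index.php;
  }

  location ~ \.php$ {
    include fastcgi_params;
    # php-fpm on the default TCP port; a unix socket works just as well
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
  }
}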

At first I was skeptical as to how fast it would be, and with my first couple of benchmarks nginx was definitely faster... but not by much. With just a simple PHP file on a very low-resource machine (Ubuntu 11.10, on a 256MB VM at Rackspace which I use for playing around), I used ‘ab’ to test 1000 requests with 10 concurrent:
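
For anyone who wants to reproduce it, the invocation was along these lines (the URL is a stand-in for the little test script on my box):

# 1000 requests, 10 at a time, against the plain PHP page
ab -n 1000 -c 10 http://localhost/test.php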

Nginx:
Concurrency Level:      10
Time taken for tests:   0.473 seconds
Complete requests:      1000
Total transferred:      191000 bytes
HTML transferred:       26000 bytes
Requests per second:    2112.79 [#/sec] (mean)
Time per request:       4.733 [ms] (mean)
Time per request:       0.473 [ms] (mean, across all concurrent requests)
Transfer rate:          394.09 [Kbytes/sec] received

Apache:
Concurrency Level:      10
Time taken for tests:   0.533 seconds
Complete requests:      1000
Total transferred:      245000 bytes
HTML transferred:       26000 bytes
Requests per second:    1877.53 [#/sec] (mean)
Time per request:       5.326 [ms] (mean)
Time per request:       0.533 [ms] (mean, across all concurrent requests)
Transfer rate:          449.21 [Kbytes/sec] received

As you can see from the initial benchmark, there’s not much difference, but it is noticeable. And if you throw even more at it, I’m pretty sure the gap will widen. One thing that stood out most to me is the extra amount of data that Apache sends, most likely extra response headers.

After I set up a Zend Framework application, I ran the benchmarks again. Same 10 concurrent, 1000 requests:

Nginx:
Concurrency Level:      10
Time taken for tests:   15.892 seconds
Complete requests:      1000
Total transferred:      3735000 bytes
HTML transferred:       3577000 bytes
Requests per second:    62.92 [#/sec] (mean)
Time per request:       158.922 [ms] (mean)
Time per request:       15.892 [ms] (mean, across all concurrent requests)
Transfer rate:          229.51 [Kbytes/sec] received

Apache:
Concurrency Level:      10
Time taken for tests:   17.724 seconds
Complete requests:      1000
Total transferred:      3791000 bytes
HTML transferred:       3577000 bytes
Requests per second:    56.42 [#/sec] (mean)
Time per request:       177.242 [ms] (mean)
Time per request:       17.724 [ms] (mean, across all concurrent requests)
Transfer rate:          208.88 [Kbytes/sec] received

Again, the difference is there. Nginx is clearly faster. It’s clearly winning. But I’m still just benchmarking with settings that I know Apache can handle on the low-resource box. And this, of course, is all about resources and using them effectively. So I pumped it up: time for ab -c 100 -n 10000, ten thousand requests with one hundred concurrent, and the results are amazing:

Nginx:
Concurrency Level:      100
Time taken for tests:   122.030 seconds
Complete requests:      10000
Total transferred:      37350000 bytes
HTML transferred:       35770000 bytes
Requests per second:    81.95 [#/sec] (mean)
Time per request:       1220.301 [ms] (mean)
Time per request:       12.203 [ms] (mean, across all concurrent requests)
Transfer rate:          298.90 [Kbytes/sec] received

Apache:
CRASHED after 485 requests.
apr_poll: The timeout specified has expired (70007)
Total of 485 requests completed
load average: 83.73, 30.80, 11.43

The server load under Apache went into a state of pure cluster-fuck. Apache could not contain itself with 100 concurrent connections on a box with such low resources, whereas Nginx handled it with EASE. Individual requests were slower at 100 concurrent connections, but nginx still averaged 81.95 requests per second (up from 62.92 at 10 concurrent), which is amazing compared to Apache simply crashing.

I’m sorry Apache+mod_php, you lose. Now it’s time to migrate all my stuff.

Posted in Linux, PHP, Ubuntu, Zend Framework