Allowing the Facebook Debugger Through .htaccess

Filed under: Coding — Darryl Clarke @ 3:24 pm

Here’s a short story: when I develop Facebook web apps, I do it under a password-protected development site. Facebook hates this. It complains that it can’t reach URLs, it can’t get metadata, it can’t do this, it can’t do that. The downside of not having a password is that anybody can hit the site. (Sandboxing is almost useless these days.)

So, the quick solution: allow Facebook to hit it, but only via its external metadata scraper.

A quick edit of my .htaccess rules (well, not so quick; the trick was somewhat obscure), and voilà! Facebook can debug and people still can’t hit it (easily):

# Flag requests whose user agent matches Facebook's scraper
SetEnvIf User-Agent ^facebookexternalhit.*$ Facebook=1

AuthType Basic
AuthName "Art & Science DEV Server"
AuthUserFile /home/dclarke/www/dev/.htpasswd
Require valid-user

# Let a request through if it passes auth OR carries the Facebook flag
Order allow,deny
Allow from env=Facebook
Satisfy Any

First, set an environment variable based on whether the user agent matches Facebook’s scraper. Then, allow access. The key here is the ‘Satisfy Any’ line, which means you can get in if you have a user and password, or if that environment flag is set. The downside is that now you all know you can just set your user agent to Facebook’s and get access to my dev sites. 😉
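In fact, here’s how easy the spoof would be from the command line (hypothetical URL, obviously not my real dev site):

# -A sets the User-Agent header; pretend to be Facebook's scraper
curl -A "facebookexternalhit/1.1" http://dev.example.com/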

Experiment: The Viewport Viewer

Filed under: Coding — Darryl Clarke @ 3:12 pm

As a little experiment while I redesign my own site, I decided to create a ‘Viewport Viewer’.

It’s a little tool that’ll load up a site in an iframe and let you resize the iframe to specific dimensions, like an iPhone or iPad in portrait mode. Go play with it and see.

It’s pretty straightforward: enter your URL, then toggle between sizes as you see fit. If your site is responsive, it should fit in the iframe without any issues.
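Under the hood there isn’t much more to it than something like this (a hypothetical sketch, not the tool’s actual code; the element id and helper names are made up):

// Load a URL into an iframe and snap the frame to a preset size
var frame = document.getElementById('viewport');

function loadSite(url) {
  frame.src = url;
}

function setSize(width, height) {
  frame.style.width = width + 'px';
  frame.style.height = height + 'px';
}

setSize(320, 480); // e.g. iPhone portrait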

It’s by no means meant to replace testing your sites on actual hardware, but it is quite handy during the initial phases of responsive web development.

There are some downsides to it, however. One that I can think of is that it does not trigger ‘min-device-width‘ or ‘max-device-width‘, or any other ‘device‘-related queries, because, well, it’s all in a browser and not on a device.
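To illustrate the difference (a generic sketch, not from any site in particular): a ‘width’ query measures the viewport, which the iframe controls, while a ‘device-width’ query measures the physical screen, which it can’t touch.

/* Responds when the iframe is resized to 480px or narrower */
@media screen and (max-width: 480px) {
  body { font-size: 90%; }
}

/* Measures the actual screen; resizing the iframe never triggers it */
@media screen and (max-device-width: 480px) {
  body { font-size: 90%; }
}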

Anyway, there’s the tool; hopefully it’s of some use to you. If not, oh well. It’s been useful to me.

There’s Something about FITC Toronto…

Filed under: Conferences — Darryl Clarke @ 8:41 pm

From April 23rd to April 25th, 2012, I attended various sessions at FITC Toronto. At the time, the amount of information I was consuming was a bit of an overload. Now that things have sunk in, I’d like to reflect on it.

HTML5 Gaming

One of my primary goals for the conference was to cozy up to the idea of HTML5 gaming. It’s definitely a big thing that you can’t ignore. The concept of being able to write a game and deploy it wherever an HTML5-capable browser exists is somewhat exciting.

First up on my list was “HTML5: My Life in the Trenches” with Grant Skinner. For the most part, Grant introduced the audience to a nifty set of tools called CreateJS, which I think has great potential to become a solid standard among HTML5 gaming libraries. As you can see from some of the featured demos on the CreateJS site, developers are already building a great array of games. They’re fast, well done, and thanks to the tools used, they maintain compatibility across platforms that support HTML5.
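To give a sense of what working with it looks like, here’s a minimal sketch along the lines of the EaselJS (the CreateJS drawing library) getting-started example; the canvas id is my own:

// Draw a circle on a <canvas id="demoCanvas"> using EaselJS
var stage = new createjs.Stage("demoCanvas");

var circle = new createjs.Shape();
circle.graphics.beginFill("DeepSkyBlue").drawCircle(0, 0, 50);
circle.x = circle.y = 100;

stage.addChild(circle);
stage.update(); // render the display list to the canvas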

Next up was “My Adventures in HTML5 Gaming” with Jesse Freeman. Jesse’s focus was oriented more towards the design of games and making what you want. When it comes to actually designing, you shouldn’t be afraid to share your ideas with others. You should document every little feature you want, even if it’s not realistic at the time. Also, don’t be afraid to just re-create something you like; it’s one of the best ways to learn. Jesse’s primary HTML5 library of choice is ImpactJS (he’s also selling a book about it), a full-featured canvas and game engine. Its $99 one-time, per-developer license fee might keep people away from it.

In both presentations it was made clear to me, albeit indirectly, that Adobe is definitely moving away from Flash as a final distribution format. They’re re-tooling and adding the ability to export Flash and related assets directly into HTML5 libraries and resources, which can then be easily leveraged by any awesome HTML5/JavaScript library.

Overall, both HTML5-related talks were pretty good. They definitely opened my eyes to newer tools and technologies, with some innovative ways to use them.

Aware, Physical, Wearable, and Interactive Things

Another one of my favourite things of late is the ability to have things that are aware. Whether it’s my phone being aware that I’m in the office or at home; or my couch being aware that I’m sitting on it; or my computer detecting that I turned on my Apple TV and asking me, “Hey, Darryl, do you want me to turn your TV to Input5?”, to which I say yes and it does it. But I digress. These potential things are just too awesome.

There were a few presentations I saw that felt right up this alley. “NFC: Thinking Creatively Beyond Mobile Payments” with Pearl Chen was one such eye-opener. The idea of using NFC to tag food storage containers and connect them to recipes in Evernote was one bit of awesomeness. Another was a ‘smart’ case for your phone, which can detect the orientation of your device via NFC and set its ringtone accordingly. It’s pretty damn cool.

“Hey Ellie, What are You?” with Matt Fisher was a pretty awesome display of how you can leverage open source projects to create your own. With his home-brew, always-on voice command recognition system a la HAL or Jarvis (Iron Man), the potential for home automation and home awesomeness is endless. This one presentation alone filled my melon with so many ideas. My Evernote cup runneth over.

“A Window to the Physical” with Peter Nitsch was all about going beyond digital, on-screen glass interfaces. His talk ventured into the realm of “the internet of things”, a world of connected and aware devices. He touched on a lot of greatness, like how affordable and accessible DIY physical hardware (Arduino, 3D printing, prototyping) has become. Literally, if you can think of something, you can likely make a prototype these days without much cost. The concepts presented really feed into my overall desire to make something using things that are aware and things that can interact with other things.

The most interactive presentation was Marvin. Marvin was just hanging out in the lobby the entire time; he wasn’t really presenting anything. I don’t think a lot of people realized what Marvin was: a giant bear-like creature that, for the most part, just had people sitting on it. Then people noticed the sign that said “Run and Jump on Marvin…” and the fun began. As it turns out, Marvin was loaded up with sensors and could detect a good pounce and hug. When that happened, the bigger the pounce, the bigger the “smile-works” show above him. You can see the videos below. It’s silly little cool things like this that make me love technology.

Experimentation

When it comes down to putting all these incredible technological things together, it takes time and process. And that’s where a talk titled “The rapid prototyping, creative incubating, lean startup.” by Hoss Gifford comes into play. He explained a lot about the process they use at OneMethod. Everything starts with an idea and gets a time limit: a two-month incubation time limit. If you cannot take your idea to a working prototype in that time, it should likely be back-burnered. The main reasoning is that everyone’s passion for a project is at a high during the early stages, and as time goes on that high can fall off drastically, so they jump on another new project with a fresh new high. That’s not to say all projects should be thrown out; but the amount of attention you give one should be directly related to the initial feedback and response to your prototype.

The prototype itself should be as lean as you can make it: the most minimal working product you can get to market that does what you want. Time to market is typically critical. In this digital age, if you sit on your idea and “make it perfect” over many months or even years, chances are the technology you’ve built it on, or the social buzz you were banking on, has moved on.

Overall, I found most of the sessions at FITC Toronto 2012 to be very well rounded. There were many other sessions I attended with mixed impressions and little takeaway. These ones, though, really hit home. Each of the sessions I’ve mentioned will play a little (or big?) part in how I operate my future initiatives. I’ve got a lot of crazy ideas. I can see the process. Now if only I could find the time.

And Finally, Some Useless Content 😉

It’s So Meta

Filed under: Randomness — Darryl Clarke @ 12:03 pm

It’s time for yet another meta post.

I’ve redesigned my blog. It’s here, it’s new. It’s tidy. It’s the first phase of change that you may or may not notice.

As it stands right now, I’ve not even looked at it in IE, so I doubt it works there. But it’s got an awesome IE Chrome Frame pusher (see the snippet after the list), so hopefully I never have to deal with IE again. Now that that’s done, here’s what it is…

  • HTML5
  • CSS3
  • See Humans.txt (in the footer) for more.
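That Chrome Frame pusher, for the curious, boils down to a single meta tag; this is the standard opt-in (the install prompt itself is a separate script, which I’m leaving out here):

<!-- Ask IE to render with Google Chrome Frame when the plugin is installed -->
<meta http-equiv="X-UA-Compatible" content="chrome=1">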

I’ve also simplified a lot of the WordPress feature overkill that existed. I’ve killed comments, pingbacks, and most user interaction because it’s mostly spam. If you want to say something to me, tweet it, Facebook it, Google Plus it.

I’ve gotten rid of most of the clutter that comes with sidebars and widgets and idiotic things that mean nothing. You know, all that shit that nobody ever uses. In the future, I’ll keep any experimental items on their own pages.

Some old posts might look like ass. Let me know and I’ll go back and fix ’em. But the handful I’ve looked at look pretty swell.

That is all.

See you on the Twitternets.

The Firefox Switch Back

Filed under: Randomness — Darryl Clarke @ 12:11 pm

Early in 2011 I switched from Firefox to Google Chrome. Just before the end of 2011 I switched back.

I switched back because I had ignored Firefox for so long that I was almost damned sure they’d have fixed a few issues I had with the 3.1-3.6 releases. And sure enough, they have.

In my nearly nine months of using Google Chrome, I missed a couple of Firefox features that Chrome just couldn’t match.

  1. Firefox’s Awesome Bar (aka: the address bar)
    It’s 100 times better than Google Chrome’s ‘search’ bar. The Awesome Bar lets me search bookmarks, open tabs, and history before searching the web. Chrome always wants you to search the web, and that’s hardly ever necessary.
  2. Firefox’s Tab Groups
    They’re just awesome. Tab overload has always been an issue for me: 20-30 or more tabs open. With tab groups you can sort them out and keep only certain working groups of tabs in view. Want to switch? Sure, hit the magic button and voilà: all your groups are exposed and you can easily switch between them.
  3. Firefox’s Bookmarks
    This might sound ridiculous, but I really like tagging my bookmarks without needing an extension. Tagged bookmarks really help out with #1, and, well, they make my life easier when I’m trying to find stuff.
  4. Firefox has gotten faster.
    Which browser is fastest is an endless battle, but really, Firefox 9.0.1 is way, way, way faster than previous versions. And depending on which benchmarks you choose to accept as good, I’m sure someone will say Firefox is faster than everything. But hey, that’s subjective.

And yeah, I’m sure there are Chrome extensions that enable these features, but quite frankly, I hate most extensions. There, I said it.

Here’s to an awesome 2012, Mozilla.

(p.s. I only remembered Firefox because Niv mentioned it.)

The Switch: Apache + Mod_PHP to Nginx + PHP-FPM

Filed under: Linux, PHP, Ubuntu, Zend Framework — Darryl Clarke @ 12:29 pm

File this under “another thing I should’ve done ages ago.”

I decided I should explore the world of Nginx as a web server, since many people have been telling me it’s good. And all I can say is: holy shit, it’s good. The setup was simple, and after a few idiotic mistakes on my part, it was up and running.
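For reference, the heart of the setup is an Nginx server block that hands PHP requests off to PHP-FPM over FastCGI. Here’s a minimal sketch; the domain and docroot are placeholders, and PHP-FPM can also be configured to listen on a Unix socket instead of TCP:

server {
    listen 80;
    server_name dev.example.com;      # placeholder
    root /var/www/example/public;     # placeholder
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # Pass PHP scripts off to the PHP-FPM pool
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;  # PHP-FPM's default TCP address
    }
}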

At first I was skeptical as to how fast it would be, and in my first couple of benchmarks nginx was definitely faster, but not by much. With just a simple PHP file on a very low-resource machine (Ubuntu 11.10 on a 256MB VM at Rackspace, which I use for playing around), I used ‘ab’ to test 1000 requests with 10 concurrent:
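The invocation looked something like this (placeholder URL):

# 1000 requests total, 10 concurrent
ab -c 10 -n 1000 http://dev.example.com/test.php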

Nginx:
Concurrency Level:      10
Time taken for tests:   0.473 seconds
Complete requests:      1000
Total transferred:      191000 bytes
HTML transferred:       26000 bytes
Requests per second:    2112.79 [#/sec] (mean)
Time per request:       4.733 [ms] (mean)
Time per request:       0.473 [ms] (mean, across all concurrent requests)
Transfer rate:          394.09 [Kbytes/sec] received

Apache:
Concurrency Level:      10
Time taken for tests:   0.533 seconds
Complete requests:      1000
Total transferred:      245000 bytes
HTML transferred:       26000 bytes
Requests per second:    1877.53 [#/sec] (mean)
Time per request:       5.326 [ms] (mean)
Time per request:       0.533 [ms] (mean, across all concurrent requests)
Transfer rate:          449.21 [Kbytes/sec] received

As you can see from the initial benchmark, there’s not much difference, but it is noticeable, and if you throw even more at it, I’m pretty sure the gap will widen. One thing that stood out most to me is the extra data Apache sends: the HTML transferred is identical, so the difference is all response headers.

After I set up a Zend Framework application, I ran the benchmarks again. Same 10 concurrent, 1000 requests:

Nginx:
Concurrency Level:      10
Time taken for tests:   15.892 seconds
Complete requests:      1000
Total transferred:      3735000 bytes
HTML transferred:       3577000 bytes
Requests per second:    62.92 [#/sec] (mean)
Time per request:       158.922 [ms] (mean)
Time per request:       15.892 [ms] (mean, across all concurrent requests)
Transfer rate:          229.51 [Kbytes/sec] received

Apache:
Concurrency Level:      10
Time taken for tests:   17.724 seconds
Complete requests:      1000
Total transferred:      3791000 bytes
HTML transferred:       3577000 bytes
Requests per second:    56.42 [#/sec] (mean)
Time per request:       177.242 [ms] (mean)
Time per request:       17.724 [ms] (mean, across all concurrent requests)
Transfer rate:          208.88 [Kbytes/sec] received

Again, the difference is there. Nginx is clearly faster; it’s clearly winning. But I was still benchmarking with settings I knew Apache could handle on the low-resource box, and this, of course, is all about resources and using them effectively. So I pumped it up. Time for ab -c 100 -n 10000: ten thousand requests with one hundred concurrent. The results are amazing:

Nginx:
Concurrency Level:      100
Time taken for tests:   122.030 seconds
Complete requests:      10000
Total transferred:      37350000 bytes
HTML transferred:       35770000 bytes
Requests per second:    81.95 [#/sec] (mean)
Time per request:       1220.301 [ms] (mean)
Time per request:       12.203 [ms] (mean, across all concurrent requests)
Transfer rate:          298.90 [Kbytes/sec] received

Apache:
CRASHED after 485 requests.
apr_poll: The timeout specified has expired (70007)
Total of 485 requests completed
load average: 83.73, 30.80, 11.43

The server load under Apache went into a state of pure cluster-fuck. Apache could not contain itself with 100 concurrent connections on a box with such low resources, whereas Nginx handled it with EASE. Individual requests got slower under that load (a mean of about 1.2 seconds each), but Nginx still pushed 81.95 requests per second, which is amazing compared to Apache simply crashing.

I’m sorry Apache+mod_php, you lose. Now it’s time to migrate all my stuff.
