Scott Lillibridge
Here is my stupid nginx patch, in case I lose it again. It lowercases any uppercase ASCII characters in the unparsed request URI before the request gets handled:

--- src/http/ngx_http_core_module.c	2009-10-26 10:40:07.000000000 -0700
+++ /Users/scott/source/nginx-0.7.63/src/http/ngx_http_core_module.c	2009-11-12 15:08:44.000000000 -0700
@@ -720,7 +720,13 @@
 ngx_http_handler(ngx_http_request_t *r)
 {
     ngx_http_core_main_conf_t  *cmcf;
-
+    uint i;
+    for (i = 0; i < r->unparsed_uri.len; i++) {
+        if (r->unparsed_uri.data[i] > 64 && r->unparsed_uri.data[i] < 91) {
+            r->unparsed_uri.data[i] = r->unparsed_uri.data[i] + 32;
+        }
+    }
+    
     r->connection->log->action = NULL;
 
     r->connection->unexpected_eof = 0;
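
A quick way to sanity-check it after rebuilding nginx with the patch (assuming the server is listening on localhost): a mixed-case request should now be handled the same as its all-lowercase form.

curl -i http://localhost/SomePath
curl -i http://localhost/somepath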
I've forked the Cassandra Ruby gem on GitHub (http://github.com/fauna/cassandra/tree/master) and updated it to work with release 0.3 and current git trunk. My repo is here (http://github.com/slillibri/cassandra/tree/master). There are a couple of open issues, but most tests pass.

Since I had spent some time with ActiveMQ, I thought I would check out RabbitMQ. RabbitMQ is an AMQP messaging server, but it also has XMPP and STOMP adaptors.

Getting the source built was easy, but I had to make sure my MacPorts directory was first in my PATH and run make with PYTHON=python2.5. After that, getting everything set up was easy-peasy: do a 'make run' and the server runs in the foreground.

I then whipped up a simple stock watcher and publisher in Ruby using the amqp gem. I would recommend getting the source from GitHub (http://github.com/tmm1/amqp/tree/master). Here is the code for the simple watcher and publisher. The publisher fetches Apple stock data from Yahoo! Finance every 30 seconds and publishes it to the 'stock queues' exchange.

require 'rubygems'
require 'amqp'
require 'mq'
require 'pp'
require 'net/http'
require 'uri'

class SimpleStockWatcher
  def run
    EM.run do
      connection = AMQP.connect(:host => 'haruhi.local', :logging => false)
      channel = MQ.new(connection)
      queue = MQ::Queue.new(channel, 'stocks')
      queue.bind('stock queues')
      queue.subscribe do |headers, msg|
        pp [:got, headers, msg]
      end
    end
  end
end

class SimpleStockPublisher
  def run
    EM.run do
      connection = AMQP.connect(:host => 'haruhi.local', :logging => false)
      channel = MQ.new(connection)
      exchange = MQ::Exchange.new(channel, :fanout, 'stock queues')
      EM.add_periodic_timer(30) do
        # Pull the CSV quote from Yahoo! Finance, requesting just the path and query
        uri = URI.parse('http://download.finance.yahoo.com/d/quotes.csv?s=AAPL&f=sl1d1t1c1ohgv&e=.csv')
        res = Net::HTTP.start(uri.host, uri.port) do |http|
          http.read_timeout = 30
          http.get("#{uri.path}?#{uri.query}")
        end
        msg = res.body.split(',')
        
        exchange.publish("#{msg[0]}:#{msg[1]}:#{msg[5]}")
      end
    end
  end
end
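
Since EM.run blocks, I run the two classes in separate processes (this assumes the amqp gem is installed and the broker host above is reachable); the file names are just how I happen to split them up:

# watcher.rb -- prints every message that arrives on the 'stocks' queue
SimpleStockWatcher.new.run

# publisher.rb -- run in another terminal
SimpleStockPublisher.new.run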

To start up the TokyoTyrant master (note: you have to compile TokyoTyrant with Lua support to use the table extension):

sudo ttserver -port 1978 -dmn -pid /var/ttserver/pid -ulog /usr/local/tokyotyrant/share/log/ -ulim 2048 -uas -rts /usr/local/tokyotyrant/timestamp /usr/local/tokyotyrant/share/data.tct

To start the slave:

sudo ttserver -port 1979 -dmn -pid /var/ttserver/slave-pid -mhost 127.0.0.1 -mport 1978 -rts /usr/local/tokyotyrant/timestamp /usr/local/tokyotyrant/share/data2.tct

And here is a small script to generate some random test data:

#!/usr/bin/env ruby

require 'rubygems'
require 'tokyotyrant'
include TokyoTyrant

langs = %w[en jp ru it]
countries = %w[usa japan russia italy]
skills = %w[ruby c blogging linux tokyo-cabinet c#]

limit = (ARGV[0] || 100).to_i
offset = (ARGV[1] || 0).to_i

rdb = RDBTBL.new
if !rdb.open('127.0.0.1', 1978)
  err = rdb.ecode
  puts("open error: #{rdb.errmsg(err)}")
  exit
end

limit.times do |x|
  puts "Generating record #{x + offset}"
  skill = []
  rand(skills.size).times do |y|
    puts "Adding skill #{y}"
    skill << skills[rand(skills.size)]
  end
  cols = {'name' => "User#{x + offset}", 'lang' => langs[rand(langs.size)],
          'country' => countries[rand(countries.size)],
          'skills' => skill.join(',')}
  puts "Cols => #{cols}"
  if !rdb.put(rdb.genuid, cols)
    err = rdb.ecode
    puts("Put error: #{rdb.errmsg(err)}")
    exit
  end
  puts "Put record #{x + offset}"
end
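
Once the records are in, the table extension can query them by column. Here is a minimal sketch of pulling back the English-language users (this assumes the master from above on port 1978 and the column names the script writes):

#!/usr/bin/env ruby

require 'rubygems'
require 'tokyotyrant'
include TokyoTyrant

rdb = RDBTBL.new
abort("open error: #{rdb.errmsg(rdb.ecode)}") unless rdb.open('127.0.0.1', 1978)

# Build a table query: every record whose 'lang' column equals 'en'
qry = RDBQRY.new(rdb)
qry.addcond('lang', RDBQRY::QCSTREQ, 'en')

# search returns the matching primary keys; fetch each record's columns
qry.search.each do |pkey|
  cols = rdb.get(pkey)
  puts "#{pkey}: #{cols['name']} (#{cols['skills']})"
end

rdb.close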

Current Music: #379: Return To The Scene Of The Crime - Chicago Public Radio

Doing some basic application testing with the new Phusion Passenger module for nginx and Apache. These tests were done on some Parallels virtual machines, one for the web/app server and one for the database. The setup on the web servers is identical, from a hardware standpoint. Anyway, here are some meaningless numbers :)

ApacheBench was used for the testing, 500 requests with a concurrency of 20.
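
The invocation was along these lines (the URL matches the document path in the output below):

ab -n 500 -c 20 http://www.things.fm/things/953125641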


Nginx/Passenger (no proxy, NFS for photos, initial hit to start the server)

Server Software:        nginx/0.6.36
Server Hostname:        www.things.fm
Server Port:            80

Document Path:          /things/953125641
Document Length:        5567 bytes

Concurrency Level:      20
Time taken for tests:   26.821 seconds
Complete requests:      500
Failed requests:        0
Write errors:           0
Total transferred:      3113499 bytes
HTML transferred:       2783500 bytes
Requests per second:    18.64 [#/sec] (mean)
Time per request:       1072.821 [ms] (mean)
Time per request:       53.641 [ms] (mean, across all concurrent requests)
Transfer rate:          113.37 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1    4   4.2      2      39
Processing:   170 1055 279.1    990    1616
Waiting:      169 1047 278.8    980    1615
Total:        178 1059 278.4    994    1621

Percentage of the requests served within a certain time (ms)
  50%    994
  66%   1244
  75%   1337
  80%   1369
  90%   1425
  95%   1478
  98%   1535
  99%   1570
 100%   1621 (longest request)


Nginx -> Passenger (6 prewarmed runners, NFS for photo storage)

Server Software:        nginx/0.6.34
Server Hostname:        www.things.fm
Server Port:            80

Document Path:          /things/953125641
Document Length:        5567 bytes

Concurrency Level:      20
Time taken for tests:   28.798 seconds
Complete requests:      500
Failed requests:        0
Write errors:           0
Total transferred:      3124497 bytes
HTML transferred:       2783500 bytes
Requests per second:    17.36 [#/sec] (mean)
Time per request:       1151.935 [ms] (mean)
Time per request:       57.597 [ms] (mean, across all concurrent requests)
Transfer rate:          105.95 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1    3   3.2      2      23
Processing:   140 1134 315.8   1084    1761
Waiting:      139 1114 314.4   1060    1716
Total:        149 1137 315.5   1085    1764

Percentage of the requests served within a certain time (ms)
  50%   1085
  66%   1363
  75%   1437
  80%   1475
  90%   1549
  95%   1598
  98%   1657
  99%   1699
 100%   1764 (longest request)

Apache/Passenger (NFS for photo storage)

Server Software:        Apache/2.2.11
Server Hostname:        www.things.fm
Server Port:            80

Document Path:          /things/953125641
Document Length:        5567 bytes

Concurrency Level:      20
Time taken for tests:   27.831 seconds
Complete requests:      500
Failed requests:        0
Write errors:           0
Total transferred:      3141500 bytes
HTML transferred:       2783500 bytes
Requests per second:    17.97 [#/sec] (mean)
Time per request:       1113.221 [ms] (mean)
Time per request:       55.661 [ms] (mean, across all concurrent requests)
Transfer rate:          110.23 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1    3   2.2      2      16
Processing:   179 1090 274.6   1028    1923
Waiting:      168 1062 274.4   1001    1906
Total:        185 1093 274.5   1034    1924

Percentage of the requests served within a certain time (ms)
  50%   1034
  66%   1242
  75%   1340
  80%   1376
  90%   1472
  95%   1531
  98%   1623
  99%   1666
 100%   1924 (longest request)


(The Apache setup was updated to use Ruby Enterprise Edition, as the nginx tests did.)

The more I work with nginx, the more I like it. Here is a little trick you can use that is especially helpful if you are using it as a reverse proxy. Add the following stanza to your server {} block:

	## Look to see if we are in maintenance mode
	if ( -f $document_root/shared/system/maintenance.html ) {
		rewrite ^(.*)$ /system/maintenance.html;
		break;
	}


And maintenance.html will be returned for all incoming requests to that server {}. Then all you need to do is delete the maintenance.html file to re-enable the website. This works great with the Capistrano deploy:web:disable and deploy:web:enable tasks. You can pass REASON and UNTIL variables to the disable task and the template file will include their values. When the maintenance is done, the enable task will automagically delete the maintenance.html file.
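
For example, with the stock Capistrano 2 tasks, a deploy with a maintenance window looks something like this:

cap deploy:web:disable REASON="a database migration" UNTIL="in about 15 minutes"
cap deploy
cap deploy:web:enable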

Current Music: Does Google Violate Its 'Don't Be Evil' Motto? - NPR

ActiveMessaging::Adaptors::Stomp doesn't null-terminate the messages, so make sure to do that yourself.
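
The workaround is just appending the terminator before publishing; here is a minimal sketch, assuming messages go out through ActiveMessaging's publish helper (the :orders destination name is only a placeholder):

# tack on the NULL byte the Stomp adaptor leaves off
publish :orders, message + "\0"
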
Also make sure to set RAILS_ENV when starting the pollers in your Capistrano deploy.rb. I added the following to mine:

namespace :poller do
  desc "Stop existing pollers"
  task :stop, :roles => :app do
    run "RAILS_ENV=production #{current_path}/script/poller stop"
  end
  
  desc "Start pollers"
  task :start, :roles => :app do
    run "RAILS_ENV=production #{current_path}/script/poller start"
  end
end

before :deploy, "poller:stop"
after :deploy, "poller:start"


When I get a chance I will write up my notes about getting ActiveMessaging and ActiveMQ to play nice and some of the cool things you can do with it.
I hate email. The amount of trust required by the RFCs is stupid, and there still isn't a solid verification system in place. Yet it seems like every job I'm qualified for lately has to do with email. If I were smarter, I might say this is my boulder.
I think the smart money is that Takahashi and Niigaki will graduate together and that Tanaka will be the next leader. I have a feeling that Kusumi will be the subleader.
Looks like will_paginate plays well with acts_as_solr if you just use a common params[:query] and pass it to the will_paginate call through :params.
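
Concretely, the view side is just this (assuming @results is already a paginated collection coming back from the acts_as_solr search):

# :params merges the query back into every generated page link
will_paginate @results, :params => { :query => params[:query] }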
