SponsorPay in Athens - EuRuKo 2013

written on

The annual European Ruby conference, EuRuKo, has a few great traditions. Not only is it organised entirely by the community, with no institution behind it, but every single year it also takes place in a different city, chosen by the attendees of the conference. Last year the participants of EuRuKo 2012, in quite a controversial vote, chose the Greek team, who promised to organise a great event in the heart of their country, in Athens.

Euruko banner

This year SponsorPay’s representation at EuRuKo wasn’t big; however, we were not only participants, but also had the chance to talk about the project we had been working on in the first months of 2013. Two SponsorPay developers - me (Grzegorz) and Simon - gave a presentation called “… but we had to kill unicorns”, which reached the #1 spot in the community voting. In our talk we presented how we managed to run a Ruby on Rails application in multi-threaded mode without switching from MRI to a different Ruby implementation. More about this topic is coming in the next post!

The conference itself was organised very well. I was really surprised, because during the team’s presentation in Amsterdam in 2012 I had been quite sceptical about how this group would handle organising such a big event. In the end, it was amazing! The conference took place in a huge venue called the Badminton Theater, which has over 2000 seats. EuRuKo is a much smaller event, so we filled only about 25% of the room’s capacity.

A lot of great presentations took place during this year’s conference. All of them are already available on Ustream, and in a few weeks they will also be available in HD quality.

The conference traditionally started with Matz’s keynote. It was my third EuRuKo, and this year’s keynote was the best one so far. Matz talked about being a language designer. He presented programming languages as a compromise between human languages and the bits understood by computers. He also tried to convince attendees to try writing their own languages.

Traditional "Friday hug"

A lot was said about Ruby internals, and a few presentations touched on this topic. One of them was given by Koichi Sasada (@_ko1), a member of Matz’s team working full-time on MRI. Koichi started his talk by presenting the whole team, and then talked about the GC that will be implemented in Ruby 2.1 and why it is so hard to implement a generational garbage collector in MRI. Chris Kelly and Dirkjan Bussink also talked about Ruby internals, the GC among them.

A few more presentations that I enjoyed were “Functional programming in Ruby” by Pat Shaughnessy, “Architecting your Rails app for success” by Ben Smith and “Functional reactive programming with Frappuccino” by Steve Klabnik. The last one especially caught my attention, as Steve is not only a great open source contributor but also an experienced speaker. His presentation was very motivating. Actually, reactive programming and the Frappuccino gem were just a backdrop for telling people that sometimes they should simply have fun programming: write irresponsible code, without a test suite, just for the fun of it - code that shouldn’t be used in production, but that is creative and can lead to other ideas. In his talk, Steve mentioned why the lucky stiff a few times. He told how _why had influenced his own open source contributions, and how _why’s spirit is still present in the Ruby community. Personally, I found this talk really, really inspiring. I think Steve is currently a great keeper of _why’s legacy, not only in terms of projects and code, but also in terms of that spirit and craziness.

Steve Klabnik's presentation

All the presentations are already available on the EuRuKo Ustream channel. I highly recommend watching at least the ones I’ve mentioned in this post, but I must say that almost all of them were really good and I enjoyed watching them.

It was a great pleasure to deliver a talk at such a great conference. Huge thanks go to the organisers. This group of Ruby enthusiasts made an incredible effort to make EuRuKo look so professional and well organised. They succeeded completely!

Next year’s EuRuKo will be organised in Kiev, the capital of Ukraine. I really hope to go there, as the organising team made a good impression. Maybe at SponsorPay we’ll try to prepare another presentation?

Making APIs Faster: HTTP Optimizations

written on

Some of our publishers use our API thousands of times per minute. A fast response from our API is critical to providing a good experience to the end user.

There are many factors that influence how fast a publisher can consume the SponsorPay API. In this article we will talk about HTTP persistent connections and HTTP compression. The findings of this article will most likely apply to other APIs.

HTTP Persistent Connections

Each time a client starts a new HTTP request, it needs to set up a new TCP connection to the remote server. Establishing this connection takes some time, since both client and server need to participate in the three-way handshake. You can find a detailed description of TCP connections on the O’Reilly page for the “High Performance Browser Networking” book.

HTTP persistent connections, also called HTTP keep-alive, introduce the idea of re-using a single TCP connection to send and receive multiple HTTP requests/responses, as opposed to opening a new connection for every single request/response pair. Wikipedia has an extensive explanation of this concept.
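For example, Ruby’s standard Net::HTTP library keeps the underlying TCP connection open for every request issued inside a Net::HTTP.start block. This is only a minimal sketch; the paths are placeholders:

require 'net/http'

# All requests issued inside the block share one TCP connection;
# it is opened once and closed when the block exits.
Net::HTTP.start("iframe.sponsorpay.com", 80) do |http|
  first  = http.get("/<one path>")
  second = http.get("/<another path>")
  puts first.code, second.code
end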

HTTP Compression

HTTP compression can also be used when accessing our API. API responses are usually highly compressible. If the HTTP client requests a compressed response, our servers will return a gzipped response.
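To check whether compression actually kicks in, you can send the Accept-Encoding header yourself and inspect the Content-Encoding header of the response. Here is a minimal sketch using Ruby’s standard Net::HTTP; note that when you set Accept-Encoding manually, Net::HTTP does not decompress the body for you, so this is only meant for inspecting the headers:

require 'net/http'

uri = URI("http://iframe.sponsorpay.com/<one path>")

Net::HTTP.start(uri.host, uri.port) do |http|
  request = Net::HTTP::Get.new(uri.request_uri)
  request['Accept-Encoding'] = 'gzip'   # ask for a compressed response
  response = http.request(request)
  puts response['Content-Encoding']     # prints "gzip" if the server compressed the body
end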

Persistence and Compression: Before and After

We tested the use of HTTP persistent connections and HTTP compression when requesting 500 different URLs from our API.

Keep Alive

Just by using persistent connections, the time taken to complete 500 requests to our API dropped to less than half the time it took without any of these optimizations.

When we then also used compression, the time taken to complete all the requests dropped again, to less than half the time it took with persistent connections alone.
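The exact numbers depend on your network and client, but a rough way to reproduce this kind of comparison with the curb gem and Ruby’s Benchmark module could look like the sketch below (the URL list is a placeholder, not our actual benchmark code):

require 'curb'
require 'benchmark'

urls = Array.new(500) { |i| "http://iframe.sponsorpay.com/<path #{i}>" }

# One new TCP connection per request, no compression
separate_connections = Benchmark.realtime do
  urls.each { |url| Curl::Easy.perform(url) }
end

# One shared connection with compressed responses
keep_alive_and_gzip = Benchmark.realtime do
  curl = Curl::Easy.new
  curl.encoding = "gzip, deflate"
  urls.each do |url|
    curl.url = url
    curl.perform
  end
end

puts "separate connections: #{separate_connections.round(2)}s"
puts "keep-alive + gzip:    #{keep_alive_and_gzip.round(2)}s"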

How you can do it

To start using HTTP persistent connections and HTTP compression, check the documentation of the library you are using to make HTTP requests to our API. In HTTP 1.1 all connections are considered persistent by default, so usually you just need to reuse the same request object instance to make the library reuse the same TCP connection for multiple HTTP requests. For example, you can make the Ruby curb library use persistent connections by using it like this:

curl = Curl::Easy.new

curl.url = "http://iframe.sponsorpay.com/<one path>"
curl.perform

curl.url = "http://iframe.sponsorpay.com/<another path>"
curl.perform

With PHP this can be done using the following code:

$r = new HttpRequest(HttpRequest::METH_GET);

$r->setUrl("http://iframe.sponsorpay.com/<one path>");
$r->send();

$r->setUrl("http://iframe.sponsorpay.com/<another path>");
$r->send();

If you are using Java with Apache HttpClient, you can reuse the same HttpClient instance and it will use persistent connections:

HttpClient httpclient = new DefaultHttpClient();
HttpGet httpget = new HttpGet("http://iframe.sponsorpay.com/<one path>");

ResponseHandler<String> responseHandler = new BasicResponseHandler();
String responseBody = httpclient.execute(httpget, responseHandler);
System.out.println(responseBody);

httpget = new HttpGet("http://iframe.sponsorpay.com/<another path>");
responseBody = httpclient.execute(httpget, responseHandler);
System.out.println(responseBody);

Using compression is also simple. Most HTTP libraries support a parameter to request compressed responses and will parse the response appropriately. Taking Ruby’s curb again as an example, enabling compression comes down to:

curl = Curl::Easy.new

curl.url = "http://iframe.sponsorpay.com/<one path>"
curl.encoding = "gzip, deflate"
curl.perform

To get a compressed response with PHP’s HttpRequest you can use this code:

$r = new HttpRequest(HttpRequest::METH_GET);
$r->setOptions(array("compress" => true));

$r->setUrl("http://iframe.sponsorpay.com/<one path>");
$r->send();

With HttpClient you can use a DecompressingHttpClient decorator to handle decompression in your application:

HttpClient httpclient = new DecompressingHttpClient(new DefaultHttpClient());
HttpGet httpget = new HttpGet("http://iframe.sponsorpay.com/<one path>");

ResponseHandler<String> responseHandler = new BasicResponseHandler();
String responseBody = httpclient.execute(httpget, responseHandler);
System.out.println(responseBody);

In conclusion

Enabling HTTP persistent connections and HTTP compression are easy, quick wins that will greatly improve your application’s performance when using the SponsorPay API.

Capybara, Poltergeist and CSV Downloads

written on

It’s not so obvious how to test a scenario where a user clicks an export link and gets a CSV file download, especially if the user can filter the generated CSV content before downloading it.

Capybara’s Poltergeist driver is a wrapper around the headless browser PhantomJS. A basic Capybara spec to test the download might look like this:

“Basic CSV download spec”
it 'outputs valid csv with status' do
  login candy_user
  visit candy_invoices_path

  select 'sweets', :from => 'Type'
  click_button 'Apply filter'
  click_link 'Export Invoices'

  page.text.should eql(<<-CSV)
Customer;Month;Invoice #;Amount;Difference;Status;Type
Name 0;September 2002;INVOICE0;-;-;new;sweets
  CSV
end

We select a filter from a select box, apply the filter and download the CSV. The problem is that Poltergeist starts a download when the export link is clicked, so page.text still contains the previous page rather than the CSV one might expect. There seems to be no clean solution for this.

An easy way out is to register a different MIME type for CSV files in the tests, so we can trick Poltergeist into rendering the CSV instead of downloading it:

“Overriding the CSV MIME type for tests”
before do
  Mime.send(:remove_const, :CSV)
  Mime::Type.register 'text/plain', :csv
end

after do
  Mime.send(:remove_const, :CSV)
  Mime::Type.register 'text/csv', :csv
end

And now our page.text contains the data we want.
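If more than one spec needs this trick, the MIME type swap can be extracted into an RSpec shared context, roughly like this (the context name is just an example):

shared_context 'csv rendered inline' do
  before do
    # Serve CSV as text/plain so Poltergeist renders it instead of downloading it
    Mime.send(:remove_const, :CSV)
    Mime::Type.register 'text/plain', :csv
  end

  after do
    # Restore the real CSV MIME type for the rest of the suite
    Mime.send(:remove_const, :CSV)
    Mime::Type.register 'text/csv', :csv
  end
end

A spec can then opt in with include_context 'csv rendered inline'.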