Tag archive: SMTP

Google, reinventing the Internet

First it was Wave, which as I see it is not an attempt to build a social network or to beat Facebook or Twitter at their own game (though that would probably end up happening anyway), but rather a way of reinventing communication on the Internet: the main "serious" messaging medium is e-mail, and anyone who has ever administered a mail server will agree with me that the SMTP protocol has problems, MANY problems. And then there's instant messaging, microblogging and the rest. To me, Wave is an attempt to reinvent and unify all these forms of communication into a single one, and on top of that to make it open, extensible and highly collaborative. BTW, does anyone have a spare Wave invitation? O:-) Thanks, Luismi!

And not content with reinventing e-mail-IM-microblogging, they are now toying with a replacement for HTTP: the protocol is called SPDY, and apparently they already have a server and a modified build of Chrome up and running, and in lab tests they have measured performance improvements of 50-60%. SPDY aims to solve latency problems and make better use of the connection through measures such as not resending the (equivalent of the) HTTP headers with every GET, reusing a single TCP connection to send/receive several requests in parallel instead of opening one TCP connection per element or requesting several elements in the same session but sequentially, and allowing the server to initiate connections to the client to push updated data (bye-bye, AJAX). Sounds good.
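For context, plain HTTP/1.1 keep-alive already lets a client reuse one TCP connection for several requests, just one after another; what SPDY adds is interleaving them in parallel and compressing the headers. A minimal sketch of the sequential reuse that SPDY generalizes, using Python's standard http.client (the hostname and paths are just placeholders):

```python
import http.client

# One TCP connection, several sequential requests (HTTP/1.1 keep-alive).
# SPDY goes further: these requests would travel as parallel, interleaved
# streams over this same single connection, with compressed headers.
conn = http.client.HTTPConnection("www.example.com")  # placeholder host
for path in ("/", "/style.css", "/logo.png"):         # example resources
    conn.request("GET", path)
    resp = conn.getresponse()
    body = resp.read()  # drain the body before reusing the connection
    print(path, resp.status, len(body), "bytes")
conn.close()
```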

Changing protocols as fundamental and universally used as these is hard. If you don't believe me, look at how many years we've been going through the motions with IPv6. How do you plan the migration? Do you build bridges between the old protocol and the new one so there's interoperability during the transition, or are the early adopters left isolated? There's more than one alternative to SMTP, but none has taken off. Still, I'm convinced Google is going to pull it off with Wave, and a (small) number of years from now SMTP will be a thing of the past. And I wouldn't be surprised if the same thing ended up happening with SPDY. If anyone has the resources (brilliant minds) and the position to pull off changes like these, it's Google.

What's next? IPvG?

A scalable mail cluster with free software

At my previous job I was responsible for the MTA of a group of companies, handling around 3000 e-mail accounts spread over 20 domains. This MTA received around 150,000 mails daily, and over 95% of them were discarded or marked because they were identified as SPAM or viruses (that was the figure as of last year; I don't know how it has evolved since I left). We used a homegrown cluster of seven servers, which enabled us to scale as needed. And it was all based on free software.

This is not a step-by-step installation guide with technical details and configuration files, but rather the story of the evolution of the service, the various problems we faced, how we solved them, and the design decisions behind each one.

Migration

The first incarnation of the server dates back to 2001, when we had to migrate the old server, which was starting to give us a lot of trouble, to more current software and hardware. I seem to remember it was a mail server from Netscape (!?) that stored the account information in an LDAP directory, but I can't recall the exact name or version of the product. The server we chose for the migration was qmail-ldap, mainly because of the good reviews we had read about its stability, reliability and security, its ease of setup (personally I still think qmail is much simpler than e.g. sendmail), and because it also used an LDAP directory. The latter may seem a silly reason, but in the end the migration had to be done in extremis, at a point when the original server wouldn't even boot most of the time, and we got away with it with a simple ldapsearch and a little script that "translated" the LDAP schema of one server into that of the other. Over time the choice of qmail-ldap proved to be the right one, because its modular design allowed us to progressively move from a single-server deployment to the cluster I referred to in the introduction.
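That translation script is long gone, but the idea was trivial: dump the old directory with ldapsearch, rename attributes to match the new schema, and feed the result to ldapadd. A minimal sketch of the same idea in Python; the attribute names in the mapping are hypothetical, not the real ones:

```python
#!/usr/bin/env python3
# Toy LDIF "translator": rename attributes from the old server's schema
# to the new one's. The attribute names below are made-up examples; the
# real mapping depended on both products' schemas.
import sys

ATTR_MAP = {
    "mailalternateaddress": "mailAlternateAddress",  # hypothetical
    "mailhost": "mailHost",                          # hypothetical
}

def translate(lines):
    for line in lines:
        line = line.rstrip("\n")
        if ":" in line and not line.startswith(" "):
            attr, _, value = line.partition(":")
            attr = ATTR_MAP.get(attr.lower(), attr)
            yield f"{attr}:{value}"
        else:
            yield line  # blank lines, comments, LDIF continuations

if __name__ == "__main__":
    # Typical use: ldapsearch -LLL ... | ./translate.py | ldapadd ...
    for out in translate(sys.stdin):
        print(out)
```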

This first server was a rack-mounted machine with redundant power supplies and hardware RAID5, so all the data was safe (or so we thought back then). We also rolled out qmail-scanner and the Kaspersky anti-virus (ClamAV didn't exist yet; we moved to it some years later). The same server hosted the SMTP, POP, IMAP and WebMail (SquirrelMail) services.

Active/Passive backup

We had to make the first architectural upgrade a couple of months after the migration: a RAID5 hiccup led to a corrupted filesystem which was quite difficult to fix. It became clear that the RAID discs and the redundant power supplies were not enough to ensure data integrity and service availability, so we installed another server exactly like the first one and synchronized the configuration and mailboxes using rsync and cron jobs. Switching from the primary to the backup server was manual back then, done with NAT at the router.
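A sketch of what one of those cron-driven sync jobs looked like conceptually; the hostname and paths below are made-up examples, not our real layout:

```python
#!/usr/bin/env python3
# Toy version of the cron-driven sync: push mailboxes and configuration
# to the passive node with rsync. Hostname and paths are examples.
# A crontab entry like "*/15 * * * * /usr/local/sbin/sync-to-backup.py"
# would run it every 15 minutes.
import subprocess
import sys

BACKUP = "backup.example.com"  # hypothetical passive node
TREES = ["/var/qmail/mailnames/", "/var/qmail/control/"]  # example paths

for tree in TREES:
    # -a preserves permissions/owners/times; --delete keeps the replica exact
    rc = subprocess.run(
        ["rsync", "-a", "--delete", tree, f"root@{BACKUP}:{tree}"]
    ).returncode
    if rc != 0:
        sys.exit(f"rsync of {tree} failed with exit code {rc}")
```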

Over time the server was upgraded to newer models several times, but we kept the active/passive backup structure. The synchronization between the two servers was also improved, with DRBD for the mailboxes and csync2 for the configuration, AV databases, and so on. Master/backup monitoring and service switchover were automated with heartbeat.

The SPAM flood: specialization by resources

Sometime around 2002-2003, viruses ceased being e-mail's biggest problem: the ever-increasing number of SPAM messages received every day was far worse. So we threw SpamAssassin into the mix. Over time this led to ever-growing CPU and memory consumption, slowing the server to a crawl. At first it seemed the only options were to migrate every year to a new, more powerful server (and what would we do with the old one then?), or to run multiple servers and spread the domains among them in an attempt to distribute the load.

Finally we realized that we had two different kinds of resource needs, with different growth patterns:

  • HD space for the mailboxes: the number of mailboxes in our system was fairly stable and the vast majority of our users downloaded their e-mail via POP, so HD scalability wasn't really that big a problem for us. We could easily afford to upgrade the disks every few years, moving the service to the backup server while we upgraded the master.
  • CPU for the filtering: SPAM was growing at an exponential rate; we basically needed to double the CPU power every year.

So, why not specialize our servers into storage servers and a filtering farm? We moved the SMTP service from the main servers to a front line of SMTP servers with the following characteristics:

  • they were off-the-shelf PCs with practically identical configurations (no variations apart from hostnames and IP addresses). We prepared a system image we could dump onto a new PC in a matter of minutes, in case one of the servers went down or we needed more raw CPU power because of an increase in SPAM.
  • we had a router load-balancing port 25 across all these servers.
  • all these SMTP servers were independent from the central ones, except for the final step of delivering the already-analyzed mail to its destination mailbox: each server had a local copy of the LDAP directory (synchronized with slurpd), a copy of all the configuration files, the AV databases and the SpamAssassin Bayesian database (synchronized with csync2), and a DNS resolver/cache (dnscache).
  • they kept local logs, but also sent them to a centralized syslog server for easier analysis.
  • they didn't store mails locally for later delivery; in other words, they had no delivery queue. E-mails were analyzed on the fly during the SMTP session, and if one of them met certain anti-SPAM/AV criteria (blacklisted IP, a number of RBL hits, certain keywords, etc.) it was immediately rejected with an SMTP error and the connection was closed. On the other hand, if the mail was let through (either legitimate, or merely marked as possible SPAM), it was relayed to the central server on the spot, and the filtering server never gave the OK to the origin MTA until the mailbox server had acknowledged the delivery. With qmail this is done quite simply, by replacing the qmail-queue binary with qmail-qmqpc. This let us guarantee that no mail would be lost if a filtering server crashed: the origin MTA wouldn't receive our OK and would re-try the delivery a couple of minutes later (see the sketch after this list).

Mailboxes, the POP and IMAP services, the LDAP master, webmail, and the remote queue remained on the central server; most of them could have been moved to independent servers as well, but we never needed to.

Specialization by type of client

The next problem we faced came about 2-3 years ago, when image- and PDF-based SPAM became popular: we added a SpamAssassin plugin which re-composed animated GIF images and ran OCR on all image attachments. This extra analysis greatly increased our CPU needs (we had to go from 2-3 filtering servers to 5 in a couple of days), and even so there were times when a server got overloaded for some 5-10 minutes and an e-mail could take no less than 2 minutes to be processed, delivered and SMTP-OK'd. When this happened and the sending party was another MTA it wasn't much of an issue, since in the event of a timeout or disconnection the remote server would re-try the delivery several times; however, if the sender was an end user with their MUA, a longer-than-usual delivery time or (God forbid) an error message from Outlook caused by a dropped connection led to a phone call to the IT team because "the mail wouldn't work." :-)
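Our plugin lived inside SpamAssassin (and thus in Perl), but the frame-recomposition-plus-OCR trick itself is simple: spammers spread the text across GIF frames so that no single frame OCRs to anything. A rough sketch of the idea in Python, assuming the third-party Pillow and pytesseract packages and frames that share the canvas size:

```python
# Toy version of the image-SPAM check: flatten an animated GIF's frames
# onto each other and OCR the result. Production used a SpamAssassin
# plugin instead; requires the third-party Pillow and pytesseract packages.
from PIL import Image, ImageSequence
import pytesseract

SPAM_WORDS = {"viagra", "pharmacy"}  # stand-in for the real keyword list

def gif_text(path: str) -> str:
    im = Image.open(path)
    base = im.convert("RGBA")  # first frame as the canvas
    for frame in ImageSequence.Iterator(im):
        # Composite each frame over the canvas to rebuild the full image
        # (assumes all frames have the same size as the canvas).
        base.alpha_composite(frame.convert("RGBA"))
    return pytesseract.image_to_string(base.convert("L"))

def looks_like_image_spam(path: str) -> bool:
    words = set(gif_text(path).lower().split())
    return bool(words & SPAM_WORDS)

if __name__ == "__main__":
    print(looks_like_image_spam("attachment.gif"))  # example filename
```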

The solution was splitting the SMTP and analysis farm in two: one for external mail and another for internal mail from our own users. The first farm was the one the domains' MX records pointed to, and had all the SPAM filtering options enabled; the second retained the hostname end users had configured as the SMTP server in their MUAs, had all the heavy-lifting filters disabled, and required SMTP authentication (it wouldn't accept unauthenticated sessions, even for local domains). This way, all external e-mail coming from remote MTAs went through the full set of filters, while our users talked to the privileged servers, with somewhat lighter filtering (but enough for internal mail) and great response times.
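Conceptually the two farms ran the same software with two knobs turned differently: the filter set and the authentication requirement. A sketch of that split in the same toy aiosmtpd terms as before; credentials, ports and filter stubs are all made up:

```python
# Two deployments of one handler: the MX-facing farm gets every filter
# and no AUTH; the users' farm gets light filters and mandatory AUTH.
# Requires the third-party aiosmtpd package; everything here is a stub.
from aiosmtpd.controller import Controller
from aiosmtpd.smtp import AuthResult, LoginPassword

def rbl_check(data): return False   # stand-ins for the real, expensive
def ocr_check(data): return False   # filters described above

HEAVY_FILTERS = [rbl_check, ocr_check]
LIGHT_FILTERS = [rbl_check]

class FarmHandler:
    def __init__(self, filters):
        self.filters = filters

    async def handle_DATA(self, server, session, envelope):
        if any(f(envelope.content) for f in self.filters):
            return "554 Message rejected as SPAM"
        return "250 OK"  # a real handler would relay to the central server

def authenticator(server, session, envelope, mechanism, auth_data):
    # Stand-in for the real LDAP credential check.
    ok = isinstance(auth_data, LoginPassword) and auth_data.password == b"secret"
    return AuthResult(success=ok)

# External farm: where the MX records point; all filters, no AUTH.
mx_farm = Controller(FarmHandler(HEAVY_FILTERS), hostname="0.0.0.0", port=25)

# Internal farm: the hostname users configure in their MUAs; light
# filtering, but unauthenticated sessions are refused outright.
submission_farm = Controller(FarmHandler(LIGHT_FILTERS), hostname="0.0.0.0",
                             port=587, authenticator=authenticator,
                             auth_required=True,
                             auth_require_tls=False)  # don't do this in production
```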

The big picture