Discussion:
greylisting on debian.org?
Wolfgang Lonien
2006-07-05 13:19:31 UTC
Permalink
Hi all,

this is maybe the wrong group for it (sorry in that case), but:

Do we use greylisting on the @debian.org domain and especially on
@lists.debian.org?

If not, then we should probably try it - for my private stuff, that
works just nice. The only things which still come through are spams
which were sent over debian.org and such, which obviously use real mail
servers.

Just my thoughts, when I read the recent posting from JB (Jeff) on the
planet.
--
cheers,
+-------------------------------------------------------------------+
| wjl aka Wolfgang Lonien | GPG key: 728D9BD0 |
|---------------------------------| Fingerprint: |
| Mail: | a923 2294 b7ed eb3e 2f18 |
| wolfgang - at - lonien.de | ae56 aab8 d36a 728d 9bd0 |
+-------------------------------------------------------------------+
martin f krafft
2006-07-05 14:45:52 UTC
Permalink
Post by Wolfgang Lonien
@lists.debian.org?
If not, then we should probably try it - for my private stuff, that
works just nice. The only things which still come through are spams
which were sent over debian.org and such, which obviously use real mail
servers.
This has been brought up. Basically I don't think people were
opposed to it, but there was noone available to implement it.

So if you really want it, log in to the hosts, copy the exim
configuration, implement greylisting, test it, then contact
debian-***@lists.d.o with patches.
--
Please do not send copies of list mail to me; I read the list!

.''`. martin f. krafft <***@debian.org>
: :' : proud Debian developer and author: http://debiansystem.info
`. `'`
`- Debian - when you have better things to do than fixing a system

i've not lost my mind. it's backed up on tape somewhere.
Pierre Habouzit
2006-07-05 15:33:26 UTC
Permalink
Post by martin f krafft
Post by Wolfgang Lonien
@lists.debian.org?
If not, then we should probably try it - for my private stuff, that
works just nice. The only things which still come through are spams
which were sent over debian.org and such, which obviously use real
mail servers.
This has been brought up. Basically I don't think people were
opposed to it, but there was noone available to implement it.
So if you really want it, log in to the hosts, copy the exim
configuration, implement greylisting, test it, then contact
the patches exists, and I already did that. the setup is in production
on alioth FWIW, thanks to raphael hertzog.

basically, on alioth the greylisting is a selective greylist: we only
use greylisting on hosts that are awkward (like listed on rbl's,
reverse IP do not resolve, ...).

greylist is inneficient if the remote host is a real smtp server, and
real smtp server likely :
- are not listed on rbl's
- uses a correct reverse dns
- ...

I had a couple of posts on the subject on my blog[1]. FWIW I also have
written a policy daemon, used with postgrey (or any other existant
greylister) called whitelister[2], in order to implement the same thing
on postfix. Configuration is pretty straightforward.

[1] http://blog.madism.org/index.php/2006/03/25/79-debianorg-and-spam
http://blog.madism.org/index.php/2006/03/28/80-debianorg-and-spam-2
     http://blog.madism.org/index.php/2006/04/03/81-debianorg-and-spam-3-alioth

[2] http://packages.qa.debian.org/w/whitelister.html
http://backports.org/package.php?search=whitelister
--
·O· Pierre Habouzit
··O ***@debian.org
OOO http://www.madism.org
martin f krafft
2006-07-05 16:02:50 UTC
Permalink
Post by Pierre Habouzit
the patches exists, and I already did that. the setup is in production
on alioth FWIW, thanks to raphael hertzog.
ah! have you submitted them to debian-admin?
Post by Pierre Habouzit
basically, on alioth the greylisting is a selective greylist: we only
use greylisting on hosts that are awkward (like listed on rbl's,
reverse IP do not resolve, ...).
greylist is inneficient if the remote host is a real smtp server, and
- are not listed on rbl's
- uses a correct reverse dns
- ...
FWIW, I do the same now, but I just use a regexp:

/(\-.+){4}$/ greylisting
/(\..+){4}$/ greylisting
/unknown/ greylisting

and these two:
http://sqlgrey.bouton.name/dyn_fqdn.regexp
http://sqlgrey.bouton.name/smtp_server.regexp

Now, about whitelister, would you consider backporting that to
sarge?
--
Please do not send copies of list mail to me; I read the list!

.''`. martin f. krafft <***@debian.org>
: :' : proud Debian developer and author: http://debiansystem.info
`. `'`
`- Debian - when you have better things to do than fixing a system

"we should have a volleyballocracy.
we elect a six-pack of presidents.
each one serves until they screw up,
at which point they rotate."
-- dennis miller
Loïc Minier
2006-07-05 16:24:41 UTC
Permalink
Post by martin f krafft
Now, about whitelister, would you consider backporting that to
sarge?
Isn't it already?

whitelister:
Installed: (none)
Candidate: (none)
Version Table:
0.8-2 0
-1 http://ftp.fr.debian.org unstable/main Packages
0.8-0bpo1 0
-1 http://ftp.de.debian.org sarge-backports/main Packages
--
Loïc Minier <***@dooz.org>
--
To UNSUBSCRIBE, email to debian-devel-***@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact ***@lists.debian.org
martin f krafft
2006-07-05 16:37:55 UTC
Permalink
Post by Loïc Minier
Isn't it already?
Mmmmmhhhhhh.... *something* here is broken then.

Sorry.
--
Please do not send copies of list mail to me; I read the list!

.''`. martin f. krafft <***@debian.org>
: :' : proud Debian developer and author: http://debiansystem.info
`. `'`
`- Debian - when you have better things to do than fixing a system

"common sense is the collection
of prejudices acquired by age eighteen"
-- albert einstein
Thomas Bushnell BSG
2006-07-09 03:57:05 UTC
Permalink
Post by martin f krafft
This has been brought up. Basically I don't think people were
opposed to it, but there was noone available to implement it.
There were people opposed to it, in fact.
Christian Perrier
2006-07-09 06:14:20 UTC
Permalink
Post by Thomas Bushnell BSG
Post by martin f krafft
This has been brought up. Basically I don't think people were
opposed to it, but there was noone available to implement it.
There were people opposed to it, in fact.
What were their arguments?
Marc Haber
2006-07-09 12:30:33 UTC
Permalink
On Sun, 9 Jul 2006 08:14:20 +0200, Christian Perrier
Post by Christian Perrier
Post by Thomas Bushnell BSG
Post by martin f krafft
This has been brought up. Basically I don't think people were
opposed to it, but there was noone available to implement it.
There were people opposed to it, in fact.
What were their arguments?
For example, that greylisting puts significant load on systems that
deliver mail to us, and that it is only a question of time before spam
zombies retry.

Greetings
Marc
--
-------------------------------------- !! No courtesy copies, please !! -----
Marc Haber | " Questions are the | Mailadresse im Header
Mannheim, Germany | Beginning of Wisdom " | http://www.zugschlus.de/
Nordisch by Nature | Lt. Worf, TNG "Rightful Heir" | Fon: *49 621 72739834
martin f krafft
2006-07-09 13:39:09 UTC
Permalink
Post by Marc Haber
For example, that greylisting puts significant load on systems
that deliver mail to us,
I am sorry, I don't buy this argument at all. First, a 4xx is not
"significant load" on any mailer unless you're running some piece of
crap. Sure, when you reach the thousands, even postfix could break
the occasional sweat, but which one server delivers thousands of
messages to continuously new from/rcpt combinations -- because
remember, greylisting caches.
Post by Marc Haber
and that it is only a question of time before spam zombies retry.
Yeah sure, which is why some of us wanted greylisting years ago, so
the question of time would have been longer regardless.
--
Please do not send copies of list mail to me; I read the list!

.''`. martin f. krafft <***@debian.org>
: :' : proud Debian developer and author: http://debiansystem.info
`. `'`
`- Debian - when you have better things to do than fixing a system

"man kann die menschen nur von ihren eigenen meinungen überzeugen."
-- charles tschopp
Martijn van Oosterhout
2006-07-09 13:48:50 UTC
Permalink
Post by martin f krafft
Post by Marc Haber
For example, that greylisting puts significant load on systems
that deliver mail to us,
I am sorry, I don't buy this argument at all. First, a 4xx is not
"significant load" on any mailer unless you're running some piece of
crap. Sure, when you reach the thousands, even postfix could break
the occasional sweat, but which one server delivers thousands of
messages to continuously new from/rcpt combinations -- because
remember, greylisting caches.
The point was about mailers sending mail to debian. If they receive a
4xx they have to queue the mail and retry later. It's cheap for
debian, but expensive for everyone else.

A far more reasonable solution is to only greylist mail with an
unreasonably high spamassassin score. Normal mail I assume generally
doesn't score high and is not susceptable to greylisting.

Not that I mind, the amount of spam received via this mailing list is
so marginal I can hardly imagine people worrying about it.

Have a nice day,
--
Martijn van Oosterhout <***@gmail.com> http://svana.org/kleptog/
martin f krafft
2006-07-09 14:14:05 UTC
Permalink
Post by Martijn van Oosterhout
The point was about mailers sending mail to debian. If they receive a
4xx they have to queue the mail and retry later. It's cheap for
debian, but expensive for everyone else.
My point was: even 100 such queued mails are not expensive nowadays
(unless your MTA is crap). If you have more than 100 queued mails
due to greylisting on debian.org, you are either a big provider and
can handle it, or a spammer.
Post by Martijn van Oosterhout
A far more reasonable solution is to only greylist mail with an
unreasonably high spamassassin score. Normal mail I assume generally
doesn't score high and is not susceptable to greylisting.
Sure. Or greylist only when it's from a dynIP address.
Post by Martijn van Oosterhout
Not that I mind, the amount of spam received via this mailing list is
so marginal I can hardly imagine people worrying about it.
Your email address doesn't appear to be plastered all over Debian
package control files, changelogs, the bug tracking system, and the
mailing lists. Or at least not as much as some others. I get
somewhere between 200-400 spam messages into my debian.org account
per day.
--
Please do not send copies of list mail to me; I read the list!

.''`. martin f. krafft <***@debian.org>
: :' : proud Debian developer and author: http://debiansystem.info
`. `'`
`- Debian - when you have better things to do than fixing a system

*** important disclaimer:
by sending an email to any address, that will eventually cause it to
end up in my inbox without much interaction, you are agreeing that:

- i am by definition, "the intended recipient"
- all information in the email is mine to do with as i see fit and
make such financial profit, political mileage, or good joke as it
lends itself to. in particular, i may quote it on usenet.
- i may take the contents as representing the views of your company.
- this overrides any disclaimer or statement of confidentiality that
may be included on your message.
Thijs Kinkhorst
2006-07-09 14:22:28 UTC
Permalink
Post by martin f krafft
Post by Martijn van Oosterhout
A far more reasonable solution is to only greylist mail with an
unreasonably high spamassassin score. Normal mail I assume generally
doesn't score high and is not susceptable to greylisting.
Sure. Or greylist only when it's from a dynIP address.
Indeed, the current Alioth config only greylists those hosts that have
some kind of 'problem', like no reverse DNS entry or are featured on
some kind of RBL.

Any decent mailserver is allowed right through. Any indecent mailserver
is told to wait just a little bit, but is still allowed to send its
mail.
Post by martin f krafft
and that it is only a question of time before spam zombies retry.
That's not really relevant: if we can block spam now, we should do it
now. Sure, we still need to be looking for new measures for when
greylisting stops to work, but that doesn't exclude using it now in any
way.


Thijs
martin f krafft
2006-07-09 14:30:09 UTC
Permalink
Post by Thijs Kinkhorst
Indeed, the current Alioth config only greylists those hosts that have
some kind of 'problem', like no reverse DNS entry or are featured on
some kind of RBL.
Any decent mailserver is allowed right through. Any indecent mailserver
is told to wait just a little bit, but is still allowed to send its
mail.
postgrey, for instance, whitelists hosts that have 5 successful
deliveries. In the presence of this option, you can just greylist
*everything*.
--
Please do not send copies of list mail to me; I read the list!

.''`. martin f. krafft <***@debian.org>
: :' : proud Debian developer and author: http://debiansystem.info
`. `'`
`- Debian - when you have better things to do than fixing a system

mumlutlitithtrhreeaadededd s siigngnatatuurere
Andreas Metzler
2006-07-09 14:19:58 UTC
Permalink
Martijn van Oosterhout <***@gmail.com> wrote:
[...]
Post by Martijn van Oosterhout
The point was about mailers sending mail to debian. If they receive a
4xx they have to queue the mail and retry later. It's cheap for
debian, but expensive for everyone else.
A far more reasonable solution is to only greylist mail with an
unreasonably high spamassassin score. Normal mail I assume generally
doesn't score high and is not susceptable to greylisting.
Greylisting after DATA sounds like a bad idea to me:

1. The bandwith has already been wasted.
2. The bandwith will be wasted again if the host retries
3. spamassassin is a performance hog, and you'll need to rerun it when
the host retries.

*If* you want to be picky about greylisting use something *cheap*,
e.g.
- greylist only hosts listed on a DNS blacklist.
- Don't greylist on host/sender/receipient triples but check
network/sender/receipient. And possibly combine this with *not*
greylisting _any_ sender/receipient tuple iff $host already passed
greylisting for another sender/receipient tuple. (We already know
the host to do proper retries, no use in greylisting again.)
Post by Martijn van Oosterhout
Not that I mind, the amount of spam received via this mailing list is
so marginal I can hardly imagine people worrying about it.
We are not (only) talking about lists.d.o. primarly but the
***@debian.org addresses. /These/ gather loads of spam.

cu andreas
--
The 'Galactic Cleaning' policy undertaken by Emperor Zhark is a personal
vision of the emperor's, and its inclusion in this work does not constitute
tacit approval by the author or the publisher for any such projects,
howsoever undertaken. (c) Jasper Ffforde
Adrian von Bidder
2006-07-10 15:34:42 UTC
Permalink
On Sunday 09 July 2006 15:48, Martijn van Oosterhout wrote:
[greylisting]
Post by Martijn van Oosterhout
The point was about mailers sending mail to debian. If they receive a
4xx they have to queue the mail and retry later. It's cheap for
debian, but expensive for everyone else.
Does anybody have sensible numbers about that?

On my relatively small server, I usually have between 0 and 40 messages in
the deferred queue. Of those, up to 1 or 2 are due to greylisting. All
others are because recipients have crap mailservers or nameservers.

As madduck said: either you are small, so your mailserver isn't loaded
anyway, or you're big, so the additional load from greylisting isn't
noticeable, or you're a spammer.

Hmm. Discussing mail problems on irc while answering mailing list mail in a
mail setup related mail thread mail confuses me mail. can't mail stop mail.

cheers
-- mail
--
Perl: The Swiss Army Chainsaw
Christian Perrier
2006-07-09 15:13:02 UTC
Permalink
Post by Marc Haber
For example, that greylisting puts significant load on systems that
deliver mail to us, and that it is only a question of time before spam
zombies retry.
Yep, I know about these arguments but Pierre Habouzit bringed an
interesting enhancement to greylisting by greylisting only systems
that are in some carefully chosen blacklists.

This is what is currently operational on lists.alioth.d.o

I see this as an interesting combination of RBL (which I dislike A LOT
when used alone) and greylisting. It reduced the amount of spam in
Alioth mailing list significantly.
Pierre Habouzit
2006-07-09 22:21:15 UTC
Permalink
Post by Marc Haber
On Sun, 9 Jul 2006 08:14:20 +0200, Christian Perrier
Post by Christian Perrier
Post by Thomas Bushnell BSG
Post by martin f krafft
This has been brought up. Basically I don't think people were
opposed to it, but there was noone available to implement it.
There were people opposed to it, in fact.
What were their arguments?
For example, that greylisting puts significant load on systems that
deliver mail to us, and that it is only a question of time before
spam zombies retry.
hence a good way to achieve that, is to apply greylisting on hosts that
do not seem to be a valid SMTP server. good hints are:
* beeing listed in some RBL's (like 'dynamic IPs' rbls),
* not having a valid reverse DNS,
* using very curious EHLO/HELO,
* ...

all those checks are really cheap, and almost never makes the thing
greylist really big and well known SMTP's, since it's useless to
greylist SMTP's anyway, it only makes them unhappy (which is your
point).

as said a couple of times in that thread, such a policy is already in
place on alioth with quite a good result IMHO.
--
·O· Pierre Habouzit
··O ***@debian.org
OOO http://www.madism.org
martin f krafft
2006-07-09 13:39:20 UTC
Permalink
Post by Thomas Bushnell BSG
There were people opposed to it, in fact.
Sure, nobody expected it to be any different. This is Debian, after
all. :)

There will always be opposers. If we let our work be hindered by
them, we're going to stagnate.

Anyway, I'll be interested to hear a summary of their arguments, as
Christian Perrier requested. I find it hard to imagine how properly
configured greylisting should cause any problems.
--
Please do not send copies of list mail to me; I read the list!

.''`. martin f. krafft <***@debian.org>
: :' : proud Debian developer and author: http://debiansystem.info
`. `'`
`- Debian - when you have better things to do than fixing a system

no micro$oft components were used
in the creation or posting of this email.
therefore, it is 100% virus free
and does not use html by default (yuck!).
Thomas Bushnell BSG
2006-07-10 00:03:02 UTC
Permalink
Post by martin f krafft
Anyway, I'll be interested to hear a summary of their arguments, as
Christian Perrier requested. I find it hard to imagine how properly
configured greylisting should cause any problems.
It's a violation of the standard. It is especially problematic,
because it is a violation against the spirit of being liberal in what
you accept, and conservative in what you require.

It assumes, for example, that the remote MTA will use the same IP
address each time it sends the message. If the remote MTA is a big
server farm, with a lot of different hosts that could be processing
the mail, what is your strategy for preventing essentially infinite
delay?

So far, all I have seen in response to this particular problem is to
say that "properly configured" includes an exactly accurate hardcoded
list of all such sites on the internet.

Another problem is with hosts that do not accept a message from an MTA
unless that MTA is willing to accept replies. This is a common spam
prevention measure. The graylisting host cannot then send mail to
such sites until they've been whitelisted, because when they try the
reverse connection out, it always gets a 4xx error. I've been bitten
by this one before.

Thomas
Matthew R. Dempsky
2006-07-10 00:17:10 UTC
Permalink
Post by Thomas Bushnell BSG
Another problem is with hosts that do not accept a message from an MTA
unless that MTA is willing to accept replies. This is a common spam
prevention measure.
It also prevents mail from setups that use different servers for inbound
and outbound mail.
Thomas Bushnell BSG
2006-07-10 01:30:57 UTC
Permalink
Post by Matthew R. Dempsky
Post by Thomas Bushnell BSG
Another problem is with hosts that do not accept a message from an MTA
unless that MTA is willing to accept replies. This is a common spam
prevention measure.
It also prevents mail from setups that use different servers for inbound
and outbound mail.
Yes that's right. This is what happens when people start breaking
protocols in attempts to defeat spam. This is why I'm against
graylisting.

Thomas
Pierre Habouzit
2006-07-10 05:39:42 UTC
Permalink
Post by Matthew R. Dempsky
Post by Thomas Bushnell BSG
Another problem is with hosts that do not accept a message from an
MTA unless that MTA is willing to accept replies. This is a common
spam prevention measure.
It also prevents mail from setups that use different servers for
inbound and outbound mail.
which is highly unlikely if you never greylist hosts that are not listed
in rbl's.

so your reproach is completely irelevant to the suggestion.
--
·O· Pierre Habouzit
··O ***@debian.org
OOO http://www.madism.org
Matthew R. Dempsky
2006-07-10 14:57:46 UTC
Permalink
Post by Pierre Habouzit
Post by Matthew R. Dempsky
Post by Thomas Bushnell BSG
Another problem is with hosts that do not accept a message from an
MTA unless that MTA is willing to accept replies. This is a common
spam prevention measure.
It also prevents mail from setups that use different servers for
inbound and outbound mail.
which is highly unlikely if you never greylist hosts that are not listed
in rbl's.
This has nothing to do with greylisting. ``It'' above refers to ``Not
accepting messages from an MTA unless that MTA is willing to accept
replies'', not ``graylisting''.
--
To UNSUBSCRIBE, email to debian-devel-***@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact ***@lists.debian.org
Adrian von Bidder
2006-07-10 15:57:45 UTC
Permalink
Post by Matthew R. Dempsky
Post by Thomas Bushnell BSG
Another problem is with hosts that do not accept a message from an MTA
unless that MTA is willing to accept replies. This is a common spam
prevention measure.
It also prevents mail from setups that use different servers for inbound
and outbound mail.
Hmm. I've not seen this kind of sender verification. As I know it, the
receiving MX connects the regular MX for the sender address to see if
*that* is ready to receive mail. Works beautifully if outbound != inbound.

While very effective, this is admittedly the kind of spam prevention measure
which puts some load on the systems on both ends.

cheers
-- vbi
--
featured product: the KDE desktop - http://kde.org
Blu Corater
2006-07-10 16:22:48 UTC
Permalink
Post by Adrian von Bidder
Post by Matthew R. Dempsky
Post by Thomas Bushnell BSG
Another problem is with hosts that do not accept a message from an MTA
unless that MTA is willing to accept replies. This is a common spam
prevention measure.
It also prevents mail from setups that use different servers for inbound
and outbound mail.
Hmm. I've not seen this kind of sender verification. As I know it, the
receiving MX connects the regular MX for the sender address to see if
*that* is ready to receive mail. Works beautifully if outbound != inbound.
While very effective, this is admittedly the kind of spam prevention measure
which puts some load on the systems on both ends.
Actually, I don't see it as spam prevention. It is a mean to lock onself
out of broken|fascist mail servers and let their users know that it is their
server blocking legitimate email and not my users ignoring them. There is no
point in accepting a message that cannot be answered (or bounced). The
spam prevention is only a nice side effect.
--
Blu.
Henrique de Moraes Holschuh
2006-07-10 16:55:45 UTC
Permalink
Post by Adrian von Bidder
Post by Matthew R. Dempsky
Post by Thomas Bushnell BSG
Another problem is with hosts that do not accept a message from an MTA
unless that MTA is willing to accept replies. This is a common spam
prevention measure.
It also prevents mail from setups that use different servers for inbound
and outbound mail.
Hmm. I've not seen this kind of sender verification. As I know it, the
receiving MX connects the regular MX for the sender address to see if
*that* is ready to receive mail. Works beautifully if outbound != inbound.
And sets the envolope sender to what in the probe?
--
"One disk to rule them all, One disk to find them. One disk to bring
them all and in the darkness grind them. In the Land of Redmond
where the shadows lie." -- The Silicon Valley Tarot
Henrique Holschuh
Stephen Gran
2006-07-10 17:28:40 UTC
Permalink
Post by Henrique de Moraes Holschuh
Post by Adrian von Bidder
Post by Matthew R. Dempsky
Post by Thomas Bushnell BSG
Another problem is with hosts that do not accept a message from an MTA
unless that MTA is willing to accept replies. This is a common spam
prevention measure.
It also prevents mail from setups that use different servers for inbound
and outbound mail.
Hmm. I've not seen this kind of sender verification. As I know it, the
receiving MX connects the regular MX for the sender address to see if
*that* is ready to receive mail. Works beautifully if outbound != inbound.
And sets the envolope sender to what in the probe?
<>, hopefully. Anything else is silly.
--
-----------------------------------------------------------------
| ,''`. Stephen Gran |
| : :' : ***@debian.org |
| `. `' Debian user, admin, and developer |
| `- http://www.debian.org |
-----------------------------------------------------------------
Lionel Elie Mamane
2006-07-11 06:56:52 UTC
Permalink
Post by Stephen Gran
Post by Henrique de Moraes Holschuh
As I know it, the receiving MX connects the regular MX for the
sender address to see if *that* is ready to receive mail. Works
beautifully if outbound != inbound.
And sets the envolope sender to what in the probe?
<>, hopefully. Anything else is silly.
Yes and no. An increasing number of sites refuse "bounces" (that is
messages with null return-path) to some addresses that are known never
to send mail. This breaks the procedure and is reacted by other sites
by using a fixed "just-probing-***@their_domain" address, and mail to
that address doesn't incur that check (but ends up in /dev/null or
gets refused at DATA time).
--
Lionel
Stephen Gran
2006-07-11 10:30:24 UTC
Permalink
Post by Lionel Elie Mamane
Post by Stephen Gran
Post by Henrique de Moraes Holschuh
As I know it, the receiving MX connects the regular MX for the
sender address to see if *that* is ready to receive mail. Works
beautifully if outbound != inbound.
And sets the envolope sender to what in the probe?
<>, hopefully. Anything else is silly.
Yes and no. An increasing number of sites refuse "bounces" (that is
messages with null return-path) to some addresses that are known never
to send mail. This breaks the procedure and is reacted by other sites
that address doesn't incur that check (but ends up in /dev/null or
gets refused at DATA time).
When I find sites that are broken, I report them to dsn.rfc-ignorant.com,
and then others can use that as a whitelist or blacklist as they choose
for deciding what to do with callouts and the like. I do refuse the
null sender to various role accounts that never send mail, but I don't
do it at RCPT TO time, only after the remote end has sent DATA. This
allows callouts to work, but still blocks bounces resulting from joe
jobs and the like.

Take care,
--
-----------------------------------------------------------------
| ,''`. Stephen Gran |
| : :' : ***@debian.org |
| `. `' Debian user, admin, and developer |
| `- http://www.debian.org |
-----------------------------------------------------------------
Hamish Moffatt
2006-07-11 10:48:43 UTC
Permalink
Post by Lionel Elie Mamane
Post by Stephen Gran
Post by Henrique de Moraes Holschuh
As I know it, the receiving MX connects the regular MX for the
sender address to see if *that* is ready to receive mail. Works
beautifully if outbound != inbound.
And sets the envolope sender to what in the probe?
<>, hopefully. Anything else is silly.
Yes and no. An increasing number of sites refuse "bounces" (that is
messages with null return-path) to some addresses that are known never
to send mail. This breaks the procedure and is reacted by other sites
that address doesn't incur that check (but ends up in /dev/null or
gets refused at DATA time).
That doesn't add up. Since <> never sends mail, there will never be a
sender verification callback TO that address either. What's the problem?

Likewise if you refuse bounces to other inbound-only addresses, you
should never get inbound probes. That is kind of the point.

Hamish
--
Hamish Moffatt VK3SB <***@debian.org> <***@cloud.net.au>
Lionel Elie Mamane
2006-07-11 11:28:47 UTC
Permalink
Post by Hamish Moffatt
Post by Lionel Elie Mamane
Post by Stephen Gran
Post by Henrique de Moraes Holschuh
As I know it, the receiving MX connects the regular MX for the
sender address to see if *that* is ready to receive mail. Works
beautifully if outbound != inbound.
And sets the envolope sender to what in the probe?
<>, hopefully. Anything else is silly.
Yes and no. An increasing number of sites refuse "bounces" (that is
messages with null return-path) to some addresses that are known
never to send mail. This breaks the procedure and is reacted by
address, and mail to that address doesn't incur that check (but
ends up in /dev/null or gets refused at DATA time).
That doesn't add up. Since <> never sends mail, there will never be
a sender verification callback TO that address either.
Some sites verify the ***@example.org and
***@example.org addresses before accepting mail with sender
***@example.org .
--
Lionel
Török Edvin
2006-07-11 11:37:27 UTC
Permalink
IIRC sourceforge verifies for the existence of a postmaster@ address
Stephen Gran
2006-07-11 11:49:31 UTC
Permalink
That's a lot of overhead when there's a postmaster.rfc-ignorant.org rbl.
--
-----------------------------------------------------------------
| ,''`. Stephen Gran |
| : :' : ***@debian.org |
| `. `' Debian user, admin, and developer |
| `- http://www.debian.org |
-----------------------------------------------------------------
Hamish Moffatt
2006-07-11 22:52:52 UTC
Permalink
Post by Hamish Moffatt
That doesn't add up. Since <> never sends mail, there will never be
a sender verification callback TO that address either.
That seems reasonable. If you're sending as ***@example.org
you should accept callbacks for it, and likewise you're required to
accept mail for ***@example.org.

Hamish
--
Hamish Moffatt VK3SB <***@debian.org> <***@cloud.net.au>
Adam Borowski
2006-07-11 00:39:23 UTC
Permalink
Post by Adrian von Bidder
Post by Matthew R. Dempsky
Post by Thomas Bushnell BSG
Another problem is with hosts that do not accept a message from an MTA
unless that MTA is willing to accept replies. This is a common spam
prevention measure.
It also prevents mail from setups that use different servers for inbound
and outbound mail.
Hmm. I've not seen this kind of sender verification. As I know it, the
receiving MX connects the regular MX for the sender address to see if
*that* is ready to receive mail. Works beautifully if outbound != inbound.
In fact, broken servers which don't obey MX will _already_ fail:

debian.org A 192.25.206.10
debian.org MX master.debian.org
master.debian.org A 70.103.162.30

[~]$ telnet 192.25.206.10 25
Trying 192.25.206.10...
Connected to 192.25.206.10.
Escape character is '^]'.
220 gluck.debian.org ESMTP Exim 4.50 Mon, 10 Jul 2006 18:06:29 -0600
helo utumno.angband.pl
250 gluck.debian.org Hello acrc58.neoplus.adsl.tpnet.pl [83.11.4.58]
mail from: ***@utumno.angband.pl
250 OK
rcpt to: ***@debian.org
550 relay not permitted


MX records have been with us for 20 years, so I don't think a
legitimate mailer can ever disobey one. Of course, illegitimate
mailers often do.
--
1KB // Microsoft corollary to Hanlon's razor:
// Never attribute to stupidity what can be
// adequately explained by malice.
Matthew R. Dempsky
2006-07-11 00:56:31 UTC
Permalink
The issue isn't whether MTAs check against MX or A records; it's
whether they check the IP that connected to them or check MX records.
Unless Debian's MTAs are set up to relay outbound mail via the
debian.org IP, I don't see how this is relevant.

(I make no claim about which of the above setups is actually employed---I
simply pointed out that the connect-to-sender's-IP scheme TB mentioned is
broken on its own, independent of greylisting.)
Henrique de Moraes Holschuh
2006-07-10 03:34:10 UTC
Permalink
Post by Thomas Bushnell BSG
It assumes, for example, that the remote MTA will use the same IP
address each time it sends the message. If the remote MTA is a big
The earlier *implementations* of greylisting did that, true. They were
simple-minded at best.
Post by Thomas Bushnell BSG
server farm, with a lot of different hosts that could be processing
the mail, what is your strategy for preventing essentially infinite
delay?
You can, for example, use dynamic IP supersets to do the greylisting
"triplet" match. Now the problem is a matter of creating the supersets in a
way to not break incoming email from outgoing-SMTP clusters.

You can also only graylist sites which match a set of conditions that flag
them as suspicious. Depending on what conditions you set, you do not have
the risk of blocking any server farms we would want to talk SMTP to.
Post by Thomas Bushnell BSG
So far, all I have seen in response to this particular problem is to
say that "properly configured" includes an exactly accurate hardcoded
list of all such sites on the internet.
Then you are hearing differently now.
Post by Thomas Bushnell BSG
Another problem is with hosts that do not accept a message from an MTA
unless that MTA is willing to accept replies. This is a common spam
prevention measure. The graylisting host cannot then send mail to
such sites until they've been whitelisted, because when they try the
reverse connection out, it always gets a 4xx error. I've been bitten
Why will the host implementing incoming graylisting *always* get a 4xx error
on his outgoing message? I am curious.
--
"One disk to rule them all, One disk to find them. One disk to bring
them all and in the darkness grind them. In the Land of Redmond
where the shadows lie." -- The Silicon Valley Tarot
Henrique Holschuh
Thomas Bushnell BSG
2006-07-10 03:43:54 UTC
Permalink
Post by Henrique de Moraes Holschuh
You can, for example, use dynamic IP supersets to do the greylisting
"triplet" match. Now the problem is a matter of creating the supersets in a
way to not break incoming email from outgoing-SMTP clusters.
Is there a way of doing this which doesn't require you to know in
advance the setup of remote networks and such? Does it scale?
Post by Henrique de Moraes Holschuh
You can also only graylist sites which match a set of conditions that flag
them as suspicious. Depending on what conditions you set, you do not have
the risk of blocking any server farms we would want to talk SMTP to.
You don't have the risk? Are you saying that there is exactly *zero*
risk? Please, if you don't mean that, be more precise.
Post by Henrique de Moraes Holschuh
Post by Thomas Bushnell BSG
So far, all I have seen in response to this particular problem is to
say that "properly configured" includes an exactly accurate hardcoded
list of all such sites on the internet.
Then you are hearing differently now.
What are the "dynamic IP supersets" you speak of, then?
Post by Henrique de Moraes Holschuh
Post by Thomas Bushnell BSG
Another problem is with hosts that do not accept a message from an MTA
unless that MTA is willing to accept replies. This is a common spam
prevention measure. The graylisting host cannot then send mail to
such sites until they've been whitelisted, because when they try the
reverse connection out, it always gets a 4xx error. I've been bitten
Why will the host implementing incoming graylisting *always* get a 4xx error
on his outgoing message? I am curious.
The other way round.
Henrique de Moraes Holschuh
2006-07-10 04:10:01 UTC
Permalink
Post by Thomas Bushnell BSG
Post by Henrique de Moraes Holschuh
You can, for example, use dynamic IP supersets to do the greylisting
"triplet" match. Now the problem is a matter of creating the supersets in a
way to not break incoming email from outgoing-SMTP clusters.
Is there a way of doing this which doesn't require you to know in
advance the setup of remote networks and such? Does it scale?
Yes. The most absurd way is to treat every non-hijacked IPv4 netblock
that is valid on the public Internet as belonging to a single IP superset,
and to flush the graylist database often (but mind your outgoing email retry
policy!).

Another is to
Post by Thomas Bushnell BSG
Post by Henrique de Moraes Holschuh
You can also only graylist sites which match a set of conditions that flag
them as suspicious. Depending on what conditions you set, you do not have
the risk of blocking any server farms we would want to talk SMTP to.
You don't have the risk? Are you saying that there is exactly *zero*
risk? Please, if you don't mean that, be more precise.
We == Debian.

Server farms we want to talk to == those professionally run by
non-botnet-<censored>. We also want to talk to MTAs run by geeks on their
home connections, but those are *not* outgoing SMTP farms, so they are not
an issue.

If you graylist only people on DUL and with severely broken DNS, you don't
hit professionally run SMTP farms like the ones for gmail, yahoo, or any
other gigantic email provider. The chance is not zero, but it is very small.
And it is even smaller if you consider it over a three-day retry window.
Post by Thomas Bushnell BSG
Post by Henrique de Moraes Holschuh
Post by Thomas Bushnell BSG
So far, all I have seen in response to this particular problem is to
say that "properly configured" includes an exactly accurate hardcoded
list of all such sites on the internet.
Then you are hearing differently now.
What ar the "dynamic IP supersets" you speak of, then?
In their dumbest form, match using big, static netmasks like 255.255.128.0.
That should give you a hint of what I am talking about.
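The netmask-superset triplet match being described might be sketched as
follows. This is an illustrative sketch under stated assumptions (the /17
mask, i.e. 255.255.128.0, the function name, and the key layout are made up
for the example, not any real greylister's interface):

```python
import ipaddress

def triplet_key(client_ip, mail_from, rcpt_to, prefix_len=17):
    """Build the greylisting lookup key, collapsing the client IP
    into a /17 superset (netmask 255.255.128.0) so that a retry
    from a nearby host in the same outgoing farm still matches
    the original attempt's triplet."""
    net = ipaddress.ip_network(f"{client_ip}/{prefix_len}", strict=False)
    return (str(net), mail_from.lower(), rcpt_to.lower())

# Two different hosts in the same farm map to the same key:
k1 = triplet_key("64.233.160.5", "a@example.com", "b@debian.org")
k2 = triplet_key("64.233.170.9", "a@example.com", "b@debian.org")
```

Both IPs above fall inside 64.233.128.0/17, so the second delivery attempt is
recognized even though it comes from a different cluster node.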
Post by Thomas Bushnell BSG
Post by Henrique de Moraes Holschuh
Post by Thomas Bushnell BSG
Another problem is with hosts that do not accept a message from an MTA
unless that MTA is willing to accept replies. This is a common spam
prevention measure. The graylisting host cannot then send mail to
such sites until they've been whitelisted, because when they try the
reverse connection out, it always gets a 4xx error. I've been bitten
Why will the host implementing incoming graylisting *always* get a 4xx error
on his outgoing message? I am curious.
The other way round.
Here's what I understood of what you wrote:

Alice wants to send email to Bob. Alice graylists incoming email. Bob does
sender verification trying to email people back before accepting a message.

You claim Alice cannot send mail to Bob because Bob will attempt to "almost
send email back to Alice", thus Bob's verification attempt will be
graylisted (with a 4xx), causing Bob to deny the delivery of Alice's message
with a 4xx.

If that's not correct, please clarify.

If it is correct, I am asking you *why* Alice's system will never let Bob's
verification probe through (thus allowing her email to be delivered to Bob).

I *can* see a scenario where delivery might never happen (I am ignoring
configuration error scenarios on Alice's side), but it depends on Alice also
doing the same type of sender verification, and on one or both sides
violating RFC 2821.
--
"One disk to rule them all, One disk to find them. One disk to bring
them all and in the darkness grind them. In the Land of Redmond
where the shadows lie." -- The Silicon Valley Tarot
Henrique Holschuh
Henrique de Moraes Holschuh
2006-07-10 04:41:44 UTC
Permalink
Post by Henrique de Moraes Holschuh
Post by Thomas Bushnell BSG
Is there a way of doing this which doesn't require you to know in
advance the setup of remote networks and such? Does it scale?
Yes. The most absurd way is to consider every non-stolen, valid for the
public Internet IPv4 netblock as belonging to a single IP superset, and
flushing the graylisted database often (but mind your outgoing email retry
policy!).
Another is to
Argh. I must have deleted part of the message by mistyping in vim and didn't
notice it before sending. Sorry about that.

Another way to avoid problems with clusters is to assume certain common
setup patterns for server farms, like a cheap netmask match. This does, in
a way, "require you to know in advance the setup of remote networks", in the
sense that you need to know the common patterns that will be used. At
least now you are dealing with patterns, and not specific instances.

It is not as bad as it sounds. Small clusters of less than five machines
are not supposed to be an issue (you will graylist-approve the entire
cluster before the retry limit is over for reasonable retry policies).

Large clusters are almost always made of a number of islands of nodes with
IPs close to each other, and graylist-approving different islands will also
work if you don't manage to match all the islands as a single set.

Scaling is obviously a problem if you have many incoming SMTP hosts, as the
graylisting knowledge should be shared among all of them. Other scaling
issues depend on how you calculate the IP sets, but for IP distance like the
above example, it is practically the same as for dumb graylisting.
--
"One disk to rule them all, One disk to find them. One disk to bring
them all and in the darkness grind them. In the Land of Redmond
where the shadows lie." -- The Silicon Valley Tarot
Henrique Holschuh
Thomas Bushnell BSG
2006-07-10 05:00:43 UTC
Permalink
Post by Henrique de Moraes Holschuh
Another way to avoid problems with clusters is to assume certain common
setup patterns for server farms, like a cheap netmask match. This does, in
a way, "require you to know in advance the setup of remote networks", in the
sense that you need to know the common patterns that will be used. At
least now you are dealing with patterns, and not specific instances.
This is not adequate, sorry, at least, not in my book.

I am concerned that you not use a spam-defeating technique which
blocks perfectly legitimate and standards-compliant email.

What I object to is specifically the attempt to create *new*
standards, by blocking legitimate email. There is no standard
requirement that a server farm use a small netmask or one of a set of
common patterns. If you want such a requirement, please propose one
to the IETF. You know how.

Saying "if everyone followed rule X (and heck, lots of people already
do!) my system would work perfectly" is irrelevant to me. What
matters to me is "my scheme works when everyone follows the actual
public standards for email."

Thomas
Marco d'Itri
2006-07-10 10:26:10 UTC
Permalink
Post by Thomas Bushnell BSG
I am concerned that you not use a spam-defeating technique which
blocks perfectly legitimate and standards-compliant email.
Then why you are not loudly complaining about the antispam software
currently applied to our mail lists and BTS, which silently discards
mail that appears to be spam?

Silently discarding legitimate email is a problem, rejecting legitimate
email is at best an annoyance.
--
ciao,
Marco
Thomas Bushnell BSG
2006-07-10 17:13:05 UTC
Permalink
Post by Marco d'Itri
Post by Thomas Bushnell BSG
I am concerned that you not use a spam-defeating technique which
blocks perfectly legitimate and standards-compliant email.
Then why you are not loudly complaining about the antispam software
currently applied to our mail lists and BTS, which silently discards
mail that appears to be spam?
I have complained about that, in fact.
Don Armstrong
2006-07-10 22:24:44 UTC
Permalink
Post by Marco d'Itri
Post by Thomas Bushnell BSG
I am concerned that you not use a spam-defeating technique which
blocks perfectly legitimate and standards-compliant email.
Then why you are not loudly complaining about the antispam software
currently applied to our mail lists and BTS, which silently discards
mail that appears to be spam?
At least for the BTS, those messages are not discarded; they're just
separated out and processing on them is halted. Blars spends a lot of
time looking at "borderline" messages to put the non-spam back into the
queue, and catches most of them.


Don Armstrong
--
"For those who understand, no explanation is necessary.
For those who do not, none is possible."

http://www.donarmstrong.com http://rzlab.ucr.edu
Frans Pop
2006-07-10 22:36:02 UTC
Permalink
Post by Don Armstrong
At least for the BTS, those messages are not discarded; they're just
separated out and processing on them is halted. Blars spends a lot of
time looking at "borderline" messages to put back in non-spam into the
queue, and catches most of them.
There _are_ messages that are silently discarded though: those with a size
that is bigger than the limit, often because of attachments.
We sometimes miss valid mails to d-boot because of that (with installation
logs attached).
The main problem there IMO is that the sender gets no indication that
anything has gone wrong.

Also frustrating is the situation where an installation report (BR against
installation-reports) is accepted into the BTS, but rejected by the
mailservers, which means that we never get to see the report (at least
not in a timely manner)...
Roberto Sanchez
2006-07-10 22:40:12 UTC
Permalink
Post by Frans Pop
Post by Don Armstrong
At least for the BTS, those messages are not discarded; they're just
separated out and processing on them is halted. Blars spends a lot of
time looking at "borderline" messages to put back in non-spam into the
queue, and catches most of them.
There _are_ messages that are silently discarded though: those with a size
that is bigger than the limit, often because of attachments.
We sometimes miss valid mails to d-boot because of that (with installation
logs attached).
The main problem there IMO is that the sender gets no indication that
anything has gone wrong.
Also frustrating is the situation where an installation report (BR against
installation-reports) is accepted into the BTS, but rejected by the
mailservers, which means that we never get to see the report (at least
not in a timely manner)...
Out of curiosity, what is the limit? I know that the weekly RC bug
status emails are relatively large and those go out without a problem.
Or is that because they go out via dda (which may use some mechanism
other than size)?

-Roberto
--
Roberto C. Sanchez
http://familiasanchez.net/~roberto
Blars Blarson
2006-07-10 23:57:17 UTC
Permalink
Post by Don Armstrong
At least for the BTS, those messages are not discarded; they're just
separated out and processing on them is halted. Blars spends a lot of
time looking at "borderline" messages to put back in non-spam into the
queue, and catches most of them.
Not quite right: I look at the borderline messages that got passed
through and delete them from bugs if they are spam (and use them to
train the filters either way). I do look at the messages caught by
crossassassin and reinject them if needed, but that's many orders of
magnitude less than those caught by spamassassin.
--
Blars Blarson ***@blars.org
http://www.blars.org/blars.html
With Microsoft, failure is not an option. It is a standard feature.
Thomas Bushnell BSG
2006-07-10 04:58:09 UTC
Permalink
Post by Henrique de Moraes Holschuh
Post by Thomas Bushnell BSG
Post by Henrique de Moraes Holschuh
You can, for example, use dynamic IP supersets to do the greylisting
"triplet" match. Now the problem is a matter of creating the supersets in a
way to not break incoming email from outgoing-SMTP clusters.
Is there a way of doing this which doesn't require you to know in
advance the setup of remote networks and such? Does it scale?
Yes. The most absurd way is to consider every non-stolen, valid for the
public Internet IPv4 netblock as belonging to a single IP superset, and
flushing the graylisted database often (but mind your outgoing email retry
policy!).
I don't think I understand just what you're saying. Can you spell out
the details for me?
Post by Henrique de Moraes Holschuh
Post by Thomas Bushnell BSG
Post by Henrique de Moraes Holschuh
You can also only graylist sites which match a set of conditions that flag
them as suspicious. Depending on what conditions you set, you do not have
the risk of blocking any server farms we would want to talk SMTP to.
You don't have the risk? Are you saying that there is exactly *zero*
risk? Please, if you don't mean that, be more precise.
We == Debian.
Server farms we want to talk to == those professionaly run by
non-botnet-<censored>. We also want to talk to MTAs run by geeks on their
home connections, but those are *not* outgoing SMTP farms, so they are not
an issue.
Keeping a list of such server farms is exactly what I meant by a
nonworking pseudo-solution. I said, specifically, "is there a way of
doing this which doesn't require you to know in advance the setup of
remote networks and such?" This was the same idea I had already said
in terms of "all I have seen is to...[include] an exactly accurate
hardcoded list of all such sites."

It distresses me that I have said twice now that a "solution" which
requires a hardcoded list of special sites exempted from the rules is
not a solution I regard as answering my objection.
Any graylister which requires a specific list of sites counts as a
dumb one in my book. I want a solution which specifically *never*
needs any preset hardcoded "this set of addresses/domains gets a
pass".
Post by Henrique de Moraes Holschuh
In their dumbest form, match using big, static netmasks like 255.255.128.0.
That should give you a hint of what I am talking about.
A hardcoded list is the problem. Got it? A loose hardcoded list is
still a problem.
Post by Henrique de Moraes Holschuh
Post by Thomas Bushnell BSG
Post by Henrique de Moraes Holschuh
Post by Thomas Bushnell BSG
Another problem is with hosts that do not accept a message from an MTA
unless that MTA is willing to accept replies. This is a common spam
prevention measure. The graylisting host cannot then send mail to
such sites until they've been whitelisted, because when they try the
reverse connection out, it always gets a 4xx error. I've been bitten
Why will the host implementing incoming graylisting *always* get a 4xx error
on his outgoing message? I am curious.
The other way round.
Alice wants to send email to Bob. Alice graylists incoming email. Bob does
sender verification trying to email people back before accepting a message.
You claim Alice cannot send mail to Bob because Bob will attempt to "almost
send email back to Alice", thus Bob's verification attempt will be
graylisted (with a 4xx), causing Bob to deny the delivery of Alice's message
with a 4xx.
If that's not correct, please clarify.
If it is correct, I am asking you *why* Alice's system will never let Bob's
verification probe through (thus allowing her email to be delivered to Bob).
Because Bob never sends a complete email message to Alice.
Post by Henrique de Moraes Holschuh
I *can* see a scenario where delivery might never happen (I am ignoring
configuration error scenarios on Alice's side), but it depends on Alice also
doing the same type of sender verification, and on one or both sides
violating RFC 2821.
Doing sender verification and graylisting are both violations of the
RFCs. You can hardly say "this will work as long as everyone else
follows the RFC" when you aren't doing so yourself. My point is that
this is a case where two RFC-noncompliant spam pseudo-solutions
interact badly, because each is making up their own new requirements,
not in the RFCs, and those new requirements interact poorly.

If your system causes any RFC-compliant mail to lose, then your system
loses. So far you have argued at best that you are willing to ignore
the cases where it loses. Great. I'm not.

Thomas
Henrique de Moraes Holschuh
2006-07-10 06:47:01 UTC
Permalink
Post by Thomas Bushnell BSG
I don't think I understand just what you're saying. Can you spell out
the details for me?
Does the second email I sent (with the missing stuff) provide the
clarification you asked for?
Post by Thomas Bushnell BSG
It distresses me that I have said twice now that a "solution" which
Read below. When you do, please remember that many of us consider that a
fully-open system which drowns us in SPAM is also broken, because you do
lose information by failing to locate it among the noise.
Post by Thomas Bushnell BSG
dumb one in my book. I want a solution which specifically *never*
needs any preset hardcoded "this set of addresses/domains gets a
pass".
There is no hardcoding. Please use more exact terms. I think I understood
what you wanted to say, but whitelists are not *hardcoded*. They never have
been; they are updated at runtime. So use the proper terms next time.
Post by Thomas Bushnell BSG
Post by Henrique de Moraes Holschuh
In their dumbest form, match using big, static netmasks like 255.255.128.0.
That should give you a hint of what I am talking about.
A hardcoded list is the problem. Got it? A loose hardcoded list is
still a problem.
What I believe you mean is that for you, a non-perfect solution for
identifying outgoing SMTP clusters is not acceptable, as it gives a non-zero
possibility of permanent delivery failure to a graylisted destination.

Well, there are solutions that are good enough in practice. If you do not
like them because they are not perfect (as in guaranteed zero fail rate),
then there is no solution I know of that will be acceptable to you.

But please remember that people operating outgoing SMTP clusters *want* to
deliver email, and that they are aware of graylisting practices and also of
the diminishing probability of successful delivery when the sending site has
a broken DNS configuration, or is listed in popular blacklists and dial-up
IP space lists.

Also, keep in mind that the Debian graylisting proposal specifically states
that graylisting is not to be applied to every single incoming connection,
but rather to those coming from broken DNS sources, and blacklisted sources,
which are extremely unlikely to be the class of sending cluster that would
break graylisting in the first place.

So you do NOT need a perfect theoretical solution to get a zero fail rate in
practice for the proposed graylisting scheme. You don't get any guarantees
of a zero fail rate, however.
Post by Thomas Bushnell BSG
Post by Henrique de Moraes Holschuh
Alice wants to send email to Bob. Alice graylists incoming email. Bob does
sender verification trying to email people back before accepting a message.
You claim Alice cannot send mail to Bob because Bob will attempt to "almost
send email back to Alice", thus Bob's verification attempt will be
graylisted (with a 4xx), causing Bob to deny the delivery of Alice's message
with a 4xx.
If that's not correct, please clarify.
If it is correct, I am asking you *why* Alice's system will never let Bob's
verification probe through (thus allowing her email to be delivered to Bob).
Because Bob never sends a complete email message to Alice.
That is a broken graylist implementation, then. It should be fixed (or
avoided at all costs). Which graylister was that one?

For graylisting, you need to verify that the sender will retry. This is not
done through verification of completed email delivery! It is done as soon
as you have enough information to identify it as the same sender and message.
If the sender will retry, you are to approve him through the graylist
regardless of any delivery taking place.
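The retry rule being described might be sketched roughly like this. It is a
minimal illustration under stated assumptions (the 300-second minimum delay,
the in-memory dict, and the function name are invented for the example, not
any real greylister's interface): a triplet is approved the moment the same
sender demonstrably retries after the delay, whether or not any message has
yet been delivered.

```python
MIN_DELAY = 300      # assumed minimum retry delay, in seconds
seen = {}            # triplet -> timestamp of first attempt

def greylist_check(triplet, now):
    """Return the SMTP response code for one delivery attempt.

    First contact records the triplet and defers with a 450
    temporary failure; once the sender retries after MIN_DELAY,
    the triplet is approved with 250 -- approval depends only on
    the retry, never on a completed delivery."""
    first = seen.setdefault(triplet, now)
    if now - first >= MIN_DELAY:
        return "250"   # sender retried: approve the triplet
    return "450"       # too soon (or first contact): try again later
```

A legitimate queueing MTA retries and passes; a fire-and-forget spam cannon
that never retries stays stuck at 450.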
Post by Thomas Bushnell BSG
Post by Henrique de Moraes Holschuh
I *can* see a scenario where delivery might never happen (I am ignoring
configuration error scenarios on Alice's side), but it depends on Alice also
doing the same type of sender verification, and on one or both sides
violating RFC 2821.
Doing sender verification and graylisting are both violations of the
RFCs. You can hardly say "this will work as long as everyone else
follows the RFC" when you aren't doing so yourself. My point is that
Agreed, you cannot say that. But nobody did say it. And the scenario you
experienced for Alice's failure to deliver email to Bob requires a broken
graylisting implementation that acts in a specific *wrong* way, and that was
the answer to my question.

Now, I am a bit annoyed with the "graylisting violates the RFCs" generic
statement, so I'd really appreciate if you could make it more specific.
Please explain how the idea behind graylisting ("force a host to retry a
SMTP transaction at a later time") violates RFC 2821. RFC 2821, AFAIK,
requires that the sending side deal with that scenario, and anyone who
doesn't deal with it is the one violating the RFC.

There is an issue with current graylisting implementations that *I know of*
(and I certainly am no expert in the area), in that they *will* fail to
recognize shared-queue outgoing clusters in theory, and *may* fail to do so
in practice (depends on such cluster deployments failing to match known
patterns). This has nothing to do with RFC 2821 except if you go into
subjective "in spirit" violations. Was this the violation you were talking
about when refering to graylisting?
Post by Thomas Bushnell BSG
If your system causes any RFC-compliant mail to lose, then your system
loses. So far you have argued at best that you are willing to ignore
the cases where it loses. Great. I'm not.
Actually, I am ALSO arguing that these cases are probably not going to
happen in practice, now that graylisting is far more mature and widely used.
--
"One disk to rule them all, One disk to find them. One disk to bring
them all and in the darkness grind them. In the Land of Redmond
where the shadows lie." -- The Silicon Valley Tarot
Henrique Holschuh
Thomas Bushnell BSG
2006-07-10 17:12:47 UTC
Permalink
Post by Henrique de Moraes Holschuh
Read below. When you do, please remember that many of us consider that a
fully-open system which drowns us in SPAM is also broken, because you do
lose information for failure of locating it among the noise.
You may lose that information; I do not.
Post by Henrique de Moraes Holschuh
There is no hardcoding. Please use more exact terms. I think I understood
what you wanted to say, but whitelists are not *hardcoded*. They have never
been, they are updated in runtime. So use the proper terms next time.
Then how do you know which things to add to the white list *in the
case I mentioned*?
Post by Henrique de Moraes Holschuh
What I believe you mean is that for you, a non-perfect solution for
identifying outgoing SMTP clusters is not acceptable, as it gives a non-zero
possibility of permanent delivery failure to a graylisted destination.
I want you to be explicit and clear about which new rules you are
writing into the RFCs, so that people can conform to them. You are
making up new standards and hosing people who do not comply; at the
very least you have an obligation to document the new standards you
are making up.
Post by Henrique de Moraes Holschuh
Please explain how the idea behind graylisting ("force a host to retry a
SMTP transaction at a later time") violates RFC 2821. RFC 2821, AFAIK,
requires that the sending side deal with that scenario, and anyone who
doesn't deal with it is the one violating the RFC.
You must be willing to retry the transaction, but there is *not* a
requirement that you retry it from the same address, or the same
netblock, or the same anything.

If the graylisting waited until DATA, and cached the message contents
(or a hash of them, say) in such a way that it could detect the
retransmit no matter what address it came from the next time, this
would work just fine AFAICT.
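That DATA-time variant might be sketched as follows. This is an illustrative
sketch only (the hash choice, the 300-second delay, the in-memory dict, and
the function name are assumptions for the example, not any deployed MTA's
behavior): keying the greylist on the message content means a retry is
recognized no matter which cluster node delivers it the second time.

```python
import hashlib

MIN_DELAY = 300      # assumed minimum retry delay, in seconds
pending = {}         # message digest -> timestamp of first attempt

def greylist_at_data(message_bytes, now):
    """Defer the greylisting decision until DATA and key it on a
    hash of the message itself, so the retransmit matches even
    when it arrives from a completely different IP address."""
    digest = hashlib.sha256(message_bytes).hexdigest()
    first = pending.setdefault(digest, now)
    return "250" if now - first >= MIN_DELAY else "450"
```

The cost of the scheme is that the receiver must accept the full message body
before deferring, which gives up the bandwidth savings of rejecting at RCPT.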

Still, making up new standards is a risk-prone thing. The fact that
nobody has thought of the case where this will fail does not mean it
won't fail. Graylisting was a wonderful idea, but the people who
first thought of it didn't even notice the failure modes. This is the
danger, and a clear and open statement "these are the specific cases
where we know that our scheme will fail" would be a very nice thing.

Thomas
Marco d'Itri
2006-07-10 17:25:53 UTC
Permalink
Post by Thomas Bushnell BSG
I want you to be explicit and clear about which new rules you are
writing into the RFCs, so that people can conform to them. You are
making up new standards and hosing people who do not comply; at the
very least you have an obligation to document the new standards you
are making up.
No, not really. The Internet does not work this way.
People have no obligation to document anything at all, and the rest of
the world has to cope.

Experience shows that the rest of the world is coping pretty well with
graylisting, notwithstanding how many pathological situations you can
design (interesting, this list looks like debian-legal@ again...).
--
ciao,
Marco
Thomas Bushnell BSG
2006-07-10 17:39:59 UTC
Permalink
Post by Marco d'Itri
Post by Thomas Bushnell BSG
I want you to be explicit and clear about which new rules you are
writing into the RFCs, so that people can conform to them. You are
making up new standards and hosing people who do not comply; at the
very least you have an obligation to document the new standards you
are making up.
No, not really. The Internet does not work this way.
People have no obligation to document anything at all, and the rest of
the world has to cope.
I'm speaking about Debian here. We stand for openness, clarity, and
free software. We stand for the interests of our users. We do not
stand for keeping secrets and causing problems.

I want *you*, the people pushing this for Debian, to do this if you
want it on Debian. What you do on your own systems is not my concern.
Marco d'Itri
2006-07-10 17:55:25 UTC
Permalink
Post by Thomas Bushnell BSG
I'm speaking about Debian here. We stand for openness, clarity, and
free software. We stand for the interests of our users. We do not
We used to. Nowadays we stand for the mechanical veneration of holy
principles.
--
ciao,
Marco
Adrian von Bidder
2006-07-10 16:10:45 UTC
Permalink
Post by Thomas Bushnell BSG
Doing sender verification and graylisting are both violations of the
RFCs.
Which RFCs, and where, exactly? Specific filename, version and line numbers,
as Kimball would say.

AFAICT, the protocol allows the receiving end to temporarily reject email,
and the sending end will retry. AFAICT QUIT is allowed after RCPT TO to
abort a mail transaction - and sender verification is no different from a
normal mail transaction in the view of the receiver.

-- vbi
--
featured link: http://fortytwo.ch/smtp
Marco d'Itri
2006-07-10 17:11:18 UTC
Permalink
Post by Adrian von Bidder
AFAICT, the protocol allows the receiving end to temporarily reject email,
and the sending end will retry. AFAICT QUIT is allowed after RCPT TO to
abort a mail transaction - and sender verification is no different from a
normal mail transaction in the view of the receiver.
Correct.
OTOH, sender verification is evil for a different reason: if a domain
is forged by a spammer and a large number of systems receiving the
spam perform sender verification, the MX of the forged domain will be
DoS'ed. This is about as antisocial as vacation messages and replies by
antivirus software.
--
ciao,
Marco
Andreas Metzler
2006-07-10 06:15:55 UTC
Permalink
[...]
Post by Thomas Bushnell BSG
It assumes, for example, that the remote MTA will use the same IP
address each time it sends the message.
[...]

eh no. Standard greylisting practice nowadays (it already was standard when
sarge was released) is to not greylist on the host IP but at least on the /27
netblock.

cu andreas
Marc Haber
2006-07-10 07:30:08 UTC
Permalink
On Mon, 10 Jul 2006 06:15:55 +0000 (UTC), Andreas Metzler
Post by Andreas Metzler
[...]
Post by Thomas Bushnell BSG
It assumes, for example, that the remote MTA will use the same IP
address each time it sends the message.
[...]
eh no. Standard greylisting practice nowadays (it was already standard when
sarge was released) is to greylist not on the host IP but at least on the /27
netblock.
So you will whitelist the spamming customer in the same rack farm as
your bona fide communications partner.

Greetings
Marc
--
-------------------------------------- !! No courtesy copies, please !! -----
Marc Haber | " Questions are the | Mailadresse im Header
Mannheim, Germany | Beginning of Wisdom " | http://www.zugschlus.de/
Nordisch by Nature | Lt. Worf, TNG "Rightful Heir" | Fon: *49 621 72739834
martin f krafft
2006-07-10 07:57:36 UTC
Permalink
Post by Marc Haber
Post by Andreas Metzler
eh no. Standard greylisting practice nowadays (it was already
standard when sarge was released) is to greylist not on the host IP
but at least on the /27 netblock.
So you will whitelist the spamming customer in the same rack farm
as your bona fide communications partner.
That's better than not greylisting anyone. Nobody is trying to
design the perfect spam filter. We just want to reduce spam on
debian.org.
--
Please do not send copies of list mail to me; I read the list!

.''`. martin f. krafft <***@debian.org>
: :' : proud Debian developer and author: http://debiansystem.info
`. `'`
`- Debian - when you have better things to do than fixing a system

"prisons are built with stones of law,
brothels with bricks of religion."
-- william blake
Thomas Bushnell BSG
2006-07-10 17:08:32 UTC
Permalink
Post by martin f krafft
That's better than not greylisting anyone. Nobody is trying to
design the perfect spam filter. We just want to reduce spam on
debian.org.
A perfect spam filter is one which catches all spam and bounces no
valid mail. Saying "we aren't trying to be perfect" is ambiguous
about which imperfections you are willing to tolerate.

I would like you to be explicit and clear about which valid mail you
will be bouncing, rather than vague and unspecific.
Henrique de Moraes Holschuh
2006-07-11 01:35:05 UTC
Permalink
Post by Thomas Bushnell BSG
Post by martin f krafft
That's better than not greylisting anyone. Nobody is trying to
design the perfect spam filter. We just want to reduce spam on
debian.org.
A perfect spam filter is one which catches all spam and bounces no
valid mail. Saying "we aren't trying to be perfect" is ambiguous
about which imperfections you are willing to tolerate.
I would like you to be explicit and clear about which valid mail you
will be bouncing, rather than vague and unspecific.
It was pretty clear to anyone actually reading the messages. The error is
on the "safe side", i.e. letting stuff through the graylisting without
delaying it.
--
"One disk to rule them all, One disk to find them. One disk to bring
them all and in the darkness grind them. In the Land of Redmond
where the shadows lie." -- The Silicon Valley Tarot
Henrique Holschuh
Thomas Bushnell BSG
2006-07-12 00:45:59 UTC
Permalink
Post by Henrique de Moraes Holschuh
Post by Thomas Bushnell BSG
Post by martin f krafft
That's better than not greylisting anyone. Nobody is trying to
design the perfect spam filter. We just want to reduce spam on
debian.org.
A perfect spam filter is one which catches all spam and bounces no
valid mail. Saying "we aren't trying to be perfect" is ambiguous
about which imperfections you are willing to tolerate.
I would like you to be explicit and clear about which valid mail you
will be bouncing, rather than vague and unspecific.
It was pretty clear to anyone actually reading the messages. The error is
on the "safe side", i.e. letting stuff through the graylisting without
delaying it.
Huh? This makes no sense to me.
Henrique de Moraes Holschuh
2006-07-12 01:09:08 UTC
Permalink
Post by Thomas Bushnell BSG
Post by Henrique de Moraes Holschuh
Post by Thomas Bushnell BSG
Post by martin f krafft
That's better than not greylisting anyone. Nobody is trying to
design the perfect spam filter. We just want to reduce spam on
debian.org.
A perfect spam filter is one which catches all spam and bounces no
valid mail. Saying "we aren't trying to be perfect" is ambiguous
about which imperfections you are willing to tolerate.
I would like you to be explicit and clear about which valid mail you
will be bouncing, rather than vague and unspecific.
It was pretty clear to anyone actually reading the messages. The error is
on the "safe side", i.e. letting stuff through the graylisting without
delaying it.
Huh? This makes no sense to me.
You do not graylist, i.e. you let it through the graylisting stage
unaffected.

The specific example used was some spam source sitting in the same /27
netblock in a colo server room, and getting through the graylister because
a proper MTA from the same /27 netblock had already been added to the
"approve it, it does retries" list of the graylister.
--
"One disk to rule them all, One disk to find them. One disk to bring
them all and in the darkness grind them. In the Land of Redmond
where the shadows lie." -- The Silicon Valley Tarot
Henrique Holschuh
Thomas Bushnell BSG
2006-07-13 18:01:09 UTC
Permalink
Post by Henrique de Moraes Holschuh
The specific example used was some spam source sitting in the same /27
netblock in a colo server room, and getting through the graylister because
a proper MTA from the same /27 netblock had already been added to the
"approve it, it does retries" list of the graylister.
Ok, now I understand. As I've already said, graylisting on /27
netblocks amounts to inventing new network standards, which I believe
should go through the IETF standardization process before we block
email from people who don't comply with our newly invented standards.

If you don't think the standard could make it through the IETF,
doesn't that tell you something?

Thomas
Wouter Verhelst
2006-07-15 08:49:22 UTC
Permalink
Post by Thomas Bushnell BSG
Post by Henrique de Moraes Holschuh
The specific example used was some spam source sitting in the same /27
netblock in a colo server room, and getting through the graylister because
a proper MTA from the same /27 netblock had already been added to the
"approve it, it does retries" list of the graylister.
Ok, now I understand. As I've already said, graylisting on /27
netblocks amounts to inventing new network standards, which I believe
should go through the IETF standardization process before we block
email from people who don't comply with our newly invented standards.
Really, I don't understand why you wrote this.

Greylisting already exists. This would just make it _less_ of a problem.

By greylisting from /27 netblocks, you wouldn't block any additional
mail as opposed to greylisting in general; quite to the contrary.

Greylisting in this manner does not require anything specific from a
remote host, except that it must follow the standards as defined in
RFC2821 and come back some time after it received the initial 4xx status
reply. What part of that is a "newly invented standard"?
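The division of labour Wouter relies on, 4xx means queue and retry later while 5xx means give up and bounce, can be sketched as follows (an illustration of the RFC 2821 reply-code classes, not anyone's MTA code):

```python
def classify(code: int) -> str:
    """Map an SMTP reply code to the sending MTA's obligation (RFC 2821)."""
    if 200 <= code < 300:
        return "delivered"       # 2xx: transaction step succeeded
    if 400 <= code < 500:
        return "retry"           # 4xx: temporary failure -- keep queued
    if 500 <= code < 600:
        return "bounce"          # 5xx: permanent failure -- return to sender
    return "protocol-error"

print(classify(451))  # retry  (what a greylister returns on first contact)
print(classify(550))  # bounce
```

A compliant sender that receives the greylister's 451 therefore keeps the message queued and comes back, which is the only behaviour greylisting asks for.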

Moreover, I'd like to point out that any piece of software which intends
to implement some anti-spam measures will have to interpret some
specific standard more strictly than required by the relevant RFCs so as
to be able to distinguish spambots from human beings. There is no way
around that, save making degrading some human being to "anti-spam
measure for the Debian Project" and requiring him or her to manually
approve each and every email to our mailinglists. I don't think you want
that.
--
Fun will now commence
-- Seven Of Nine, "Ashes to Ashes", stardate 53679.4
Andreas Metzler
2006-07-15 10:46:51 UTC
Permalink
[...]
Post by Wouter Verhelst
Post by Thomas Bushnell BSG
Ok, now I understand. As I've already said, graylisting on /27
netblocks amounts to inventing new network standards, which I believe
should go through the IETF standardization process before we block
email from people who don't comply with our newly invented standards.
Really, I don't understand why you wrote this.
Greylisting already exists. This would just make it _less_ of a problem.
By greylisting from /27 netblocks, you wouldn't block any additional
mail as opposed to greylisting in general; quite to the contrary.
Greylisting in this manner does not require anything specific from a
remote host, except that it must follow the standards as defined in
RFC2821 and come back some time after it received the initial 4xx status
reply. What part of that is a "newly invented standard"?
[...]

Hello,
The following setup would be in compliance with rfc2821 but would
not be able to deliver mail to a greylisting host:
- retrying every hour for up to five days
- messages are sent out from 120 different IP-addresses all living in
different /27 netblocks.
- retries do not happen on the same IP. Initial try IP-address #1, 2nd
try IP-address #2, ... ,120th try IP-address #120

This is an extreme setup, but if the retry strategy is more
complicated, e.g. every hour for 12 hours, then every two hours for
12 hours, and every four hours from then on, only 42 IP addresses are
needed.
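The 42 figure can be checked mechanically; a short sketch of the schedule just described:

```python
# Retries over five days (120 h): hourly for the first 12 h, every two
# hours for the next 12 h, then every four hours until the deadline.
def count_retries(total_hours: int = 120) -> int:
    t = n = 0
    while t < total_hours:
        t += 1 if t < 12 else 2 if t < 24 else 4
        n += 1
    return n

print(count_retries())  # 42 -- one fresh IP per retry needs 42 addresses
```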

If some (broken) caching is involved numbers go down further.

cu andreas
--
The 'Galactic Cleaning' policy undertaken by Emperor Zhark is a personal
vision of the emperor's, and its inclusion in this work does not constitute
tacit approval by the author or the publisher for any such projects,
howsoever undertaken. (c) Jasper Ffforde
Stephen Gran
2006-07-15 11:17:07 UTC
Permalink
Post by Andreas Metzler
[...]
Post by Wouter Verhelst
Post by Thomas Bushnell BSG
Ok, now I understand. As I've already said, graylisting on /27
netblocks amounts to inventing new network standards, which I believe
should go through the IETF standardization process before we block
email from people who don't comply with our newly invented standards.
Really, I don't understand why you wrote this.
Greylisting already exists. This would just make it _less_ of a problem.
By greylisting from /27 netblocks, you wouldn't block any additional
mail as opposed to greylisting in general; quite to the contrary.
Greylisting in this manner does not require anything specific from a
remote host, except that it must follow the standards as defined in
RFC2821 and come back some time after it received the initial 4xx status
reply. What part of that is a "newly invented standard"?
[...]
Hello,
The following setup would be in compliance with rfc2821 but would
not be able to deliver mail to a greylisting host:
- retrying every hour for up to five days
- messages are sent out from 120 different IP-addresses all living in
different /27 netblocks.
- retries do not happen on the same IP. Initial try IP-address #1, 2nd
try IP-address #2, ... ,120th try IP-address #120
I suggest that when we find a domain that sends mail from 120 /27's
(roughly a /20), we worry about it then.

zgrep -E 'H=[^[:space:]]*.yahoo.com ' /var/log/exim4/mainlog* | egrep -v '(-|=)>' | \
awk -F [ '{print $2}' | awk -F] '{print $1}' | sort -u | wc -l
2792

That's just over a /21, and they're the biggest I deal with. This is just
under a year's logs, on a fairly busy site. This site uses greylisting,
and does not use netblocks - it greylists the IP/sender/recipient tuple
as is. I have had no complaints about lost mail, although a few about
slow mail.

But that's not the entire point; there will be false positives. There are
probably false positives right now with the various other spam filters in
place, although I have no idea and can't check on them. Presumably the
sender doesn't get a notification in cases where a procmail rule or
spamassassin rule keeps a mail from hitting a list or my @debian.org
account.

With a greylisting system, there is no blackholing of mail - the sender
will get 'still retrying' DSN's, and finally a "couldn't deliver" DSN
in the above scenario. The sender is notified if there's a problem, so
long as the sending site pretends to follow the RFC.

The point is to make email usable without making it unreliable. This
way seems like a pretty good compromise to me.
--
-----------------------------------------------------------------
| ,''`. Stephen Gran |
| : :' : ***@debian.org |
| `. `' Debian user, admin, and developer |
| `- http://www.debian.org |
-----------------------------------------------------------------
Thomas Bushnell BSG
2006-07-16 02:35:44 UTC
Permalink
Post by Stephen Gran
I suggest that when we find a domain that sends mail from 120 /27's
(roughly a /20), we worry about it then.
An excellent strategy. Do you have some mechanism in place to detect
such a case when or if it happens?

Thomas
Magnus Holmgren
2006-07-17 15:36:23 UTC
Permalink
On Sunday 16 July 2006 04:35, Thomas Bushnell BSG took the opportunity to
Post by Thomas Bushnell BSG
Post by Stephen Gran
I suggest that when we find a domain that sends mail from 120 /27's
(roughly a /20), we worry about it then.
An excellent strategy.
I think so. How many systems (aside from the "big ones" like MSN, Gmail, ...,
which are generally known) do you estimate would be affected? At most sites
outgoing messages stay with the same host until delivered, except after the
initial delivery attempt a temporarily failed message may be passed to
a "secondary" MTA.
Post by Thomas Bushnell BSG
Do you have some mechanism in place to detect
such a case when or if it happens?
Deal with it when people complain. Also, this kind of information can be
shared so that not every mail admin has to find it out himself by users
complaining.
--
Magnus Holmgren ***@lysator.liu.se
(No Cc of list mail needed, thanks)
Stephen Gran
2006-07-17 15:57:37 UTC
Permalink
Post by Magnus Holmgren
On Sunday 16 July 2006 04:35, Thomas Bushnell BSG took the opportunity
Post by Thomas Bushnell BSG
Post by Stephen Gran
I suggest that when we find a domain that sends mail from 120
/27's (roughly a /20), we worry about it then.
An excellent strategy.
I think so. How many systems (aside from the "big ones" like MSN,
Gmail, ..., which are generally known) do you estimate would be
affected? At most sites outgoing messages stay with the same host
until delivered, except after the initial delivery attempt a
temporarily failed message may be passed to a "secondary" MTA.
It's not uncommon for big sites to have pools of high throughput
machines that don't have qrunners, and larger pools of machines that do.
The first group gets a message, and tries to deliver immediately, and
any temporary failure gets the messages shunted to the secondary pool.
Once in the secondary pool, it can be bounced from machine to machine
to load balance queue size and so on.

That being said, the original query about this was a strawman argument
designed specifically to find a problem, and I would say fairly
confidently we don't need to worry about this. I have analyzed the logs
on mail servers I have access to, and I cannot find any site which passes
a message between more than a half dozen or at most a dozen IP addresses
before delivery. This is two or three orders of magnitude less than
the kind of thing Thomas and others are concerned about. By the time
sites big enough to use pools that big exist (which I actually doubt -
scalability might just be too hard to manage to be worth it), greylisting
will be another dead tool in the arms race with spammers.

So far, all the arguments against the idea have just been assertions and
strawmen. Unless someone can present a serious argument, can we
consider this thread done?

Take care,
--
-----------------------------------------------------------------
| ,''`. Stephen Gran |
| : :' : ***@debian.org |
| `. `' Debian user, admin, and developer |
| `- http://www.debian.org |
-----------------------------------------------------------------
Martin Wuertele
2006-07-18 07:32:53 UTC
Permalink
Post by Stephen Gran
It's not uncommon for big sites to have pools of high throughput
machines that don't have qrunners, and larger pools of machines that do.
The first group gets a message, and tries to deliver immediately, and
any temporary failure gets the messages shunted to the secondary pool.
Once in the secondary pool, it can be bounced from machine to machine
to load balance queue size and so on.
That being said, the original query about this was a strawman argument
designed specifically to find a problem, and I would say fairly
confidently we don't need to worry about this. I have analyzed the logs
on mail servers I have access to, and I cannot find any site which passes
a message between more than a half dozen or at most a dozen IP addresses
before delivery. This is two or three orders of magnitude less than
the kind of thing Thomas and others are concerned about. By the time
sites big enough to use pools that big exist (which I actually doubt -
scalability might just be too hard to manage to be worth it), greylisting
will be another dead tool in the arms race with spammers.
So far, all the arguments against the idea have just been assertions and
strawmen. Unless someone can present a serious argument, can we
consider this thread done?
I've been using greylisting with postgrey and whitelists for some time
now (more than a year to be precise) and I still do get mail from gmail,
yahoo and msn accounts. And if one is so concerned about them one could
contact their postmasters asking for a list of IPs for whitelisting.

After all, we are talking about developers' @debian.org email addresses,
not about lists.debian.org.

yours Martin
--
<***@debian.org> ---- Debian GNU/Linux - The Universal Operating System
* Myon wirft noch ein paar 'f' zum Verteilein in den Channel
-!- florolf is now known as fflorolff
Adrian von Bidder
2006-07-17 17:53:57 UTC
Permalink
[sending systems that don't deal with greylisting]

On Monday 17 July 2006 17:36, Magnus Holmgren wrote:
[...]
Post by Magnus Holmgren
Also, this kind of information can be
shared so that not every mail admin has to find it out himself by users
complaining.
Some data points:
* the default whitelist shipped by postgrey is growing only quite slowly,
so apparently the big players are in there by now.
* big pools are only the smallest part of that whitelist, so this
discussion is a bit silly. The really problematic sites are not really rfc
compliant: sites that don't retry at all, or that retry with different
sender addresses (which from the pov of greylisting is the same,
obviously.)

So the question is, imho, not if we should potentially lock out users of big
mail pools - those are in the default whitelists anyway by now. The
question is: can we temporarily (until they can be whitelisted) lock out
users of "standards?-who-needs-standards?" systems that don't implement
sensible queueing. Many of these sites are small - but there are also a
few bigger names: Yahoo groups, Amazon, Roche, Motorola. (According to
postgrey's default whitelist. Some of these are from 2004 or earlier, and
AFAIK nobody tries to verify if these systems are still stupid in that
way.)

cheers
-- vbi
--
Wie man sein Kind nicht nennen sollte:
Hanno Ferr
Pierre Habouzit
2006-07-17 21:41:23 UTC
Permalink
Post by Adrian von Bidder
So the question is, imho, not if we should potentially lock out users
of big mail pools - those are in the default whitelists anyway by
now. The question is: can we temporarily (until they can be
whitelisted) lock out users of "standards?-who-needs-standards?"
systems that don't implement sensible queueing. Many of these sites
are small - but there are also a few bigger names: Yahoo groups,
Amazon, Roche, Motorola. (According to postgrey's default whitelist.
Some of these are from 2004 or earlier, and AFAIK nobody tries to
verify if these systems are still stupid in that way.)
OTOH those systems are not listed on RBLs (or not for long), and you
won't greylist them.
--
·O· Pierre Habouzit
··O ***@debian.org
OOO http://www.madism.org
Magnus Holmgren
2006-07-17 22:08:01 UTC
Permalink
Post by Pierre Habouzit
Post by Adrian von Bidder
So the question is, imho, not if we should potentially lock out users
of big mail pools - those are in the default whitelists anyway by
now. The question is: can we temporarily (until they can be
whitelisted) lock out users of "standards?-who-needs-standards?"
systems that don't implement sensible queueing. Many of these sites
are small - but there are also a few bigger names: Yahoo groups,
Amazon, Roche, Motorola. (According to postgrey's default whitelist.
Some of these are from 2004 or earlier, and AFAIK nobody tries to
verify if these systems are still stupid in that way.)
OTOH those systems are not listed on RBLs (or not for long), and you
won't greylist them.
Which RBL's do you have in mind? I mean, some RBL's, like XBL/SBL, are
high-quality enough that you can outright reject. Others, like SpamCop, are
likely to include some of the bigger names from time to time. DUL lists might
be good candidates.
--
Magnus Holmgren ***@lysator.liu.se
(No Cc of list mail needed, thanks)
Pierre Habouzit
2006-07-18 06:36:55 UTC
Permalink
Post by Magnus Holmgren
Post by Pierre Habouzit
Post by Adrian von Bidder
So the question is, imho, not if we should potentially lock out
users of big mail pools - those are in the default whitelists
anyway by now. The question is: can we temporarily (until they
can be whitelisted) lock out users of
"standards?-who-needs-standards?" systems that don't implement
sensible queueing. Many of these sites are small - but there are
also a few bigger names: Yahoo groups, Amazon, Roche, Motorola.
(According to postgrey's default whitelist. Some of these are
from 2004 or earlier, and AFAIK nobody tries to verify if these
systems are still stupid in that way.)
OTOH those systems are not listed on RBL's (or it does not last)
and you won't greylist them.
Which RBL's do you have in mind? I mean, some RBL's, like XBL/SBL,
are high-quality enough that you can outright reject. Others, like
SpamCop, are likely to include some of the bigger names from time to
time. DUL lists might be good candidates.
I personally use DUL, rfc-ignorant and XBL/SBL.
--
·O· Pierre Habouzit
··O ***@debian.org
OOO http://www.madism.org
Thomas Bushnell BSG
2006-07-17 21:27:04 UTC
Permalink
Post by Magnus Holmgren
Post by Thomas Bushnell BSG
Do you have some mechanism in place to detect
such a case when or if it happens?
Deal with it when people complain. Also, this kind of information can be
shared so that not every mail admin has to find it out himself by users
complaining.
Are you willing to promise that if someone gives a genuine complaint
about how this is blocking their legitimate email, you will amend your
practice to deal with it, rather than insist that they should change
theirs?

Thomas
Magnus Holmgren
2006-07-17 22:14:01 UTC
Permalink
On Monday 17 July 2006 23:27, Thomas Bushnell BSG took the opportunity to
Post by Thomas Bushnell BSG
Post by Magnus Holmgren
Deal with it when people complain. Also, this kind of information can be
shared so that not every mail admin has to find it out himself by users
complaining.
Are you willing to promise that if someone gives a genuine complaint
about how this is blocking their legitimate email, you will amend your
practice to deal with it, rather than insist that they should change
theirs?
Parse error. If someone complains because their mail servers are too spread
out, I'd whitelist them. If someone complains because their own software is
broken, well, that depends. I would explain to them nicely why they should
fix it, but I wouldn't argue unless I have a good reason to do so. Nothing
needs to be amended.
--
Magnus Holmgren ***@lysator.liu.se
(No Cc of list mail needed, thanks)
Martijn van Oosterhout
2006-07-15 11:21:30 UTC
Permalink
Post by Andreas Metzler
Hello,
The following setup would be in compliance with rfc2821 but would
- retrying every hour for up to five days
- messages are sent out from 120 different IP-addresses all living in
different /27 netblocks.
- retries do not happen on the same IP. Initial try IP-address #1, 2nd
try IP-address #2, ... ,120th try IP-address #120
I thought the point was that someone with such a setup is unlikely to
have all 120 servers either listed on an RBL or with broken reverse
DNS. And if they are, are you sure you want to receive mail from them?

Greylisting everything is silly, and that's not what's being discussed
here (AIUI anyway).

Have a nice day,
--
Martijn van Oosterhout <***@gmail.com> http://svana.org/kleptog/
Andreas Metzler
2006-07-15 12:02:50 UTC
Permalink
Post by Martijn van Oosterhout
Post by Andreas Metzler
Hello,
The following setup would be in compliance with rfc2821 but would
[...]
Post by Martijn van Oosterhout
I thought the point was that someone with such a setup is unlikely to
have all 120 servers either listed on an RBL or with broken reverse
DNS. And if they are, are you sure you want to receive mail from them?
[...]

I am all for greylisting as suggested; I just wanted to clarify Thomas'
claim that greylisting *can* break RFC-compliant hosts, i.e. the
"inventing new network standards" point.

cu andreas
--
The 'Galactic Cleaning' policy undertaken by Emperor Zhark is a personal
vision of the emperor's, and its inclusion in this work does not constitute
tacit approval by the author or the publisher for any such projects,
howsoever undertaken. (c) Jasper Ffforde
Stig Sandbeck Mathisen
2006-07-15 13:58:30 UTC
Permalink
Post by Andreas Metzler
This is an extreme setup,
...or a setup designed to be used as an argument against greylisting.
--
Stig Sandbeck Mathisen <***@debian.org>
Thomas Bushnell BSG
2006-07-16 02:34:45 UTC
Permalink
Post by Wouter Verhelst
Greylisting already exists. This would just make it _less_ of a problem.
By greylisting from /27 netblocks, you wouldn't block any additional
mail as opposed to greylisting in general; quite to the contrary.
Yes, I understand. What I'm saying is that confining the
graylisting to /27 netblocks instead of per-host, while an
improvement, is not enough of an improvement for me to say, "yes, what
a wonderful idea graylisting is." Or rather, it *is* a wonderful
idea, but I believe that conforming to network protocols is an even
more wonderful idea.

When you say "graylisting already exists", you seem to be ignoring the
possibility that we could have no graylisting. It's not like we are
somehow obliged to choose a graylisting "solution".
Post by Wouter Verhelst
Greylisting in this manner does not require anything specific from a
remote host, except that it must follow the standards as defined in
RFC2821 and come back some time after it received the initial 4xx status
reply. What part of that is a "newly invented standard"?
The standards do *not* say that the remote host must resend the
message from the same host, or the same /27 netblock. It is this
requirement which is newly invented.
Post by Wouter Verhelst
Moreover, I'd like to point out that any piece of software which intends
to implement some anti-spam measures will have to interpret some
specific standard more strictly than required by the relevant RFCs so as
to be able to distinguish spambots from human beings. There is no way
around that, save degrading some human being to "anti-spam
measure for the Debian Project" and requiring him or her to manually
approve each and every email to our mailing lists. I don't think you want
that.
I can just hear George Bush using this argument. "We have no way of
imposing our will on evil-person so-and-so except by starting a war
and killing millions of people, so, golly shucks, we just have to
start the war. Sorry guys!"

Saying that there is no way to meet your goals other than by doing
some bad thing does not somehow eliminate the badness of the thing.
It is you who wants to avoid cooperating with the IETF on anti-spam
measures, finding solutions that perhaps can work for the whole
network. Not me.

Thomas
Thomas Bushnell BSG
2006-07-10 17:07:01 UTC
Permalink
Post by Andreas Metzler
[...]
Post by Thomas Bushnell BSG
It assumes, for example, that the remote MTA will use the same IP
address each time it sends the message.
[...]
eh no. Standard greylisting practice nowadays (it was already standard when
sarge was released) is to greylist not on the host IP but at least on the /27
netblock.
Then, "it assumes, for example, that the remote MTA will use the same
/27 netblock each time it sends the message."

Thomas
Andreas Metzler
2006-07-10 18:09:25 UTC
Permalink
Post by Thomas Bushnell BSG
Post by Andreas Metzler
[...]
Post by Thomas Bushnell BSG
It assumes, for example, that the remote MTA will use the same IP
address each time it sends the message.
[...]
eh no. Standard greylisting practice nowadays (it was already standard when
sarge was released) is to greylist not on the host IP but at least on the /27
netblock.
Then, "it assumes, for example, that the remote MTA will use the same
/27 netblock each time it sends the message."
No. It assumes that the sending MTA will not circle through a
number of different /27 netblocks that is so big that the retry limit
will be hit before successful delivery.
cu andreas
Stephen Gran
2006-07-10 16:41:20 UTC
Permalink
Post by Thomas Bushnell BSG
Post by martin f krafft
Anyway, I'll be interested to hear a summary of their arguments, as
Christian Perrier requested. I find it hard to imagine how properly
configured greylisting should cause any problems.
It's a violation of the standard. It is especially problematic
because it is a violation of the spirit of being liberal in what
you accept, and conservative in what you require.
Sadly, those days may be coming to an end.
Post by Thomas Bushnell BSG
It assumes, for example, that the remote MTA will use the same IP
address each time it sends the message. If the remote MTA is a big
server farm, with a lot of different hosts that could be processing
the mail, what is your strategy for preventing essentially infinite
delay?
I use a greylist implementation that autowhitelists after a configurable
number of successful retries for a tuple. Assuming you mean places like
yahoo or aol, the essentially infinite delay you speak of has never been
an issue so far. They all end up whitelisted after a while, and then
mail from them proceeds without delay. Assuming the number of users
debian has, it shouldn't take very long to record hits for all of their
outbound servers.
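The mechanism described here, a per-tuple delay plus an auto-whitelist counter, might be sketched like this (all names and thresholds are illustrative, not taken from postgrey or any real implementation):

```python
import ipaddress
import time

GREY_DELAY = 300    # seconds a new tuple must wait before acceptance
AWL_AFTER = 5       # successful greylist passes before auto-whitelisting

first_seen = {}     # (netblock, sender, recipient) -> time of first attempt
awl_hits = {}       # netblock -> count of successful retries seen

def check(ip, sender, recipient, now=None):
    """Return an SMTP-style verdict for one delivery attempt."""
    now = time.time() if now is None else now
    netblock = str(ipaddress.ip_network(f"{ip}/27", strict=False))
    if awl_hits.get(netblock, 0) >= AWL_AFTER:
        return "250 accept"            # host pool is auto-whitelisted
    key = (netblock, sender, recipient)
    if key not in first_seen:
        first_seen[key] = now
        return "451 try again later"   # first attempt: temporary reject
    if now - first_seen[key] >= GREY_DELAY:
        awl_hits[netblock] = awl_hits.get(netblock, 0) + 1
        return "250 accept"            # proper retry after the delay
    return "451 try again later"       # retried too soon
```

A retrying MTA is delayed once per tuple; after enough successful retries the whole netblock skips the delay, which is why large server pools stop being affected once a few of their hosts have been seen.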
Post by Thomas Bushnell BSG
Another problem is with hosts that do not accept a message from an MTA
unless that MTA is willing to accept replies. This is a common spam
prevention measure. The graylisting host cannot then send mail to
such sites until they've been whitelisted, because when they try the
reverse connection out, it always gets a 4xx error. I've been bitten
by this one before.
That is an odd implementation of sender callouts designed by someone who
doesn't understand SMTP, and is not really an issue for the conversation
at hand. Normal sender callouts, which route the message to the public
MX, have their pros and cons, but it's not under discussion at the
moment.
--
-----------------------------------------------------------------
| ,''`. Stephen Gran |
| : :' : ***@debian.org |
| `. `' Debian user, admin, and developer |
| `- http://www.debian.org |
-----------------------------------------------------------------
Marco d'Itri
2006-07-05 16:07:26 UTC
Permalink
Post by Wolfgang Lonien
@lists.debian.org?
No, we prefer to silently junk messages to mailing lists which appear
to be spam.

The @debian.org addresses have no filtering at all, so I implemented
some myself, which so far has been working very well:

***@master:~$cat .forward
| /home/md/bin/I-do-not-use-this-address
***@master:~$cat /home/md/bin/I-do-not-use-this-address
#!/bin/sh
echo "***********************************************************"
echo "************** PLEASE MAIL ME AT ***@linux.it **************"
echo "***********************************************************"
echo ""
echo "I never used my @debian.org address and I had to disable it because"
echo "it delivers a huge quantity of spam and almost no legitimate mail."
echo "If you want to send me mail you can use my usual ***@linux.it address."
echo ""
echo "If your address was forged by a spammer and you received this"
echo "backscatter bounce, feel free to report it to ***@debian.org."
exit 1
***@master:~$
--
ciao,
Marco
martin f krafft
2006-07-05 16:13:06 UTC
Permalink
Post by Marco d'Itri
echo "If your address was forged by a spammer and you received this"
Very productive and cooperative.
--
Please do not send copies of list mail to me; I read the list!

.''`. martin f. krafft <***@debian.org>
: :' : proud Debian developer and author: http://debiansystem.info
`. `'`
`- Debian - when you have better things to do than fixing a system

i've not lost my mind. it's backed up on tape somewhere.
Matthew R. Dempsky
2006-07-05 16:58:19 UTC
Permalink
(Is debian-devel the correct list for this?)
Post by Wolfgang Lonien
If not, then we should probably try it
Can it be limited to suspected spam (e.g. mail with a high smtp-time
spamassassin score)? Others may disagree, but I prefer the small
amount of spam that does plague Debian's mailing lists to graylisting's
obnoxious delays for legitimate mail.
Christian Perrier
2006-07-16 06:36:31 UTC
Permalink
Post by Wolfgang Lonien
Hi all,
@lists.debian.org?
So, up to now, we've found Thomas Bushnell, who seems to be about the
only one firmly voting against greylisting on Debian hosts, with
arguments about it breaking established standards. I personally find
these arguments very nitpicking and mostly aimed at finding a
justification for an opinion that will definitely not change.

So far and unless I forget someone, I haven't seen much other people
being strongly opposed to greylisting on Debian hosts, especially with
the setup described by Pierre Habouzit (greylisting only "suspicious"
hosts).

Isn't it time to move on?
Lionel Elie Mamane
2006-07-17 20:29:23 UTC
Permalink
Post by Christian Perrier
Post by Wolfgang Lonien
@lists.debian.org?
So, up to now, we've found Thomas Bushnell, who seems to be about the
only one firmly voting against greylisting on Debian hosts, (...).
So far and unless I forget someone, I haven't seen much other people
being strongly opposed to greylisting on Debian hosts,
Here is one: I am strongly opposed to greylisting (on mail sent to me
or that I send), for the reason that it delays legitimate mail.

Best Regards,
--
Lionel
Pierre Habouzit
2006-07-17 21:48:21 UTC
Permalink
Post by Lionel Elie Mamane
Here is one: I am strongly opposed to greylisting (on mail sent to me
or that I send), for the reason that it delays legitimate mail.
which shows that you didn't read the discussion, which was about enabling
greylisting on *certain*, *specifically* *suspicious* hosts. A suspicious
host is:
* either listed on some RBLs (RBLs listing "dynamic" blocks are usually
  a good start),
* or has no reverse DNS set,
* or has curious EHLO lines (that one may sadly catch too much good
  mail, so it is to be handled with care),
* ...
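The first two of those criteria are mechanical to check. A rough sketch in Python (the DNSBL zone name is only an example, this handles IPv4 only, and a production check would also deal with lookup timeouts):

```python
import ipaddress
import socket

def dnsbl_query_name(ip, zone="zen.spamhaus.org"):
    """Build the DNSBL lookup name: the IP's octets reversed, then the zone."""
    addr = ipaddress.ip_address(ip)
    if addr.version != 4:
        raise ValueError("IPv4 only in this sketch")
    octets = addr.exploded.split(".")
    return ".".join(reversed(octets)) + "." + zone

def has_reverse_dns(ip):
    """True if the IP resolves to a PTR record."""
    try:
        socket.gethostbyaddr(ip)
        return True
    except socket.herror:
        return False

def is_suspicious(ip):
    """Greylist only hosts matching the first two criteria above."""
    listed = False
    try:
        socket.gethostbyname(dnsbl_query_name(ip))  # any A record = listed
        listed = True
    except socket.gaierror:
        pass  # NXDOMAIN: not listed
    return listed or not has_reverse_dns(ip)
```

Hosts that fail neither test are not greylisted at all, which is what keeps the delayed fraction so small.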

I apply greylisting on the first two criteria on a quite heavily used
mail server (around 300k mails per week, which is not very big, but
should be representative enough).

There are fewer than 50 mails a week among those that *may* be
legitimate mails and are actually slowed down.

so *please* do me a favour and read the thread you are answering to,
because you are really answering miles away from the debate.

and if you never actually realized, there *IS* such a slowdown on debian
mailing lists: it's called crossassassin, it kills master on a regular
basis, and is *REALLY* less effective than greylisting.

when spam makes our MX load go to highs I never suspected a machine
could resist, I think maybe it's time to try a more robust solution.

Pierre, who is pissed that his @debian.org address is barely more usable
than a hotmail one (and I do not know of any worse mail service on the
entire web).
--
·O· Pierre Habouzit
··O ***@debian.org
OOO http://www.madism.org
Lionel Elie Mamane
2006-07-18 07:34:06 UTC
Permalink
Post by Pierre Habouzit
Post by Lionel Elie Mamane
Here is one: I am strongly opposed to greylisting (on mail sent to
me or that I send), for the reason that it delays legitimate mail.
which shows that you didn't read the discussion
Wrong. Disagreeing with you is not the same as not reading your
arguments. Sorry that you were not convincing.
Post by Pierre Habouzit
that was about enabling greylisting on *certain*, *specifically*
*suspicious* hosts.
I know.
Post by Pierre Habouzit
* either listed on some RBL's (rbl listing "dynamic" blocks are a good
start usually)
* either having no reverse DNS set
* either having curious EHLO lines (that one may catch too much good
mail sadly, so it's to handle with care).
* ...
This will still include legitimate mail.
Post by Pierre Habouzit
I apply greylisting on the first two criteria on a quite heavily used
mail server (around 300k mails per week, which is not very big, but
should be representative enough).
there is less than 50 mails a week over those that *may* be
legitimate mails that are actually slowed down.
Bingo: Legitimate mail slowed down. You think the price is worth it,
which is a valid opinion. I happen not to think so.

Usually when mail I send gets greylisted, it is because the software
thinks I am "suspicious".
Post by Pierre Habouzit
so *please* do me a favour, read the thread you are answering to,
I did.
Post by Pierre Habouzit
because you really really answer miles away from the debate.
No, I'm not. I'm expressing an opinion after reading all of the
debate, based on the points of it I remember.
Post by Pierre Habouzit
and if you never actually realized, there *IS* such a slowdown on
debian mail lists, it's called crossassassin, it kills master on a
regular basis, and is *REALLY* less effective than greylisting.
I don't remember the "master cannot cope under mail load, we need
desperate measures" point being brought up before. I may have missed
it.


Best Regards,
--
Lionel
--
To UNSUBSCRIBE, email to debian-devel-***@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact ***@lists.debian.org
Pierre Habouzit
2006-07-18 07:56:58 UTC
Permalink
Post by Lionel Elie Mamane
This will still include legitimate mail.
something like 50 out of 300k is about 0.017%.

which is also much lower than the usual false-positive rate of your
bayesian mail filter. See the end of this mail.
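For reference, the arithmetic behind that figure, using the numbers quoted in this mail:

```python
delayed_per_week = 50        # upper bound on possibly-legitimate mails delayed
total_per_week = 300_000     # weekly volume of the server described above

rate = delayed_per_week / total_per_week * 100
print(f"{rate:.4f}%")  # prints 0.0167%
```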
Post by Lionel Elie Mamane
Post by Pierre Habouzit
and if you never actually realized, there *IS* such a slowdown on
debian mail lists, it's called crossassassin, it kills master on a
regular basis, and is *REALLY* less effective than greylisting.
I don't remember the "master cannot cope under mail load, we need
desperate measures" point being brought up before. I may have missed
it.
these days master has a high load on a regular basis:
load average: 239.68, 299.68, 326.84

from IRC a couple of days ago,


What I experience as a debian developer is that:

* 80% of the overall spam that eventually reaches my inbox came through
my debian.org account, which makes reading such a mailbox really hard,
and I'm pretty sure that I miss more than 0.016% of legitimate mail in
my readings.

* my @debian.org address suffers considerable slowdowns due to our MXs
being overloaded from time to time. 80% of the time, it's because of
crossassassin going mad, or some spam attack.


Just take some factual numbers: I receive something like 300 mails a day
(tops; I think the mean value is more like 150). That makes 109,500
mails a year. I know for a fact that my bayesian filter makes something
like 4 to 5 errors per year. And yes, I know how to train one. So my
bayesian mail filter has at least a 0.05% false positive rate, and I'm
really convinced it's in fact more like 0.1% (maybe even more).

SA is used extensively on debian hosts, and I'm also quite sure it has
worse rates than 0.1%. So you are claiming that greylisting is a really
bad method? Come on!

Currently, I receive so many spams through debian that I just CAN'T sort
them. It's something like 90 spams a *day* sometimes. How do you find
the time to look at the good mail in there? I can't. So by not delaying
0.016% of the legitimate mails, you make a lot of people *LOSE* far
more than that.

Please, your point is only made of impressions; now you have numbers.
--
·O· Pierre Habouzit
··O ***@debian.org
OOO http://www.madism.org
Lionel Elie Mamane
2006-07-18 08:00:20 UTC
Permalink
the discussion (...) was about enabling greylisting on *certain*,
*specifically* *suspicious* hosts. a suspicious
* either listed on some RBL's (rbl listing "dynamic" blocks are a good
start usually)
* either having no reverse DNS set
* either having curious EHLO lines (that one may catch too much good
mail sadly, so it's to handle with care).
* ...
I apply greylisting on the first two criteria on a quite heavily used
mail server (around 300k mails per week, which is not very big, but
should be representative enough).
there is less than 50 mails a week over those that *may* be
legitimate mails that are actually slowed down.
On second thought, I'm very interested in how you measured this false
positive rate. Do all the recipients of the 300k mails per week check,
for every mail, whether it was greylisted (which would mean you add a
header or something like that saying "this mail was greylisted")? And do
they _always_ check _every_ legitimate mail and _always_ report false
positives to you? Probably not. So, are these 50 mails a week all the
mail that undergoes greylisting but *still* goes through (i.e. gets
retried, roughly)? Something else?
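One plausible way such a figure could be obtained (an assumption about the setup, not a description of it) is to count, in the greylisting log, the deliveries that were deferred at least once and later accepted: those are exactly the mails that were actually slowed down, whether or not any recipient noticed. A sketch, assuming a hypothetical one-line-per-attempt log format:

```python
def count_delayed_deliveries(log_lines):
    """Count triplets that were deferred at least once and eventually
    accepted, i.e. real mail that was actually slowed down.
    Assumes a hypothetical log format: '<verdict> <ip> <sender> <recipient>'."""
    deferred = set()
    delayed = set()
    for line in log_lines:
        verdict, ip, sender, recipient = line.split()
        triplet = (ip, sender, recipient)
        if verdict == "tempfail":
            deferred.add(triplet)
        elif verdict == "accept" and triplet in deferred:
            delayed.add(triplet)
    return len(delayed)
```

Counted this way, the figure includes every delayed legitimate mail, not just the ones recipients bothered to report.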
--
Lionel
--
To UNSUBSCRIBE, email to debian-devel-***@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact ***@lists.debian.org
Josselin Mouette
2006-07-17 22:47:49 UTC
Permalink
Post by Lionel Elie Mamane
Post by Christian Perrier
Post by Wolfgang Lonien
@lists.debian.org?
So, up to now, we've found Thomas Bushnell, who seems to be about the
only one firmly voting against greylisting on Debian hosts, (...).
So far and unless I forget someone, I haven't seen much other people
being strongly opposed to greylisting on Debian hosts,
Here is one: I am strongly opposed to greylisting (on mail sent to me
or that I send), for the reason that it delays legitimate mail.
I refused greylisting for a long time for that exact reason.
However, the setup Pierre Habouzit describes does not delay most
legitimate mail. Frankly, the remaining delays are sporadic and one can
live with them.

I'm applying greylisting if one of these conditions is met:
* the incoming IP is listed in a DUL;
* Exim sender/callout fails with a fatal error.
This setup has considerably reduced both the load and the amount of spam
on the server. However, I still have to deal with @debian.org spam with
a less and less efficient (and more and more CPU-consuming) bayesian
filter, as it cannot be filtered out this way.
--
.''`. Josselin Mouette /\./\
: :' : ***@ens-lyon.org
`. `' ***@debian.org
`- Debian GNU/Linux -- The power of freedom
Lionel Elie Mamane
2006-07-18 07:47:13 UTC
Permalink
Post by Josselin Mouette
Post by Lionel Elie Mamane
Post by Christian Perrier
Post by Wolfgang Lonien
@lists.debian.org?
So, up to now, we've found Thomas Bushnell, who seems to be about
the only one firmly voting against greylisting on Debian hosts, (...).
So far and unless I forget someone, I haven't seen much other
people being strongly opposed to greylisting on Debian hosts,
Here is one: I am strongly opposed to greylisting (on mail sent to
me or that I send), for the reason that it delays legitimate mail.
I have refused greylisting for a long time for that exact reason.
However the setup Pierre Habouzit describes does not delay most of
legitimate mail.
That is the crux of the disagreement. You guys think that as long as
"most" of the legitimate mail is not delayed, the price is worth it. I
don't think so.
Post by Josselin Mouette
Frankly, the remaining delays are sporadic and one can live with
them.
Knowing that most legitimate mail doesn't get delayed doesn't make me
feel better when mail I sit waiting for gets delayed. Obviously, for
most mail I don't care, as I don't sit waiting for it; I batch-process
it a few times per day or per week, so a half-hour delay on it I don't
even see. For *most* mail.
Post by Josselin Mouette
* the incoming IP is listed in a DUL;
Bingo! You hit a hot button of mine.
Post by Josselin Mouette
* Exim sender/callout fails with a fatal error.
"Fatal" means not temporary?
--
Lionel
--
To UNSUBSCRIBE, email to debian-devel-***@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact ***@lists.debian.org
Josselin Mouette
2006-07-18 07:55:35 UTC
Permalink
Post by Lionel Elie Mamane
That is the crux of the disagreement. You guys think that as long as
"most" of the legitimate mail is not delayed, the price is worth it. I
don't think so.
If too much spam gets through, *all* legitimate mail gets delayed. It
gets delayed by the additional filters it has to get through thereafter,
and it gets delayed by having to dig it out of a mailbox full of spam.
Post by Lionel Elie Mamane
Post by Josselin Mouette
* Exim sender/callout fails with a fatal error.
"Fatal" means not temporary?
It means either the domain doesn't exist, or the server explicitly
replied the user doesn't exist.
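In SMTP terms (RFC 5321), a reply in the 4xx range is a temporary failure the sender should retry, while a 5xx reply is permanent. A callout verdict along the lines Josselin describes could be classified like this (a simplified sketch, not Exim's actual logic):

```python
def callout_verdict(smtp_code):
    """Classify an SMTP reply code from a sender callout.
    2xx: accepted; 4xx: temporary, retry later;
    5xx: permanent ('fatal'), e.g. 550 user unknown."""
    if 200 <= smtp_code < 300:
        return "ok"
    if 400 <= smtp_code < 500:
        return "temporary"
    if 500 <= smtp_code < 600:
        return "fatal"
    raise ValueError(f"not an SMTP reply code: {smtp_code}")
```

Greylisting only on the "fatal" verdict avoids penalising servers that are merely having a transient problem of their own.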
--
.''`. Josselin Mouette /\./\
: :' : ***@ens-lyon.org
`. `' ***@debian.org
`- Debian GNU/Linux -- The power of freedom
Thomas Bushnell BSG
2006-07-17 21:26:04 UTC
Permalink
Post by Christian Perrier
So, up to now, we've found Thomas Bushnell who seems really hardly
voting against greylisting on Debian hosts, with arguments about it
breaking established standards. I personnally find these arguments
very nitpicking and mostly aimed at finding a justification for an
opinion that will definitely not change.
I'm not a nitpicker for its own sake; I'm a nitpicker for the
principle "be liberal in what you accept and conservative in what you
send." That calls for being nitpicky on one side and not the other.

Still, if you think it's just nitpicking, then why not ask the IETF to
amend the standard to clearly permit this practice?

And finally, if we don't care about standards conformance, I have said
that a good second-best is to document exactly what our requirements
are, rather than burying them in apparent secrecy.

This is not about stonewalling. So how about the last of these: clear
and accurate documentation?

Thomas
Stephen Gran
2006-07-17 21:42:22 UTC
Permalink
Post by Thomas Bushnell BSG
And finally, if we don't care about standards conformance, I have said
that a good second-best is to document exactly what our requirements
are, rather than burying them in apparent secrecy.
What standards, exactly? Can you be specific? I have seen you assert
this several times, but I see nothing in the RFCs preventing a site from
saying it has a temporary local problem when it doesn't. You've been
asked this before in response to your assertion, and haven't answered.
--
-----------------------------------------------------------------
| ,''`. Stephen Gran |
| : :' : ***@debian.org |
| `. `' Debian user, admin, and developer |
| `- http://www.debian.org |
-----------------------------------------------------------------
Adam Borowski
2006-07-17 21:58:20 UTC
Permalink
Post by Stephen Gran
Post by Thomas Bushnell BSG
And finally, if we don't care about standards conformance, I have said
that a good second-best is to document exactly what our requirements
are, rather than burying them in apparent secrecy.
What standards, exactly? Can you be specific? I have seen you assert
this several times, but I see nothing in the RFCs preventing a site from
saying it has a temporary local problem when it doesn't.
Even worse, there's nothing preventing a site from saying it has a
temporary local problem when it _does_. Thus, if your mail server
can't handle retrying, it will drop mail every time something is not
in perfect working order. And hardware or network failures are
something to be expected.

Any legitimate server must support retrying. For any reason.
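That retrying is what an MTA's queue runner does for any 4xx response, typically on an increasing schedule. A minimal sketch of such a schedule (Exim's real retry rules are configurable and more elaborate; the intervals here are illustrative):

```python
def retry_times(first=15 * 60, factor=2, cap=6 * 60 * 60,
                give_up=5 * 24 * 60 * 60):
    """Yield delays (in seconds) between delivery attempts: start at
    15 minutes, double each time up to a 6-hour cap, and give up after
    roughly 5 days of total queue time."""
    total = 0
    delay = first
    while True:
        total += delay
        if total > give_up:
            return  # bounce the message back to the sender
        yield delay
        delay = min(delay * factor, cap)
```

A server that queues and retries like this sails through greylisting automatically; only senders that treat a 4xx as final ever lose mail to it.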
--
1KB // Microsoft corollary to Hanlon's razor:
// Never attribute to stupidity what can be
// adequately explained by malice.
Marco d'Itri
2006-07-17 23:22:00 UTC
Permalink
Post by Thomas Bushnell BSG
Still, if you think it's just nitpicking, then why not ask the IETF to
amend the standard to clearly permit this practice?
Because there is no reason to do this: this is not a standards issue,
just plain operations.
--
ciao,
Marco