We want to be able to accept mails from `.onion` domains. We cannot have DKIM authentication because onion services have no DNS support (see https://forum.torproject.org/t/mock-dns-records-for-onion-services/4671 and https://media.ccc.de/v/gpn22-469-self-authenticating-tls-certificates-for-tor-onion-services for some work on this, driven by the need for TLSA records). However, we can authenticate by pulling from the `.onion` domain itself. We need an HTTP service mounted at `/im2000/...` with a webhook at `/im2000/push` that listens for POST requests saying "pull mail from `http://foobarbaz.onion/im2000/<blake3 hash of the message>`". The service then pulls the mail over HTTP with a GET request, checks that the `From` field of each mail matches `foobarbaz.onion`, and drops the mails into the Postfix pickup queue. It should then issue a DELETE request to confirm delivery. If a mail is not picked up for some time, the sending service will eventually remove it. A sketch of this flow follows below.
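A minimal sketch of the receiving side, assuming the POST body is just the URL to pull from; Flask, `requests[socks]` and the `blake3` package are assumed dependencies, `queue_for_pickup` via Postfix's `sendmail(1)` (which writes to the maildrop queue that `pickup(8)` scans) is one possible injection path, and the Tor SOCKS port is the default 9050:

```python
import re
import subprocess
from email import message_from_bytes
from email.utils import parseaddr

import blake3            # assumed dependency: pip install blake3
import requests          # assumed dependency: pip install requests[socks]
from flask import Flask, abort, request

app = Flask(__name__)

# socks5h:// makes Tor resolve the .onion name instead of the local resolver.
TOR_PROXY = {"http": "socks5h://127.0.0.1:9050"}

# Assumption: only v3 onion hosts and 64-hex-digit blake3 digests are allowed.
ONION_URL = re.compile(r"^http://([a-z2-7]{56}\.onion)/im2000/([0-9a-f]{64})$")


def queue_for_pickup(raw: bytes) -> None:
    """Hand the message to Postfix; sendmail(1) writes it to the maildrop
    queue, which pickup(8) then scans."""
    subprocess.run(["/usr/sbin/sendmail", "-t", "-oi"], input=raw, check=True)


@app.post("/im2000/push")
def push():
    url = request.get_data(as_text=True).strip()
    m = ONION_URL.match(url)
    if not m:
        abort(400)  # refuse to pull from URLs of any other form
    domain, claimed_hash = m.groups()

    resp = requests.get(url, proxies=TOR_PROXY, timeout=60)
    resp.raise_for_status()
    raw = resp.content

    # The path component claims to be the blake3 hash of the message; verify.
    if blake3.blake3(raw).hexdigest() != claimed_hash:
        abort(400)

    # Authenticate: the From domain must be the .onion we pulled from.
    _, addr = parseaddr(message_from_bytes(raw).get("From", ""))
    if not addr.endswith("@" + domain):
        abort(400)

    queue_for_pickup(raw)
    requests.delete(url, proxies=TOR_PROXY, timeout=60)  # confirm delivery
    return "", 204
```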
To discover whether we can deliver over HTTP, we can do what the MTA-STS daemon does and have a separate transport policy daemon that checks a `.well-known` path or just the `/im2000` URL (over HTTPS for non-onion domains, over plain HTTP for `.onion`) to see if the pull-based delivery protocol is supported. See Internet Mail 2000 for previous discussion of a similar concept. A discovery sketch follows below.
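A minimal sketch of such a discovery check; the exact path and what counts as a positive answer are assumptions here, not a spec:

```python
import requests  # assumed dependency: pip install requests[socks]

TOR_PROXY = {"http": "socks5h://127.0.0.1:9050"}


def supports_pull_delivery(domain: str) -> bool:
    """Probe `domain` for the pull-based delivery endpoint."""
    if domain.endswith(".onion"):
        url, proxies = f"http://{domain}/im2000", TOR_PROXY
    else:
        url, proxies = f"https://{domain}/im2000", None
    try:
        return requests.get(url, proxies=proxies, timeout=30).ok
    except requests.RequestException:
        return False
```

Like the MTA-STS daemon, this one could expose its results to Postfix as a map lookup, so a positive probe routes the domain to the HTTP transport and everything else falls back to SMTP.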
The end goal of this is to allow chatmail servers that have no real domain, only a `.onion` domain, to federate with each other and with non-onion chatmail servers.
The DMTP proposal from 2006 is more complicated than we need: it differentiates between spammers and non-spammers and allows both SMTP for mails that pass the filter and DMTP (pull) for other mails. We are not really interested in spam filtering and just allow SMTP. We need pull-based delivery for authentication of onion services and for delivery without port 25, not as an anti-spam measure.
Stubmail (by Meng Wong, author of SPF) is the most interesting proposal: it uses HTTP for delivery, but for notifications it uses "stub" mails, which are just normal mails announcing that a message is available. Such notifications are delivered over SMTP, whereas we will use HTTP instead.
I have not looked too deeply, but here is a PDF of a talk: http://www.mengwong.com/rssemail/200607260-oscon-lightningtalk.pdf
Talk "Turning Email Upside Down: RSS/Email and IM2000" is at https://www.youtube.com/watch?v=egHGwitIC1Q
There is some research from 2010 on pull-based delivery that proposes extending SMTP with `GDEL` (a notification about new mail, similar to the webhook POST request) and `RETR` (downloading a mail, similar to the GET request).
The main problem is preventing the webhook from being abused to trigger arbitrary HTTP GET requests, so we should only allow URLs of a fixed form and remember when a domain does not return the expected answer, so that requests to the same domain cannot be triggered again, at least for a day. A sketch of such a negative cache follows below.
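A minimal sketch of the per-domain negative cache, assuming an in-process dict is acceptable (a real daemon would want to persist this across restarts):

```python
import time

FAILURE_TTL = 24 * 60 * 60  # ignore a misbehaving domain for a day
_failed_until: dict[str, float] = {}


def domain_blocked(domain: str) -> bool:
    """True if `domain` recently failed to return the expected answer."""
    expiry = _failed_until.get(domain)
    if expiry is not None and time.monotonic() < expiry:
        return True
    _failed_until.pop(domain, None)  # expired or never failed: allow again
    return False


def record_failure(domain: str) -> None:
    _failed_until[domain] = time.monotonic() + FAILURE_TTL
```

The webhook handler would call `domain_blocked()` before issuing the GET, and `record_failure()` whenever the response is not a well-formed mail with a matching hash and `From` domain.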
We should also make sure that the HTTP endpoint cannot be used to host arbitrary files: it should serve files only with an email MIME type. Matrix had a similar problem with the possibility of abusing servers as a CDN: https://matrix.org/docs/spec-guides/authed-media-servers/ The GET request should also require a special header, e.g. `Authorization`, which cannot easily be sent from a browser requesting images. A sketch of the serving side follows below.
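A minimal sketch of the serving side; the `IM2000` authorization scheme and the in-memory outbox are illustrative assumptions. The point is that a plain `<img>` tag cannot attach the header (and a cross-origin `fetch` with it would need a CORS preflight we never approve), and the fixed `message/rfc822` content type stops the endpoint from doubling as a file host:

```python
from flask import Flask, Response, abort, request

app = Flask(__name__)

# blake3 hex digest -> raw RFC 5322 message; illustrative in-memory outbox.
OUTBOX: dict[str, bytes] = {}


@app.get("/im2000/<msg_hash>")
def fetch(msg_hash: str):
    # Browsers cannot attach Authorization to <img> loads, and cross-origin
    # fetches carrying it trigger a CORS preflight that is never approved.
    if not request.headers.get("Authorization", "").startswith("IM2000 "):
        abort(401)
    raw = OUTBOX.get(msg_hash)
    if raw is None:
        abort(404)
    # Serve only as an email, never as a sniffable arbitrary file.
    return Response(raw, mimetype="message/rfc822",
                    headers={"X-Content-Type-Options": "nosniff"})


@app.delete("/im2000/<msg_hash>")
def confirm_delivery(msg_hash: str):
    OUTBOX.pop(msg_hash, None)  # the recipient confirms delivery
    return "", 204
```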