From info at netocean.de Sat Oct 1 00:39:58 2016 From: info at netocean.de (=?UTF-8?Q?Leander_Sch=c3=a4fer?=) Date: Sat, 1 Oct 2016 02:39:58 +0200 Subject: acl_group not working correctly In-Reply-To: <2416c979-e387-2301-f485-1d88d39ebcac@netocean.de> References: <2416c979-e387-2301-f485-1d88d39ebcac@netocean.de> Message-ID: <21cc4903-838d-d63c-335f-e9d0c4579910@netocean.de> Any idea? On 17.09.16 at 00:44, Leander Schäfer wrote: > Hi, > > I'm trying to set up group-based ACLs coming from OpenLDAP. My setup > doesn't require a POSIX group match. In the Dovecot configuration file > I have this: "user_attrs = [...], mailAclGroups=acl_groups" as well as > "acl = vfile:/usr/local/etc/dovecot/global-acls:cache_secs=300". The > user has "public" in the LDAP attribute "mailAclGroups". It seems to > get everything right. I checked with doveadm - and I see public is > listed as expected: > > cat /var/log/debug.log > [...] > Sep 16 23:39:04 WM-01 dovecot: auth: Debug: client passdb out: > OK 1 user=leander at mydomain.localdomain acl_groups=public > [...] > > cat /usr/local/etc/dovecot/global-acls > INBOX owner lrwstipekxa > Drafts owner lrwstipeka > Sent owner lrwstipeka > Spam owner lrwstipeka > Trash owner lrwstipeka > Public authenticated l > Public group-override=public lrwstipekx > Public/* group-override=public lrwstipekx > > > doveadm mailbox list -u leander at mydomain.localdomain > Drafts > Sent > Trash > Spam > Shared > Public > Public/Service Center > Shared/test at mydomain.localdomain > Shared/test at mydomain.localdomain/Drafts > Shared/test at mydomain.localdomain/Sent > Shared/test at mydomain.localdomain/Trash > Shared/test at mydomain.localdomain/Spam > INBOX > > > But here comes the strange thing: telnet shows the same as Thunderbird: > .
LIST "" "*" > * LIST (\HasNoChildren \Drafts) "/" Drafts > * LIST (\HasNoChildren \Sent) "/" Sent > * LIST (\HasNoChildren \Trash) "/" Trash > * LIST (\HasNoChildren \Junk) "/" Spam > * LIST (\Noselect \HasChildren) "/" Shared > * LIST (\HasChildren) "/" Shared/test at mydomain.localdomain > * LIST (\HasNoChildren) "/" Shared/test at mydomain.localdomain/Drafts > * LIST (\HasNoChildren) "/" Shared/test at mydomain.localdomain/Sent > * LIST (\HasNoChildren) "/" Shared/test at mydomain.localdomain/Trash > * LIST (\HasNoChildren) "/" Shared/test at mydomain.localdomain/Spam > * LIST (\HasNoChildren) "/" INBOX > . OK List completed (0.000 + 0.000 + 0.092 secs). > > > Public and Public/* should be listed as well, but they aren't. Any idea > why it is behaving like this? > Thanks > > Best regards > Leander Schäfer From info at netocean.de Sat Oct 1 00:41:21 2016 From: info at netocean.de (=?UTF-8?Q?Leander_Sch=c3=a4fer?=) Date: Sat, 1 Oct 2016 02:41:21 +0200 Subject: Bug: Shared Mailbox - Case Sensitivity In-Reply-To: <786ad6de-e1a1-974b-0285-65606ba3c010@netocean.de> References: <14031913-6e4c-1a0d-7e4e-090407c6dca2@netocean.de> <01c317c6-5b1b-d6ef-12fb-720b9c105cdd@dovecot.fi> <786ad6de-e1a1-974b-0285-65606ba3c010@netocean.de> Message-ID: Am I missing something, or might this be a bug, as it seems to me? On 16.09.16 at 14:21, Leander Schäfer wrote: > Hi Aki, > > > Thanks for your advice. Yes, I'm aware of this. Yet lowercasing should > be the default since Dovecot 2.1.x, shouldn't it? Also, I wouldn't know > where exactly to implement this %L, since the ACLs are set through > IMAP commands from the user's mail client, such as Thunderbird. So in > other words, the email address to which the user wants to grant ACLs, > provided by the user's mail client, has nothing to do with my auth > backend where e.g. %u => %Lu would apply. Please correct me if I'm > wrong here. > > > It clearly looks like a bug in the internal processing of the > "dovecot-acl-list" files.
It simply lacks lowercase enforcement > in the code, like it already seems to do for the "dovecot-acl" file. > > > Best regards > > Leander Schäfer > > > > On 16.09.16 at 12:53, Aki Tuomi wrote: >> >> On 16.09.2016 12:54, Leander Schäfer wrote: >>> Hi, >>> >>> unfortunately I found a bug in Dovecot's ACL handling for shared >>> mailboxes. It turns out Dovecot doesn't enforce lowercasing the >>> privileged username to whom the mailbox should be shared. This >>> results in an invalid configuration. Users get confused, since they >>> passed on a valid email address in their ACL setup. >>> >>> /usr/local/www/default/mail/test at mydomain.localdomain/maildir/.Spam/dovecot-acl >>> >>> >>> user=leander at mydomain.localdomain eilrwts >>> ^^ works >>> >>> /usr/local/www/default/mail/leander at mydomain.localdomain/maildir/dovecot-acl >>> >>> >>> user=test at mydomain.localdomain eilrwts >>> ^^ works >>> >>> /usr/local/www/default/mail/test at mydomain.localdomain/maildir/.Drafts/dovecot-acl >>> >>> >>> user=Leander at MyDomain.LocalDomain eilrwts >>> ^^ Doesn't work >>> >>> Best regards >>> Leander Schäfer >> Hi! Did you know you can use %Lu instead of %u to force lowercasing? >> >> Aki From noel.butler at ausics.net Sat Oct 1 01:24:24 2016 From: noel.butler at ausics.net (Noel Butler) Date: Sat, 01 Oct 2016 11:24:24 +1000 Subject: NFSv4 and Maildir In-Reply-To: References: Message-ID: <274f400f5a183c980a632dfe4f71e850@ausics.net> On 01/10/2016 08:27, Joseph Tam wrote: >> we have a setup with (CentOS 6) Director+Dovecot, Maildir as storage >> on >> NetApp NFS v3. Every time I try to switch to NFS v4 I found issues with >> locking (and others). So for me NFSv4 with Maildir is "unstable" or needs >> a >> fine tuning that I don't know. > > I found the same thing, and turning off write delegation seemed > to have solved the problem. I still don't know why, though. > > Joseph Tam write delegation is disabled by default on NetApp with v4, or have they changed this now?
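For Maildir on NFS generally, the Dovecot-side settings usually recommended look roughly like the following. This is only a sketch to compare against your own doveconf -n output, independent of the NetApp delegation question, and the values are illustrative:

    mmap_disable = yes      # don't mmap index files over NFS
    mail_fsync = always     # fsync aggressively; NFS delays writes
    mail_nfs_storage = yes  # flush NFS caches for mail files
    mail_nfs_index = yes    # same for index files; single-server
                            # setups can usually leave both off

With a director in front ensuring one backend per user, the two mail_nfs_* settings are often unnecessary; they matter when several servers touch the same mailbox.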
-------------- next part -------------- A non-text attachment was scrubbed... Name: 0x7FD036C7.asc Type: application/pgp-keys Size: 4773 bytes Desc: not available URL: From aki.tuomi at dovecot.fi Sat Oct 1 05:54:33 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Sat, 01 Oct 2016 08:54:33 +0300 Subject: Bug: Shared Mailbox - Case Sensitivity Message-ID: Can you provide doveconf -n? ---Aki TuomiDovecot oy -------- Original message --------From: Leander Sch?fer Date: 01/10/2016 03:41 (GMT+02:00) To: Aki Tuomi , Dovecot Mailing List , Timo Sirainen Subject: Re: Bug: Shared Mailbox - Case Sensitivity Am I missing something, or might this be a bug as it seems to me? Am 16.09.16 um 14:21 schrieb Leander Sch?fer: > Hi Aki, > > > Thanks for your advice. Yes, I'm aware of this. Yet lowercasing should > be the default since Dovecot 2.1.x., isn't it? Yet I wouldn't know > where exactly to implement this %L, since the ACLs are set through > IMAP commands through the users mailclient like Thunderbird. So in > other words, the email address to whom the user want to grant ACLs > provided by the user's mailclient, has nothing to do with my auth > backend where e.g. %u => %Lu would apply. PLease correct me if I'm > wrong here. > > > It clearly looks like a bug of the internal processing of the > "dovecot-acl-list" files. It simply lacks on a lowercase enforcement > in the code, like it already seems to do for the "dovecot-acl" file. > > > Best regards > > Leander Sch?fer > > > > Am 16.09.16 um 12:53 schrieb Aki Tuomi: >> >> On 16.09.2016 12:54, Leander Sch?fer wrote: >>> Hi, >>> >>> unfortunately I found a bug in Dovecot's ACL handling for shared >>> mailboxes. It turns out Dovecot doesn't enforce lower casing the >>> privileged username to whom the mailbox should be shared to. This >>> results in a invalid configuration. Users get confused, since they >>> passed on a valid email address in their ACL setup. 
>>> >>> /usr/local/www/default/mail/test at mydomain.localdomain/maildir/.Spam/dovecot-acl >>> >>> >>> user=leander at mydomain.localdomain eilrwts >>> ^^ works >>> >>> /usr/local/www/default/mail/leander at mydomain.localdomain/maildir/dovecot-acl >>> >>> >>> user=test at mydomain.localdomain eilrwts >>> ^^ works >>> >>> /usr/local/www/default/mail/test at mydomain.localdomain/maildir/.Drafts/dovecot-acl >>> >>> >>> user=Leander at MyDomain.LocalDomain eilrwts >>> ^^ Doesn't work >>> >>> Best regards >>> Leander Sch?fer >> Hi! Did you know you can use %Lu instead of %u to force lowercasing? >> >> Aki From aki.tuomi at dovecot.fi Mon Oct 3 10:01:39 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Mon, 3 Oct 2016 13:01:39 +0300 Subject: Shared folder in a sharded cluster setup In-Reply-To: <57EE7248.5070001@heinlein-support.de> References: <57EE7248.5070001@heinlein-support.de> Message-ID: On 30.09.2016 17:10, Peer Heinlein wrote: > > Hi! > > With Dovecot Director and Proxy or the new (great!) TAG-feature from > Dovecot it's easy to set up a shared IMAP-Cluster with individual local > filesystems. > > But I'm unsure if it's possible to build a setup where shared mailboxes > still can work. > > If user A is on Cluster (1) and user B is on (2), > and Cluster (1) does not have access to the mail-home from B on (2), > > then user A can not reach the shared folders provided from User B on (2). > > I hope that there is a kind of backend-proxy-mechanism, so that the imap > process of A on (1) can imap-proxy the requests for the shared folder to > a node from cluster shard (2). > > And: To be exact, the imap process on (1) should forward the request to > cluster (2) by the director system to make sure, that the connection > will terminate on the right active backend of User B. 
> > > This sounds like a special problem if local filesystems with mdbox are > used, and I know the great features of using Dovecot on Object Store, > where every node can check out all mail-locations from all users. > > But especially on obox systems it is very important that requests for a > user are always terminated on the same backend. So how can shared > folders work there?! Node (1) cannot check out the shared folders from > User B if his obox storage is already active on another host (2)! > > Peer > > > Just wanted to point out that asking obox questions here is a bit futile since it's a pro-only feature. Shared folders should not be a problem if your backends can access the same storage. Aki From scherff at blauwiesenweg.de Mon Oct 3 10:23:30 2016 From: scherff at blauwiesenweg.de (Scherff) Date: Mon, 3 Oct 2016 12:23:30 +0200 Subject: shared folders Message-ID: Hi, I am stuck trying to set up shared folders. Dovecot is running fine and ACL is working, but I can't get the shared folders running. Maybe someone can help. These are the relevant parts of the conf.
I think i have some mistake there - perhaps in location - changed try and error - stuck: mail_home = /var/vmail/mailboxes/%d/%n mail_location = maildir:~/mail:LAYOUT=fs namespace { hidden = no ignore_on_failure = no list = children location = maildir:%%h/mail:INDEX=%h/mail/shared/%%u:CONTROL=%h/mail/shared/%%u prefix = shared/%%u/ separator = / subscriptions = yes type = shared } namespace inbox { inbox = yes location = mailbox Archives { auto = subscribe special_use = \Archive } mailbox Drafts { auto = subscribe special_use = \Drafts } mailbox Notes { auto = subscribe } mailbox Sent { auto = subscribe special_use = \Sent } mailbox Spam { auto = subscribe special_use = \Junk } mailbox Trash { auto = subscribe special_use = \Trash } prefix = } plugin { # global acl - prevent expunche for some folder acl = vfile:/var/vmail/dovecot-acl acl_shared_dict = file:/var/vmail/db/shared-mailboxes.db } protocol imap { imap_idle_notify_interval = 15 mins mail_max_userip_connections = 30 mail_plugins = " acl imap_acl" } protocol lmtp { mail_plugins = " acl" } Need some hint. Thanks Alfred From jerry at seibercom.net Mon Oct 3 11:06:29 2016 From: jerry at seibercom.net (Jerry) Date: Mon, 3 Oct 2016 07:06:29 -0400 Subject: shared folders In-Reply-To: References: Message-ID: <20161003070629.000070f6@seibercom.net> On Mon, 3 Oct 2016 12:23:30 +0200, Scherff stated: >Hi, >i am stuck. Try to install shared folders - dovecot is running fine. >ACL is working. But i can't get running the shared folders. Maybe >someone can help. > >This are the relevant conf. I think i have some mistake there - >perhaps in location - changed try and error - stuck: What you are posting is not necessarily what Dovecot is seeing. 
Please post the complete output of "dovecot -n" -- Jerry From scherff at blauwiesenweg.de Mon Oct 3 11:58:56 2016 From: scherff at blauwiesenweg.de (Scherff) Date: Mon, 3 Oct 2016 13:58:56 +0200 Subject: shared folders Message-ID: <1ddff461-18c5-6a6b-5811-b8d949e338ad@blauwiesenweg.de> Well, these is the complete output for dovecot -n : mail:~ # dovecot -n # 2.2.18: /etc/dovecot/dovecot.conf # Pigeonhole version 0.4.8 (0c4ae064f307+) # OS: Linux 4.1.31-30-default x86_64 openSUSE 42.1 (x86_64) auth_mechanisms = plain login mail_gid = vmail mail_home = /var/vmail/mailboxes/%d/%n mail_location = maildir:~/mail:LAYOUT=fs mail_plugins = " acl" mail_privileged_group = vmail mail_uid = vmail managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date index ihave duplicate namespace { hidden = no ignore_on_failure = no list = children location = maildir:%%h/mail:INDEX=%h/mail/shared/%%u:CONTROL=%h/mail/shared/%%u prefix = shared/%%u/ separator = / subscriptions = yes type = shared } namespace inbox { inbox = yes location = mailbox Archives { auto = subscribe special_use = \Archive } mailbox Drafts { auto = subscribe special_use = \Drafts } mailbox Notes { auto = subscribe } mailbox Sent { auto = subscribe special_use = \Sent } mailbox Spam { auto = subscribe special_use = \Junk } mailbox Trash { auto = subscribe special_use = \Trash } prefix = } passdb { args = /etc/dovecot/dovecot-sql.conf driver = sql } plugin { acl = vfile:/var/vmail/dovecot-acl acl_shared_dict = file:/var/vmail/db/shared-mailboxes.db quota = maildir:User quota quota_exceeded_message = Benutzer %u hat das Speichervolumen ?berschritten. / User %u has exhausted allowed storage space. 
sieve = /var/vmail/sieve/%d/%n/active-script.sieve sieve_before = /var/vmail/sieve/global/spam-global.sieve sieve_dir = /var/vmail/sieve/%d/%n/scripts zlib_save = gz zlib_save_level = 6 } protocols = imap lmtp sieve service auth { unix_listener /var/spool/postfix/private/auth { group = postfix mode = 0660 user = postfix } unix_listener auth-userdb { group = vmail mode = 0660 user = vmail } } service imap-login { inet_listener imap { port = 143 } } service lmtp { unix_listener /var/spool/postfix/private/dovecot-lmtp { group = postfix mode = 0660 user = postfix } user = vmail } service managesieve-login { inet_listener sieve { port = 4190 } } ssl = required ssl_cert = Hi, I'm trying to require client certificates on only one interface. I'm running dovecot 2.1.7. There have been a couple of recent threads about this kind of configuration: http://dovecot.org/list/dovecot/2016-August/105244.html (Aug 2016) http://www.dovecot.org/list/dovecot/2016-February/103067.html (Feb 2016) However, these threads recommend an approach that no longer works. Specifically, the "-l" or "-P" arguments to imap-login no longer work. Is there currently a recommended way to configure dovecot like this? Braden From webert.boss at gmail.com Mon Oct 3 18:29:26 2016 From: webert.boss at gmail.com (Webert de Souza Lima) Date: Mon, 03 Oct 2016 18:29:26 +0000 Subject: doveadm backup fails (compromised single attachment storage) In-Reply-To: References: Message-ID: Since no one seems to know if mailboxes can be "fixed", is possible to run dsync backup ignoring errors? There is no such documentation. When the describe errors occur, sync is interrupted. On Fri, Sep 30, 2016 at 10:18 AM Webert de Souza Lima wrote: > by SAS I meant SIAS (Single Instance Attachment Storage). 
> > On Thu, Sep 29, 2016 at 9:33 AM Webert de Souza Lima < > webert.boss at gmail.com> wrote: > >> Hi, >> >> A couple of months ago I had a problem with Single Attachment Storage >> after infrastructure migration; >> >> All mailboxes were rsynced to another filesystem, and that may have >> broken Single Attachment Storage. Many, many (if not all) mailboxes show >> the below logs on dovecot: >> >> imap(foo at bar.com): Error: >> read(attachments-connector(zlib(/dovecotdir/mail/ >> bar.com/foo/mailboxes/INBOX/dbox-Mails/u.26426))) failed: >> read(/dovecotdir/attach/ >> bar.com/de/86/de8673894d6fb3f4460e3c26436eefa9a73517fa0f000452f553822367220761502e1d0ce220eee5aa9acf232df0adebf40cce90b57d2e60e1eb9c9ef21671fa-b0d3411772c14957536100009331bd36-43cea6154b3275573b0800009331bd36-26426[base64:19 >> >> b/l]) failed: open(/dovecotdir/attach/ >> bar.com/de/86/de8673894d6fb3f4460e3c26436eefa9a73517fa0f000452f553822367220761502e1d0ce220eee5aa9acf232df0adebf40cce90b57d2e60e1eb9c9ef21671fa-b0d3411772c14957536100009331bd36-43cea6154b3275573b0800009331bd36-26426) >> failed: No such file or directory >> >> >> When that happens, the MUA keeps syncing forever. 
>> >> Now, I need to migrate all mailboxes (again) to another dovecot instance >> (with no SAS), which works perfectly for new users but when I try to >> migrate users from my current dovecot server for this new server, I get >> such errors again, and I can't migrate: >> >> 2016-09-29T12:20:50.995934059Z Sep 29 12:20:50 dsync-server(foo at bar.com): >> Error: dsync(cf7d091311eb): >> read(attachments-connector(zlib(/dovecotdir/mdbox/ >> bar.com/foo/storage/m.1))) failed: read(/dovecotdir/attach/ >> bar.com/0c/df/0cdf86b1920938fe3a043f87e2ee9e63dda276bd5b9fba687e4a0c63d181c3b6ebdb96a9517f048c963db71404ad5d14e896e2e67b7abb0c9e107aed5c15ecf1-430ea904dff46757ba1700009331bd36[base64:18 >> >> b/l]) failed: open(/dovecotdir/attach/ >> bar.com/0c/df/0cdf86b1920938fe3a043f87e2ee9e63dda276bd5b9fba687e4a0c63d181c3b6ebdb96a9517f048c963db71404ad5d14e896e2e67b7abb0c9e107aed5c15ecf1-430ea904dff46757ba1700009331bd36) >> failed: No such file or directory (last sent=mail, last recv=mail_request >> (EOL)) >> >> Is there a way to fix the attachments problem? (I know I can't recover >> such files, that's Ok) >> Is there a way to migrate (dsync backup) ignoring such problems? >> >> Thanks in advance. >> > From jtam.home at gmail.com Mon Oct 3 22:45:41 2016 From: jtam.home at gmail.com (Joseph Tam) Date: Mon, 3 Oct 2016 15:45:41 -0700 (PDT) Subject: NFSv4 and Maildir In-Reply-To: References: Message-ID: Noel Butler writes: >> I found the same thing, and turning off write delegation seemed >> to have solved the problem. I still don't know why, though. > > write delegation is disabled by default on NetApp with v4, or have they > changed this now? I think this is still the case: when I exported NFSv4, I also turned on both r/w delegation as well. The exported filesystem exhibited weird locking or slow write operations on NFS clients (e.g. "touch newfile" would take a second to complete). This went away when I turned off write delegation. 
Joseph Tam From maria.arrea at gmx.com Tue Oct 4 10:53:13 2016 From: maria.arrea at gmx.com (=?UTF-8?Q?Mar=c3=ada_Arrea?=) Date: Tue, 4 Oct 2016 12:53:13 +0200 Subject: Dovecot + SAN with dedup+compression Message-ID: <455911a4-353e-803d-e8ac-7c36655c653f@gmx.com> Hello We have a big Dovecot Server (over 20 TB of mail, 40K users, over 30K iops) and we are evaluating a new SAN Array. We have competitive proposals from Pure Storage (All Flash) , HP 3Par (Hybrid), NetApp FaaS (Hybrid) & EMC Unity (Hybrid). We rely in mdbox+zlib for main storage and mdbox+lzma for secondary storage, does anybody in the list have experience with SAN-based deduplication/compression? Regards Maria. From kjzero at gmail.com Mon Oct 3 22:39:35 2016 From: kjzero at gmail.com (Kristopher Joyce) Date: Mon, 3 Oct 2016 16:39:35 -0600 Subject: Changing Dovecot's format from Maildir to mdbox In-Reply-To: <94eb2c058e742752aa053dfd93f0@google.com> References: <94eb2c058e742752aa053dfd93f0@google.com> Message-ID: Hello, I am trying to change Dovecot's email format from Maildir to mdbox. I have changed Maildir to mdbox in Mail_location from: maildir:/var/vmail/%d/%n to: mdbox:/var/vmail/%d/%n. Nothing seems to happen after I restart Dovecot. The users are virtual users and not local users. Is there something I am missing? Thanks Kris --- This email has been checked for viruses by Avast antivirus software. https://www.avast.com/antivirus From skdovecot at smail.inf.fh-brs.de Tue Oct 4 12:03:10 2016 From: skdovecot at smail.inf.fh-brs.de (Steffen Kaiser) Date: Tue, 4 Oct 2016 14:03:10 +0200 (CEST) Subject: shared folders In-Reply-To: <1ddff461-18c5-6a6b-5811-b8d949e338ad@blauwiesenweg.de> References: <1ddff461-18c5-6a6b-5811-b8d949e338ad@blauwiesenweg.de> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Mon, 3 Oct 2016, Scherff wrote: you do not write, if you get errors in the log. Enable mail_debug and see what Dovecot thinks about the location of mailboxes etc. 
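A minimal snippet for enabling that (the log path here is just an example; leaving log_path unset keeps logging to syslog):

    mail_debug = yes
    auth_debug = yes                  # also useful if userdb/passdb lookups misbehave
    log_path = /var/log/dovecot.log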
Also, use doveadm acl debug -u to verify the ACLs. > mail_home = /var/vmail/mailboxes/%d/%n > mail_location = maildir:~/mail:LAYOUT=fs > namespace { ^^ maybe this namespace section is missing a name > hidden = no > ignore_on_failure = no > list = children > location = maildir:%%h/mail:INDEX=%h/mail/shared/%%u:CONTROL=%h/mail/shared/%%u This location does not match mail_location above, it is missing LAYOUT=fs > prefix = shared/%%u/ > separator = / > subscriptions = yes > type = shared > } - -- Steffen Kaiser -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEVAwUBV/Oafnz1H7kL/d9rAQKDlgf8CjTbLVHs9Lfof4vfoHyCXgusB//39+rj UEl/fsP+4NkWr8naU5Rb4RU+7/LfhhCGPC5H7VeXBNCO+a+VxzJEzncNOdMAaQt/ AUYz9oHnPO/NptVuCV/LbYKaULE8KsXQWUr1BYScmt8F91KDIO6rpkuwaMaA+p+s XRkh10+ucnPKO1cUv6yBiBu/citff2uQdzX4+jr66djS5DXWZgOh/XsZDGS868Y1 Id88Kh0ZudpFBhEAQbrwbUCbgVx5O+7O9AC9s4RKyMGqCKS7DVIKM2VvCBQgvaad ApHpTkt6MegAMY0+BW9bsxpdb/lmhfCFbwyexVSwEJRXc4qS0qdE4Q== =oC93 -----END PGP SIGNATURE----- From scherff at blauwiesenweg.de Tue Oct 4 12:54:47 2016 From: scherff at blauwiesenweg.de (Scherff) Date: Tue, 4 Oct 2016 14:54:47 +0200 Subject: shared folders In-Reply-To: References: <1ddff461-18c5-6a6b-5811-b8d949e338ad@blauwiesenweg.de> Message-ID: <9423c404-9a4a-9c7f-1481-2cc9b184b37a@blauwiesenweg.de> Hi Steffen, thanks. The ACLs seems ok. Each share generates a dovecot-acl file in the folder with a text e.g. user=name at domain.de lr How to check ACL for a share? doveadm acl debug -u user at domain.de shared shows Can't open mailbox shared: Mailbox doesn't exist: shared namespace now has name "share". 
Debug shows: Debug: Namespace share: type=shared, prefix=shared/%u/, sep=/, inbox=no, hidden=no, list=children, subscriptions=yes location=maildir:/var/vmail/mailboxes/%d/%n/mail:LAYOUT=fs:INDEX=/var/vmail/mailboxes/%d/%n/mail/shared/%u:CONTROL=/var/vmail/mailboxes/DOMAIN/USER/shared/%u Changing location with :LAYOUT=fs - no effect Am 04.10.2016 um 14:03 schrieb Steffen Kaiser: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On Mon, 3 Oct 2016, Scherff wrote: > > you do not write, if you get errors in the log. Enable mail_debug and > see what Dovecot thinks about the location of mailboxes etc. > > Also, use doveadm acl debug -u to verify the ACLs. > >> mail_home = /var/vmail/mailboxes/%d/%n >> mail_location = maildir:~/mail:LAYOUT=fs > >> namespace { > > ^^ maybe this namespace section is missing a name > >> hidden = no >> ignore_on_failure = no >> list = children >> location = >> maildir:%%h/mail:INDEX=%h/mail/shared/%%u:CONTROL=%h/mail/shared/%%u > > This location does not match mail_location above, it is missing LAYOUT=fs > >> prefix = shared/%%u/ >> separator = / >> subscriptions = yes >> type = shared >> } > > - -- Steffen Kaiser > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1 > > iQEVAwUBV/Oafnz1H7kL/d9rAQKDlgf8CjTbLVHs9Lfof4vfoHyCXgusB//39+rj > UEl/fsP+4NkWr8naU5Rb4RU+7/LfhhCGPC5H7VeXBNCO+a+VxzJEzncNOdMAaQt/ > AUYz9oHnPO/NptVuCV/LbYKaULE8KsXQWUr1BYScmt8F91KDIO6rpkuwaMaA+p+s > XRkh10+ucnPKO1cUv6yBiBu/citff2uQdzX4+jr66djS5DXWZgOh/XsZDGS868Y1 > Id88Kh0ZudpFBhEAQbrwbUCbgVx5O+7O9AC9s4RKyMGqCKS7DVIKM2VvCBQgvaad > ApHpTkt6MegAMY0+BW9bsxpdb/lmhfCFbwyexVSwEJRXc4qS0qdE4Q== > =oC93 > -----END PGP SIGNATURE----- From sven at svenhartge.de Tue Oct 4 14:07:18 2016 From: sven at svenhartge.de (Sven Hartge) Date: Tue, 4 Oct 2016 16:07:18 +0200 Subject: Dovecot + SAN with dedup+compression References: <455911a4-353e-803d-e8ac-7c36655c653f@gmx.com> Message-ID: <4cu31l92lhkbv8@mids.svenhartge.de> Mar?a Arrea wrote: > We have a big Dovecot Server (over 20 TB of mail, 
40K users, over 30K > iops) and we are evaluating a new SAN array. We have competitive > proposals from Pure Storage (All Flash), HP 3Par (Hybrid), NetApp > FaaS (Hybrid) & EMC Unity (Hybrid). We rely on mdbox+zlib for main > storage and mdbox+lzma for secondary storage; does anybody on the list > have experience with SAN-based deduplication/compression? My experiences and experiments with NetApp and Dovecot mdbox show: a) Compression is better done in Dovecot; don't enable the compression on the filer. b) Mail in Maildir or mdbox format does not deduplicate really well on the filer; expect to get about 10% to 15% savings. It does not hurt performance if you turn it on (at least on NetApp), but you won't get the big savings you get with Exchange databases. (I suspect this is because of the layout of the mails inside mdbox on disk.) If you really need to deduplicate, then use the builtin SIS from Dovecot. Grüße, Sven. -- Sigmentation fault. Core dumped. From thedoys at gmail.com Tue Oct 4 11:57:25 2016 From: thedoys at gmail.com (doys) Date: Tue, 4 Oct 2016 04:57:25 -0700 (PDT) Subject: Dovecot + SAMBA4 Message-ID: <1475582244975-57421.post@n4.nabble.com> hello, I'm new to this forum :) I couldn't find any topic with the same problem and I'm sorry for the inconvenience if there already is one. I'd like to use Dovecot with authentication from Samba4 via Kerberos. I've followed this procedure: http://wiki2.dovecot.org/Authentication/Kerberos but I cannot connect to the domain. I have not met any difficulty with Samba; I have other servers that are connected to the samba4 domain via samba/winbind - wbinfo -u / -g and getent passwd / group work fine.
I don't find the error and the logs are fruitless : Oct 4 09:03:46 mail dovecot: imap-login: Aborted login (no auth attempts in 0 secs): user=<>, rip=MY_IP, lip=IP_SERVER, session=<4ilrqQQ+2AAKaBFq> Oct 4 09:03:46 mail dovecot: imap-login: Aborted login (no auth attempts in 0 secs): user=<>, rip=MY_IP, lip=IP_SERVER, session=<7SprqQQ+2QAKaBFq> Oct 4 09:03:55 mail dovecot: imap-login: Disconnected (no auth attempts in 0 secs): user=<>, rip=MY_IP, lip=IP_SERVER, session= Oct 4 09:03:55 mail dovecot: imap-login: Disconnected (no auth attempts in 0 secs): user=<>, rip=MY_IP, lip=IP_SERVER, session= This is my docevot -n # 2.2.13: /etc/dovecot/dovecot.conf # OS: Linux 3.16.0-4-amd64 x86_64 Debian 8.6 auth_default_realm = DOMAIN.COM auth_gssapi_hostname = $ALL auth_mechanisms = gssapi auth_realms = DOMAIN.COM auth_username_translation = /@ mail_location = maildir:~/Maildir namespace inbox { inbox = yes location = mailbox Drafts { special_use = \Drafts } mailbox Junk { special_use = \Junk } mailbox Sent { special_use = \Sent } mailbox "Sent Messages" { special_use = \Sent } mailbox Trash { special_use = \Trash } prefix = } plugin { sieve = ~/.dovecot.sieve sieve_dir = ~/sieve } protocols = " imap pop3" ssl = no userdb { driver = static } I wish to use maildir and I wish not to have multidomain. The server is a Debian 8 with compilated samba 4.4.5 + dovecot 2.2.13 (deb file). i dovecot-core - secure POP3/IMAP server - core files i dovecot-gssapi - secure POP3/IMAP server - GSSAPI support i dovecot-imapd - secure POP3/IMAP server - IMAP daemon i dovecot-pop3d - secure POP3/IMAP server - POP3 daemon i dovecot-sieve - secure POP3/IMAP server - Sieve filters su And this is what telnet returns : telnet localhost 143 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE LOGINDISABLED AUTH=GSSAPI] Dovecot ready. Can you give me a hint, please ? Thank you ! 
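With GSSAPI and "no auth attempts" in the log, one thing worth verifying before blaming Dovecot is that the keytab is valid and readable by the Dovecot auth process. A sketch - the keytab path, principal and realm below are examples, not taken from this setup:

    # Can a service ticket be obtained from the keytab at all?
    kinit -k -t /etc/dovecot/dovecot.keytab imap/mail.domain.com@DOMAIN.COM
    klist

    # Then point Dovecot at the same keytab:
    auth_krb5_keytab = /etc/dovecot/dovecot.keytab

If kinit fails here, the problem is on the Samba/Kerberos side (missing SPN or wrong hostname in the principal) rather than in Dovecot.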
-- View this message in context: http://dovecot.2317879.n4.nabble.com/Dovecot-SAMBA4-tp57421.html Sent from the Dovecot mailing list archive at Nabble.com. From voytek at sbt.net.au Wed Oct 5 00:47:29 2016 From: voytek at sbt.net.au (voytek at sbt.net.au) Date: Wed, 5 Oct 2016 11:47:29 +1100 Subject: Connection reset by peer query Message-ID: One of my users is having trouble 'synchronizing' his mobile with his mailbox. Looking at the logs I can see entries like [1]. Searching around, I found some posts suggesting index corruption? What can or should I do? Can I force a re-index? Delete index files? Which ones? He is using Outlook on Win7, if relevant. [1] Oct 05 11:24:32 imap(aaa at aaa.com.au): Info: Connection closed: Connection reset by peer in=238 out=1892993 # grep aaa at aaa /var/log/dovecot.log | grep reset | wc 10 130 1117 # grep aaa at aaa /var/log/dovecot.log.1 | grep reset | wc 30 406 3582 From aki.tuomi at dovecot.fi Wed Oct 5 11:53:24 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Wed, 5 Oct 2016 14:53:24 +0300 Subject: Changing Dovecot's format from Maildir to mdbox In-Reply-To: References: <94eb2c058e742752aa053dfd93f0@google.com> Message-ID: <501b397c-a978-27d4-14c6-1b8a9b347f9f@dovecot.fi> On 04.10.2016 01:39, Kristopher Joyce wrote: > Hello, > > I am trying to change Dovecot's email format from Maildir to mdbox. I > have changed Maildir to mdbox in mail_location from: > maildir:/var/vmail/%d/%n to: mdbox:/var/vmail/%d/%n. Nothing seems to > happen after I restart Dovecot. The users are virtual users and not > local users. Is there something I am missing? > > Thanks > > Kris > > > > --- > This email has been checked for viruses by Avast antivirus software. > https://www.avast.com/antivirus You also need to migrate the data from maildir.
If you have backups, try doveadm sync -A maildir: Aki Tuomi From webert.boss at gmail.com Wed Oct 5 18:59:32 2016 From: webert.boss at gmail.com (Webert de Souza Lima) Date: Wed, 05 Oct 2016 18:59:32 +0000 Subject: fix SIS attachment errors Message-ID: Hi, I've sent some e-mails about this before but since there was no answers I'll write it differently, with different information. I'm using SIS (Single Instance Attachment Storage). For any reason that is not relevant now, many attachments are missing and the messages can't be fetched: Error: read(attachments-connector(zlib(/dovecot/mdbox/bar.example/foo/storage/m.1))) failed: read(/dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f00009331bd36[base64:19 b/l]) failed: open(/dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f00009331bd36) failed: No such file or directory in this specific case, the /dovecot/attach/bar.example/23/ae/ director doesn't exist. In other cases, just one file is missing so I would assume the hardlink could be recreated and it would work. 
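One rough way to at least enumerate the affected messages is to force a full body fetch of every mailbox and collect the read errors. This is only a sketch - the user is hypothetical, mailbox names containing spaces would need extra quoting, and it fixes nothing, it just surfaces the broken messages in the error log:

    for box in $(doveadm mailbox list -u foo@bar.example); do
      doveadm fetch -u foo@bar.example text mailbox "$box" \
        > /dev/null 2>> /tmp/broken-attachments.log
    done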
If I create the missing file (with touch or whatever), I get the following errors: Error: read(/dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f00009331bd36[base64:19 b/l]) failed: Stream is smaller than expected (0 < 483065) Error: read(attachments-connector(zlib(/dovecot/mdbox/bar.example/foo/storage/m.1))) failed: read(/dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f00009331bd36[base64:19 b/l]) failed: Stream is smaller than expected (0 < 483065) Error: fetch(body) failed for box=INBOX uid=15: BUG: Unknown internal error If I try to fill the file with the amount of bytes it complains about with the following command: $ dd if=/dev/zero of=/dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f00009331bd36 bs=1 count=483065 then I get the following error: Error: read(/dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f00009331bd36[base64:19 b/l]) failed: Stream is larger than expected (483928 > 483065, eof=0) Error: read(attachments-connector(zlib(/srv/dovecot/mdbox/bar.example/foo/storage/m.1))) failed: read(//dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f00009331bd36[base64:19 b/l]) failed: Stream is larger than expected (483928 > 483065, eof=0) Error: fetch(body) failed for box=INBOX uid=15: BUG: Unknown internal error Based on this I have a few questions: 1. 
Is there a way, or a tool, to scan all mailboxes and find all the messages that have compromised attachments? 2. Is there a way to "fix" the missing files (even if it means creating fake files or removing the attachment information from the messages)? 3. What I need is to migrate these boxes using doveadm backup/sync, which fails when these errors occur. Is it possible to ignore them, or is there another tool that would do it? Thank you. Webert Lima Belo Horizonte, Brasil From krzf83 at gmail.com Wed Oct 5 22:46:01 2016 From: krzf83 at gmail.com (krzf83@gmail.com) Date: Thu, 6 Oct 2016 00:46:01 +0200 Subject: [feature suggestion] pigeonhole - sendmail path for outgoing email Message-ID: Pigeonhole seems to use /usr/sbin/sendmail for its outgoing emails, even though this does not seem to be documented anywhere. How about a setting to specify a different sendmail program path and parameters? From krzf83 at gmail.com Wed Oct 5 22:49:57 2016 From: krzf83 at gmail.com (krzf83@gmail.com) Date: Thu, 6 Oct 2016 00:49:57 +0200 Subject: [feature suggestion] pigeonhole - sendmail path for outgoing email In-Reply-To: References: Message-ID: The possibility of adding a custom header to outgoing sieve messages would also be a nice feature. From adi at ddns.com.au Thu Oct 6 04:27:00 2016 From: adi at ddns.com.au (Adi Pircalabu) Date: Thu, 06 Oct 2016 15:27:00 +1100 Subject: [imap-login] SSL related crashes using the latest 2.2.25 Message-ID: <1234f8996ddd7278d94116ab17a4c4c9@ddns.com.au> I'm running Dovecot as a proxy in front of some IMAP/POP3 Dovecot & Courier-IMAP servers, and in the last couple of days I've been seeing a lot of imap-login crashes (signal 11) on both 2.2.18 and 2.2.25, all SSL related. The following backtraces were taken running 2.2.25, built from source on a test system similar to the live proxy servers. OS: CentOS 6.8 64bit Packages: openssl-1.0.1e-48.el6_8.3.x86_64, dovecot-2.2.25-2.el6.x86_64 built from source RPM. Can post "doveconf -a" if required.
Core was generated by `dovecot/imap-login -D'. Program terminated with signal 11, Segmentation fault. #0 ssl_proxy_has_broken_client_cert (proxy=0x0) at ssl-proxy-openssl.c:677 677 { (gdb) bt full #0 ssl_proxy_has_broken_client_cert (proxy=0x0) at ssl-proxy-openssl.c:677 No locals. #1 0x00007fdec4e6b489 in login_proxy_ssl_handshaked (context=0x14b4170) at login-proxy.c:759 proxy = 0x14b4170 #2 0x00007fdec4e70e4b in ssl_handshake (proxy=0x169d7b0) at ssl-proxy-openssl.c:468 ret = #3 ssl_step (proxy=0x169d7b0) at ssl-proxy-openssl.c:519 No locals. #4 0x00007fdec4beee0b in io_loop_call_io (io=0x13fdab0) at ioloop.c:564 ioloop = 0x12a07b0 t_id = 2 __FUNCTION__ = "io_loop_call_io" #5 0x00007fdec4bf0407 in io_loop_handler_run_internal (ioloop=) at ioloop-epoll.c:220 ctx = 0x12fb8d0 events = event = 0x171fb20 list = 0x15f8c50 io = tv = {tv_sec = 46, tv_usec = 134490} events_count = msecs = ret = 1 i = call = __FUNCTION__ = "io_loop_handler_run_internal" #6 0x00007fdec4beeeb5 in io_loop_handler_run (ioloop=0x12a07b0) at ioloop.c:612 No locals. #7 0x00007fdec4bef058 in io_loop_run (ioloop=0x12a07b0) at ioloop.c:588 __FUNCTION__ = "io_loop_run" #8 0x00007fdec4b81b23 in master_service_run (service=0x12a0650, callback=) at master-service.c:640 No locals. #9 0x00007fdec4e6e593 in login_binary_run (binary=, argc=2, argv=0x12a0390) at main.c:486 set_pool = 0x12a0b80 login_socket = c = #10 0x00007fdec47dad1d in __libc_start_main (main=0x402ac0
, argc=2, ubp_av=0x7ffc53ee5688, init=, fini=, rtld_fini=, stack_end=0x7ffc53ee5678) at libc-start.c:226 result = unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, 5496455093114277129, 4204960, 140721716614784, 0, 0, -5494405746439844599, -5477823887334535927}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x404f70, 0x7ffc53ee5688}, data = { prev = 0x0, cleanup = 0x0, canceltype = 4214640}}} not_first_call = #11 0x00000000004029c9 in _start () No symbol table info available. Core was generated by `dovecot/imap-login -D'. Program terminated with signal 11, Segmentation fault. #0 0x00007f1a58620dec in _IO_vfprintf_internal (s=, format=, ap=) at vfprintf.c:1641 1641 process_string_arg (((struct printf_spec *) NULL)); (gdb) bt full #0 0x00007f1a58620dec in _IO_vfprintf_internal (s=, format=, ap=) at vfprintf.c:1641 len = string_malloced = step0_jumps = {0, -1285, -1198, 3818, 3910, 3206, 3307, 4086, 1925, 2133, 2249, 3731, 4474, -4059, -1109, -1062, 868, 956, 968, 980, -1505, -495, 665, 755, 827, -3962, 395, 4392, -4059, 3997} space = 0 is_short = 0 use_outdigits = 0 step1_jumps = {0, 0, 0, 0, 0, 0, 0, 0, 0, 2133, 2249, 3731, 4474, -4059, -1109, -1062, 868, 956, 968, 980, -1505, -495, 665, 755, 827, -3962, 395, 4392, -4059, 0} group = 0 prec = -1 step2_jumps = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2249, 3731, 4474, -4059, -1109, -1062, 868, 956, 968, 980, -1505, -495, 665, 755, 827, -3962, 395, 4392, -4059, 0} string = left = 0 is_long_double = 0 width = 0 step3a_jumps = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2336, 0, 0, 0, -1109, -1062, 868, 956, 968, 0, 0, 0, 0, 755, 0, 0, 0, 0, 0, 0} alt = 0 showsign = 0 is_long = 0 is_char = 0 pad = 32 ' ' step3b_jumps = {0 , 4474, 0, 0, -1109, -1062, 868, 956, 968, 980, -1505, -495, 665, 755, 827, -3962, 395, 0, 0, 0} step4_jumps = {0 , -1109, -1062, 868, 956, 968, 980, -1505, -495, 665, 755, 827, -3962, 395, 0, 0, 0} is_negative = base = the_arg = {pa_wchar = 0 L'\000', pa_int = 0, pa_long_int = 0, pa_long_long_int = 0, pa_u_int = 0, 
pa_u_long_int = 0, pa_u_long_long_int = 0, pa_double = 0, pa_long_double = 0, pa_string = 0x0, pa_wstring = 0x0, pa_pointer = 0x0, pa_user = 0x0} spec = 115 's' _buffer = {__routine = 0, __arg = 0xf583a1d84, __canceltype = 24, __prev = 0x0} _avail = thousands_sep = 0x0 grouping = 0xffffffffffffffff
done = 97 f = 0x7f1a58c90b0d "s" lead_str_end = 0x7f1a58c90b05 "%s:%u: %s" end_of_spec = work_buffer = "\341v\361[\372\037\002\363\315\301y\017\302.\a\272\306B\267\001\377\244\276Uu\023\005\301\a\227\f\353\374\062V\002\000\000\000\000\320\006}S\375\177\000\000\340\062V\002\000\000\000\000\273\252\377W\032\177\000\000(\036f\002\000\000\000\000|\004}S\375\177\000\000\320\006}S\375\177\000\000a+\aX\032\177\000\000(\036f\002\000\000\000\000 \036f\002\000\000\000\000\360\003}S\375\177\000\000\301\025\000X\000\000\000\000\210\033f\002\000\000\000\000(\036f\002\024\000\000\000\t`\v\217~0\"\"\\\200'\217X_\331q\325o\244\210\000\000\000\000\340\062V\002\000\000\000\000\n\000\000\000\000\000\000\000Wo\020X\000\000\000\000\b\024V\002\000\000\000\000\226$\377W\032\177\000\000\001\000\000\000\000\000\000\000\211\001\000\000\000\000\000\000\260\006}S\375\177\000\000\240\257\235\003\000\000\000\000\260\006}S\375\177\000\000\200"... workstart = 0x0 workend = 0x7ffd537d0748 "" ap_save = {{gp_offset = 8, fp_offset = 48, overflow_arg_area = 0x7ffd537d0b60, reg_save_area = 0x7ffd537d0aa0}} nspecs_done = 2 save_errno = 0 readonly_format = 0 args_malloced = 0x0 specs = 0xa00000001 specs_malloced = false jump_table = "\001\000\000\004\000\016\000\006\000\000\a\002\000\003\t\000\005\b\b\b\b\b\b\b\b\b\000\000\000\000\000\000\000\032\000\031\000\023\023\023\000\035\000\000\f\000\000\000\000\000\000\025\000\000\000\000\022\000\r\000\000\000\000\000\000\032\000\024\017\023\023\023\n\017\034\000\v\030\027\021\026\f\000\025\033\020\000\000\022\000\r" #1 0x00007f1a586d8c50 in ___vsnprintf_chk ( s=0x1fe0e38 "proxy: Received invalid SSL certificate from \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315:6: k/CN=AddTrust External CA Root", maxlen=, flags=1, slen=, format=0x7f1a58c90ad8 "proxy: Received invalid SSL certificate from %s:%u: %s", args=0x7ffd537d0a80) at vsnprintf_chk.c:65 sf = {f = {_sbf = {_f = {_flags = -72515583, 
_IO_read_ptr = 0x1fe0e38 "proxy: Received invalid SSL certificate from \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315:6: k/CN=AddTrust External CA Root", _IO_read_end = 0x1fe0e38 "proxy: Received invalid SSL certificate from \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315:6: k/CN=AddTrust External CA Root", _IO_read_base = 0x1fe0e38 "proxy: Received invalid SSL certificate from \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315:6: k/CN=AddTrust External CA Root", _IO_write_base = 0x1fe0e38 "proxy: Received invalid SSL certificate from \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315:6: k/CN=AddTrust External CA Root", _IO_write_ptr = 0x1fe0e99 "k/CN=AddTrust External CA Root", _IO_write_end = 0x1fe0f6d "oodvale.vic.au): disconnecting 127.0.0.1 (Disconnected by client: EOF(0s idle, in=217, out=796))", _IO_buf_base = 0x1fe0e38 "proxy: Received invalid SSL certificate from \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315:6: k/CN=AddTrust External CA Root", _IO_buf_end = 0x1fe0f6d "oodvale.vic.au): disconnecting 127.0.0.1 (Disconnected by client: EOF(0s idle, in=217, out=796))", _IO_save_base = 0x0, _IO_backup_base = 0x0, _IO_save_end = 0x0, _markers = 0x0, _chain = 0x0, _fileno = 1489493956, _flags2 = 4, _old_offset = 139751135368008, _cur_column = 0, _vtable_offset = -57 '\307', _shortbuf = "X", _lock = 0x0, _offset = 4294967673, _codecvt = 0x381f620, _wide_data = 0x7ffd537d09c0, _freeres_list = 0x0, _freeres_buf = 0x7ffd537d0ab0, _freeres_size = 140726004157144, _mode = -1, _unused2 = "\032\177\000\000\000\000\000\000\000\000\000\000\315\320e\207\000\000\000"}, vtable = 0x7f1a58966440}, _s = { _allocate_buffer = 0, _free_buffer = 
0}}, overflow_buf = "\001", '\000' , "\001\000\000\000\000\000\000\000\350V\vY\032\177\000\000\200\302g\256\333\333;\221\330\n\311X\032\177\000\000\034\n}S\375\177\000\000\002\000\000\000\000\000\000"} ret = #2 0x00007f1a58a220ff in vsnprintf (format=0x7f1a58c90ad8 "proxy: Received invalid SSL certificate from %s:%u: %s", args=0x7ffd537d0a80, size_r=0x7ffd537d0a5c) at /usr/include/bits/stdio2.h:78 No locals. #3 t_noalloc_strdup_vprintf (format=0x7f1a58c90ad8 "proxy: Received invalid SSL certificate from %s:%u: %s", args=0x7ffd537d0a80, size_r=0x7ffd537d0a5c) at strfuncs.c:132 args2 = {{gp_offset = 8, fp_offset = 48, overflow_arg_area = 0x7ffd537d0b60, reg_save_area = 0x7ffd537d0aa0}} tmp = 0x1fe0e38 "proxy: Received invalid SSL certificate from \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315:6: k/CN=AddTrust External CA Root" init_size = 310 ret = __FUNCTION__ = "t_noalloc_strdup_vprintf" #4 0x00007f1a58a221d9 in p_strdup_vprintf (pool=0x7f1a58c76830, format=, args=) at strfuncs.c:156 tmp = buf = size = #5 0x00007f1a58a222ea in t_strdup_printf (format=) at strfuncs.c:263 args = {{gp_offset = 32, fp_offset = 48, overflow_arg_area = 0x7ffd537d0b60, reg_save_area = 0x7ffd537d0aa0}} ret = 0x0 #6 0x00007f1a58c88548 in login_proxy_ssl_handshaked (context=0x3141260) at login-proxy.c:760 proxy = 0x3141260 #7 0x00007f1a58c8de4b in ssl_handshake (proxy=0x3498970) at ssl-proxy-openssl.c:468 ret = #8 ssl_step (proxy=0x3498970) at ssl-proxy-openssl.c:519 No locals. 
#9 0x00007f1a58a0be0b in io_loop_call_io (io=0x322e900) at ioloop.c:564 ioloop = 0x1f817b0 t_id = 2 __FUNCTION__ = "io_loop_call_io" #10 0x00007f1a58a0d407 in io_loop_handler_run_internal (ioloop=) at ioloop-epoll.c:220 ctx = 0x1fdc8d0 events = event = 0x34a22e8 list = 0x317b250 io = tv = {tv_sec = 0, tv_usec = 614018} events_count = msecs = ret = 3 i = call = __FUNCTION__ = "io_loop_handler_run_internal" #11 0x00007f1a58a0beb5 in io_loop_handler_run (ioloop=0x1f817b0) at ioloop.c:612 No locals. #12 0x00007f1a58a0c058 in io_loop_run (ioloop=0x1f817b0) at ioloop.c:588 __FUNCTION__ = "io_loop_run" #13 0x00007f1a5899eb23 in master_service_run (service=0x1f81650, callback=) at master-service.c:640 No locals. #14 0x00007f1a58c8b593 in login_binary_run (binary=, argc=2, argv=0x1f81390) at main.c:486 set_pool = 0x1f81b80 login_socket = c = #15 0x00007f1a585f7d1d in __libc_start_main (main=0x402ac0
, argc=2, ubp_av=0x7ffd537d0de8, init=, fini=, rtld_fini=, stack_end=0x7ffd537d0dd8) at libc-start.c:226 result = unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, -6805578527007124004, 4204960, 140726004157920, 0, 0, 6806940805326362076, 6897556252474660316}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x404f70, 0x7ffd537d0de8}, data = { prev = 0x0, cleanup = 0x0, canceltype = 4214640}}} not_first_call = #16 0x00000000004029c9 in _start () No symbol table info available. Core was generated by `dovecot/imap-login -D'. Program terminated with signal 11, Segmentation fault. #0 t_strcut (str=0xffffffffffffffff
, cutchar=64 '@') at strfuncs.c:294 294 for (p = str; *p != '\0'; p++) { (gdb) bt full #0 t_strcut (str=0xffffffffffffffff
, cutchar=64 '@') at strfuncs.c:294 p = 0xffffffffffffffff
#1 0x00007f1afa1a7d2f in get_var_expand_users (tab=0x1c37e98, user=0xffffffffffffffff
) at client-common.c:523 i = #2 0x00007f1afa1a7f29 in get_var_expand_table (client=0x27b85b0, msg=0x1c37e38 "proxy: SSL certificate not received from \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315@:6") at client-common.c:541 tab = 0x1c37e98 #3 client_get_log_str (client=0x27b85b0, msg=0x1c37e38 "proxy: SSL certificate not received from \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315@:6") at client-common.c:644 static_tab = {{key = 115 's', value = 0x0, long_key = 0x0}, {key = 36 '$', value = 0x0, long_key = 0x0}, {key = 0 '\000', value = 0x0, long_key = 0x0}} func_table = {{key = 0x7f1afa1b2d0c "passdb", func = 0x7f1afa1a7c70 }, {key = 0x0, func = 0}} tab = e = str = str2 = pos = #4 0x00007f1afa1a847a in client_log_err (client=0x27b85b0, msg=0x1c37e38 "proxy: SSL certificate not received from \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315@:6") at client-common.c:692 _data_stack_cur_id = 3 #5 0x00007f1afa1ab51e in login_proxy_ssl_handshaked (context=0x237e910) at login-proxy.c:765 proxy = 0x237e910 #6 0x00007f1afa1b0e4b in ssl_handshake (proxy=0x23cb660) at ssl-proxy-openssl.c:468 ret = #7 ssl_step (proxy=0x23cb660) at ssl-proxy-openssl.c:519 No locals. #8 0x00007f1af9f2ee0b in io_loop_call_io (io=0x285cf20) at ioloop.c:564 ioloop = 0x1bd87b0 t_id = 2 __FUNCTION__ = "io_loop_call_io" #9 0x00007f1af9f30407 in io_loop_handler_run_internal (ioloop=) at ioloop-epoll.c:220 ctx = 0x1c338d0 events = event = 0x260eac0 list = 0x227b980 io = tv = {tv_sec = 0, tv_usec = 519697} events_count = msecs = ret = 1 i = call = __FUNCTION__ = "io_loop_handler_run_internal" #10 0x00007f1af9f2eeb5 in io_loop_handler_run (ioloop=0x1bd87b0) at ioloop.c:612 No locals. 
#11 0x00007f1af9f2f058 in io_loop_run (ioloop=0x1bd87b0) at ioloop.c:588 __FUNCTION__ = "io_loop_run" #12 0x00007f1af9ec1b23 in master_service_run (service=0x1bd8650, callback=) at master-service.c:640 No locals. #13 0x00007f1afa1ae593 in login_binary_run (binary=, argc=2, argv=0x1bd8390) at main.c:486 set_pool = 0x1bd8b80 login_socket = c = #14 0x00007f1af9b1ad1d in __libc_start_main (main=0x402ac0
, argc=2, ubp_av=0x7ffcdfc7cd68, init=, fini=, rtld_fini=, stack_end=0x7ffcdfc7cd58) at libc-start.c:226 result = unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, -5108975228267825424, 4204960, 140724062899552, 0, 0, 5107356402929858288, 5128673613026916080}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x404f70, 0x7ffcdfc7cd68}, data = { prev = 0x0, cleanup = 0x0, canceltype = 4214640}}} not_first_call = #15 0x00000000004029c9 in _start () No symbol table info available. Core was generated by `dovecot/imap-login -D'. Program terminated with signal 11, Segmentation fault. #0 ssl_proxy_is_handshaked (proxy=0x21a930a940a43715) at ssl-proxy-openssl.c:720 720 { (gdb) bt full #0 ssl_proxy_is_handshaked (proxy=0x21a930a940a43715) at ssl-proxy-openssl.c:720 No locals. #1 0x00007f0c84b63326 in get_var_expand_table (client=0xc49c50, msg=0x8b5e38 "proxy: SSL certificate not received from \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315@:6") at client-common.c:556 ssl_state = ssl_error = tab = 0x8b5e98 #2 client_get_log_str (client=0xc49c50, msg=0x8b5e38 "proxy: SSL certificate not received from \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315@:6") at client-common.c:644 static_tab = {{key = 115 's', value = 0x0, long_key = 0x0}, {key = 36 '$', value = 0x0, long_key = 0x0}, {key = 0 '\000', value = 0x0, long_key = 0x0}} func_table = {{key = 0x7f0c84b6dd0c "passdb", func = 0x7f0c84b62c70 }, {key = 0x0, func = 0}} tab = e = str = str2 = pos = #3 0x00007f0c84b6347a in client_log_err (client=0xc49c50, msg=0x8b5e38 "proxy: SSL certificate not received from \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315@:6") at client-common.c:692 _data_stack_cur_id = 3 #4 0x00007f0c84b6651e in login_proxy_ssl_handshaked (context=0xf464b0) at login-proxy.c:765 proxy = 0xf464b0 #5 
0x00007f0c84b6be4b in ssl_handshake (proxy=0xd5d600) at ssl-proxy-openssl.c:468 ret = #6 ssl_step (proxy=0xd5d600) at ssl-proxy-openssl.c:519 No locals. #7 0x00007f0c848e9e0b in io_loop_call_io (io=0xdf5ea0) at ioloop.c:564 ioloop = 0x8567b0 t_id = 2 __FUNCTION__ = "io_loop_call_io" #8 0x00007f0c848eb407 in io_loop_handler_run_internal (ioloop=) at ioloop-epoll.c:220 ctx = 0x8b18d0 events = event = 0xf305f0 list = 0xc4a700 io = tv = {tv_sec = 0, tv_usec = 954174} events_count = msecs = ret = 1 i = call = __FUNCTION__ = "io_loop_handler_run_internal" #9 0x00007f0c848e9eb5 in io_loop_handler_run (ioloop=0x8567b0) at ioloop.c:612 No locals. #10 0x00007f0c848ea058 in io_loop_run (ioloop=0x8567b0) at ioloop.c:588 __FUNCTION__ = "io_loop_run" #11 0x00007f0c8487cb23 in master_service_run (service=0x856650, callback=) at master-service.c:640 No locals. #12 0x00007f0c84b69593 in login_binary_run (binary=, argc=2, argv=0x856390) at main.c:486 set_pool = 0x856b80 login_socket = c = #13 0x00007f0c844d5d1d in __libc_start_main (main=0x402ac0
, argc=2, ubp_av=0x7ffd41fc1f28, init=, fini=, rtld_fini=, stack_end=0x7ffd41fc1f18) at libc-start.c:226 result = unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, -3476376496289340868, 4204960, 140725710495520, 0, 0, 3475633251103023676, 3591732184888123964}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x404f70, 0x7ffd41fc1f28}, data = { prev = 0x0, cleanup = 0x0, canceltype = 4214640}}} not_first_call = #14 0x00000000004029c9 in _start () No symbol table info available. Core was generated by `dovecot/imap-login -D'. Program terminated with signal 11, Segmentation fault. #0 0x00007ff9e5e5f40b in p_malloc (pool=0x1f10c90, str=0x1f1d3d0 "4qyKMRI+AAAAAAAA") at mempool.h:76 76 return pool->v->malloc(pool, size); (gdb) bt full #0 0x00007ff9e5e5f40b in p_malloc (pool=0x1f10c90, str=0x1f1d3d0 "4qyKMRI+AAAAAAAA") at mempool.h:76 No locals. #1 p_strdup (pool=0x1f10c90, str=0x1f1d3d0 "4qyKMRI+AAAAAAAA") at strfuncs.c:43 mem = len = 17 #2 0x00007ff9e60c2e9f in client_get_session_id (client=0x1f10980) at client-common.c:482 buf = 0x1f1d328 base64_buf = 0x1f1d398 tv = {tv_sec = 1475622745, tv_usec = 58530} timestamp = 1475622745058530 i = 48 #3 0x00007ff9e60c302c in get_var_expand_table (client=0x1f10980, msg=0x1f1ce38 "proxy: SSL certificate not received from (null):0") at client-common.c:568 tab = 0x1f1ce70 #4 client_get_log_str (client=0x1f10980, msg=0x1f1ce38 "proxy: SSL certificate not received from (null):0") at client-common.c:644 static_tab = {{key = 115 's', value = 0x0, long_key = 0x0}, {key = 36 '$', value = 0x0, long_key = 0x0}, {key = 0 '\000', value = 0x0, long_key = 0x0}} func_table = {{key = 0x7ff9e60cdd0c "passdb", func = 0x7ff9e60c2c70 }, {key = 0x0, func = 0}} tab = e = str = str2 = pos = #5 0x00007ff9e60c347a in client_log_err (client=0x1f10980, msg=0x1f1ce38 "proxy: SSL certificate not received from (null):0") at client-common.c:692 _data_stack_cur_id = 3 #6 0x00007ff9e60c651e in login_proxy_ssl_handshaked (context=0x256cfb0) at login-proxy.c:765 proxy = 
0x256cfb0 #7 0x00007ff9e60cbe4b in ssl_handshake (proxy=0x23f6710) at ssl-proxy-openssl.c:468 ret = #8 ssl_step (proxy=0x23f6710) at ssl-proxy-openssl.c:519 No locals. #9 0x00007ff9e5e49e0b in io_loop_call_io (io=0x256cc40) at ioloop.c:564 ioloop = 0x1ebd7b0 t_id = 2 __FUNCTION__ = "io_loop_call_io" #10 0x00007ff9e5e4b407 in io_loop_handler_run_internal (ioloop=) at ioloop-epoll.c:220 ctx = 0x1f188d0 events = event = 0x24d25f0 list = 0x2561f10 io = tv = {tv_sec = 0, tv_usec = 551105} events_count = msecs = ret = 1 i = call = __FUNCTION__ = "io_loop_handler_run_internal" #11 0x00007ff9e5e49eb5 in io_loop_handler_run (ioloop=0x1ebd7b0) at ioloop.c:612 No locals. #12 0x00007ff9e5e4a058 in io_loop_run (ioloop=0x1ebd7b0) at ioloop.c:588 __FUNCTION__ = "io_loop_run" #13 0x00007ff9e5ddcb23 in master_service_run (service=0x1ebd650, callback=) at master-service.c:640 No locals. #14 0x00007ff9e60c9593 in login_binary_run (binary=, argc=2, argv=0x1ebd390) at main.c:486 set_pool = 0x1ebdb80 login_socket = c = #15 0x00007ff9e5a35d1d in __libc_start_main (main=0x402ac0
, argc=2, ubp_av=0x7ffe367d2178, init=, fini=, rtld_fini=, stack_end=0x7ffe367d2168) at libc-start.c:226 result = unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, -3428671975511032229, 4204960, 140729812590960, 0, 0, 3429075480732761691, 3429810282407657051}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x404f70, 0x7ffe367d2178}, data = { prev = 0x0, cleanup = 0x0, canceltype = 4214640}}} not_first_call = #16 0x00000000004029c9 in _start () No symbol table info available. Core was generated by `dovecot/imap-login -D'. Program terminated with signal 11, Segmentation fault. #0 0x00007f029b173314 in str_sanitize_skip_start (src=0x2f6d6f632e61636f
, max_bytes=64) at str-sanitize.c:13 13 for (i = 0; i < max_bytes && src[i] != '\0'; ) { (gdb) bt full #0 0x00007f029b173314 in str_sanitize_skip_start (src=0x2f6d6f632e61636f
, max_bytes=64) at str-sanitize.c:13 chr = 0 i = 0 #1 str_sanitize (src=0x2f6d6f632e61636f
, max_bytes=64) at str-sanitize.c:88 str = i = #2 0x00007f029b3d7f9e in get_var_expand_table (client=0x221ee70, msg=0x187fe38 "proxy: SSL certificate not received from \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b?\a\202\001:6") at client-common.c:548 tab = 0x187fe98 #3 client_get_log_str (client=0x221ee70, msg=0x187fe38 "proxy: SSL certificate not received from \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b?\a\202\001:6") at client-common.c:644 static_tab = {{key = 115 's', value = 0x0, long_key = 0x0}, {key = 36 '$', value = 0x0, long_key = 0x0}, {key = 0 '\000', value = 0x0, long_key = 0x0}} func_table = {{key = 0x7f029b3e2d0c "passdb", func = 0x7f029b3d7c70 }, {key = 0x0, func = 0}} tab = e = str = str2 = pos = #4 0x00007f029b3d847a in client_log_err (client=0x221ee70, msg=0x187fe38 "proxy: SSL certificate not received from \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b?\a\202\001:6") at client-common.c:692 _data_stack_cur_id = 3 #5 0x00007f029b3db51e in login_proxy_ssl_handshaked (context=0x19b2530) at login-proxy.c:765 proxy = 0x19b2530 #6 0x00007f029b3e0e4b in ssl_handshake (proxy=0x195df70) at ssl-proxy-openssl.c:468 ret = #7 ssl_step (proxy=0x195df70) at ssl-proxy-openssl.c:519 No locals. #8 0x00007f029b15ee0b in io_loop_call_io (io=0x216d790) at ioloop.c:564 ioloop = 0x18207b0 t_id = 2 __FUNCTION__ = "io_loop_call_io" #9 0x00007f029b160407 in io_loop_handler_run_internal (ioloop=) at ioloop-epoll.c:220 ctx = 0x187b8d0 events = event = 0x1df4668 list = 0x2025710 io = tv = {tv_sec = 11, tv_usec = 323409} events_count = msecs = ret = 3 i = call = __FUNCTION__ = "io_loop_handler_run_internal" #10 0x00007f029b15eeb5 in io_loop_handler_run (ioloop=0x18207b0) at ioloop.c:612 No locals. 
#11 0x00007f029b15f058 in io_loop_run (ioloop=0x18207b0) at ioloop.c:588 __FUNCTION__ = "io_loop_run" #12 0x00007f029b0f1b23 in master_service_run (service=0x1820650, callback=) at master-service.c:640 No locals. #13 0x00007f029b3de593 in login_binary_run (binary=, argc=2, argv=0x1820390) at main.c:486 set_pool = 0x1820b80 login_socket = c = #14 0x00007f029ad4ad1d in __libc_start_main (main=0x402ac0
, argc=2, ubp_av=0x7ffd637fd608, init=, fini=, rtld_fini=, stack_end=0x7ffd637fd5f8) at libc-start.c:226 result = unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, -4141182239951058275, 4204960, 140726272775680, 0, 0, 4142562126330825373, 4071998539020864157}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x404f70, 0x7ffd637fd608}, data = { prev = 0x0, cleanup = 0x0, canceltype = 4214640}}} not_first_call = #15 0x00000000004029c9 in _start () No symbol table info available. -- Adi Pircalabu From aki.tuomi at dovecot.fi Thu Oct 6 05:02:21 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Thu, 6 Oct 2016 08:02:21 +0300 (EEST) Subject: [imap-login] SSL related crashes using the latest 2.2.25 In-Reply-To: <1234f8996ddd7278d94116ab17a4c4c9@ddns.com.au> References: <1234f8996ddd7278d94116ab17a4c4c9@ddns.com.au> Message-ID: <724415054.2623.1475730142743@appsuite-dev.open-xchange.com> It seems to error on ssl certificate not received. Can you post doveconf -n and logs? doveconf -a is usually not wanted. Aki > On October 6, 2016 at 7:27 AM Adi Pircalabu wrote: > > > I'm running Dovecot as proxy in front of some IMAP/POP3 Dovecot & > Courier-IMAP servers and in the last couple of days I've been seeing a > lot of imap-login crashes (signal 11) on both 2.2.18 and 2.2.25, all SSL > related. The following backtraces are taken running 2.2.25, built from > source on a test system similar to the live proxy servers. > OS: CentOS 6.8 64bit > Packages: openssl-1.0.1e-48.el6_8.3.x86_64, dovecot-2.2.25-2.el6.x86_64 > built from source RPM. > > Can post "doveconf -a" if required. > > Core was generated by `dovecot/imap-login -D'. > Program terminated with signal 11, Segmentation fault. > #0 ssl_proxy_has_broken_client_cert (proxy=0x0) at > ssl-proxy-openssl.c:677 > 677 { > (gdb) bt full > #0 ssl_proxy_has_broken_client_cert (proxy=0x0) at > ssl-proxy-openssl.c:677 > No locals. 
> [remainder of quoted backtraces snipped -- identical to the traces in the original message above]
> #9 0x00007f1a58a0be0b in io_loop_call_io (io=0x322e900) at ioloop.c:564 > ioloop = 0x1f817b0 > t_id = 2 > __FUNCTION__ = "io_loop_call_io" > #10 0x00007f1a58a0d407 in io_loop_handler_run_internal (ioloop= optimized out>) at ioloop-epoll.c:220 > ctx = 0x1fdc8d0 > events = > event = 0x34a22e8 > list = 0x317b250 > io = > tv = {tv_sec = 0, tv_usec = 614018} > events_count = > msecs = > ret = 3 > i = > call = > __FUNCTION__ = "io_loop_handler_run_internal" > #11 0x00007f1a58a0beb5 in io_loop_handler_run (ioloop=0x1f817b0) at > ioloop.c:612 > No locals. > #12 0x00007f1a58a0c058 in io_loop_run (ioloop=0x1f817b0) at ioloop.c:588 > __FUNCTION__ = "io_loop_run" > #13 0x00007f1a5899eb23 in master_service_run (service=0x1f81650, > callback=) at master-service.c:640 > No locals. > #14 0x00007f1a58c8b593 in login_binary_run (binary= out>, argc=2, argv=0x1f81390) at main.c:486 > set_pool = 0x1f81b80 > login_socket = > c = > #15 0x00007f1a585f7d1d in __libc_start_main (main=0x402ac0
, > argc=2, ubp_av=0x7ffd537d0de8, init=, fini= optimized out>, rtld_fini=, > stack_end=0x7ffd537d0dd8) at libc-start.c:226 > result = > unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, > -6805578527007124004, 4204960, 140726004157920, 0, 0, > 6806940805326362076, 6897556252474660316}, mask_was_saved = 0}}, priv = > {pad = {0x0, 0x0, 0x404f70, 0x7ffd537d0de8}, data = { > prev = 0x0, cleanup = 0x0, canceltype = 4214640}}} > not_first_call = > #16 0x00000000004029c9 in _start () > No symbol table info available. > > Core was generated by `dovecot/imap-login -D'. > Program terminated with signal 11, Segmentation fault. > #0 t_strcut (str=0xffffffffffffffff
<Address 0xffffffffffffffff out of bounds>, cutchar=64 '@') at strfuncs.c:294
> 294 for (p = str; *p != '\0'; p++) {
> (gdb) bt full
> #0 t_strcut (str=0xffffffffffffffff
<Address 0xffffffffffffffff out of bounds>, cutchar=64 '@') at strfuncs.c:294
> p = 0xffffffffffffffff
<Address 0xffffffffffffffff out of bounds>
> #1 0x00007f1afa1a7d2f in get_var_expand_users (tab=0x1c37e98,
> user=0xffffffffffffffff <Address 0xffffffffffffffff out of bounds>
) at > client-common.c:523 > i = > #2 0x00007f1afa1a7f29 in get_var_expand_table (client=0x27b85b0, > msg=0x1c37e38 "proxy: SSL certificate not received from > \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 > \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315@:6") at > client-common.c:541 > tab = 0x1c37e98 > #3 client_get_log_str (client=0x27b85b0, msg=0x1c37e38 "proxy: SSL > certificate not received from \314-A\235q\210\021\b\354\062Lz?)\367.\002 > \031\233 \362w?\224\356K7\343\224 > \002\037\364!+\266\371\277O`K\021\b\315@:6") > at client-common.c:644 > static_tab = {{key = 115 's', value = 0x0, long_key = 0x0}, {key > = 36 '$', value = 0x0, long_key = 0x0}, {key = 0 '\000', value = 0x0, > long_key = 0x0}} > func_table = {{key = 0x7f1afa1b2d0c "passdb", func = > 0x7f1afa1a7c70 }, {key = 0x0, func = 0}} > tab = > e = > str = > str2 = > pos = > #4 0x00007f1afa1a847a in client_log_err (client=0x27b85b0, > msg=0x1c37e38 "proxy: SSL certificate not received from > \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 > \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315@:6") at > client-common.c:692 > _data_stack_cur_id = 3 > #5 0x00007f1afa1ab51e in login_proxy_ssl_handshaked (context=0x237e910) > at login-proxy.c:765 > proxy = 0x237e910 > #6 0x00007f1afa1b0e4b in ssl_handshake (proxy=0x23cb660) at > ssl-proxy-openssl.c:468 > ret = > #7 ssl_step (proxy=0x23cb660) at ssl-proxy-openssl.c:519 > No locals. 
> #8 0x00007f1af9f2ee0b in io_loop_call_io (io=0x285cf20) at ioloop.c:564 > ioloop = 0x1bd87b0 > t_id = 2 > __FUNCTION__ = "io_loop_call_io" > #9 0x00007f1af9f30407 in io_loop_handler_run_internal (ioloop= optimized out>) at ioloop-epoll.c:220 > ctx = 0x1c338d0 > events = > event = 0x260eac0 > list = 0x227b980 > io = > tv = {tv_sec = 0, tv_usec = 519697} > events_count = > msecs = > ret = 1 > i = > call = > __FUNCTION__ = "io_loop_handler_run_internal" > #10 0x00007f1af9f2eeb5 in io_loop_handler_run (ioloop=0x1bd87b0) at > ioloop.c:612 > No locals. > #11 0x00007f1af9f2f058 in io_loop_run (ioloop=0x1bd87b0) at ioloop.c:588 > __FUNCTION__ = "io_loop_run" > #12 0x00007f1af9ec1b23 in master_service_run (service=0x1bd8650, > callback=) at master-service.c:640 > No locals. > #13 0x00007f1afa1ae593 in login_binary_run (binary= out>, argc=2, argv=0x1bd8390) at main.c:486 > set_pool = 0x1bd8b80 > login_socket = > c = > #14 0x00007f1af9b1ad1d in __libc_start_main (main=0x402ac0
, > argc=2, ubp_av=0x7ffcdfc7cd68, init=, fini= optimized out>, rtld_fini=, > stack_end=0x7ffcdfc7cd58) at libc-start.c:226 > result = > unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, > -5108975228267825424, 4204960, 140724062899552, 0, 0, > 5107356402929858288, 5128673613026916080}, mask_was_saved = 0}}, priv = > {pad = {0x0, 0x0, 0x404f70, 0x7ffcdfc7cd68}, data = { > prev = 0x0, cleanup = 0x0, canceltype = 4214640}}} > not_first_call = > #15 0x00000000004029c9 in _start () > No symbol table info available. > > Core was generated by `dovecot/imap-login -D'. > Program terminated with signal 11, Segmentation fault. > #0 ssl_proxy_is_handshaked (proxy=0x21a930a940a43715) at > ssl-proxy-openssl.c:720 > 720 { > (gdb) bt full > #0 ssl_proxy_is_handshaked (proxy=0x21a930a940a43715) at > ssl-proxy-openssl.c:720 > No locals. > #1 0x00007f0c84b63326 in get_var_expand_table (client=0xc49c50, > msg=0x8b5e38 "proxy: SSL certificate not received from > \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 > \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315@:6") at > client-common.c:556 > ssl_state = > ssl_error = > tab = 0x8b5e98 > #2 client_get_log_str (client=0xc49c50, msg=0x8b5e38 "proxy: SSL > certificate not received from \314-A\235q\210\021\b\354\062Lz?)\367.\002 > \031\233 \362w?\224\356K7\343\224 > \002\037\364!+\266\371\277O`K\021\b\315@:6") > at client-common.c:644 > static_tab = {{key = 115 's', value = 0x0, long_key = 0x0}, {key > = 36 '$', value = 0x0, long_key = 0x0}, {key = 0 '\000', value = 0x0, > long_key = 0x0}} > func_table = {{key = 0x7f0c84b6dd0c "passdb", func = > 0x7f0c84b62c70 }, {key = 0x0, func = 0}} > tab = > e = > str = > str2 = > pos = > #3 0x00007f0c84b6347a in client_log_err (client=0xc49c50, > msg=0x8b5e38 "proxy: SSL certificate not received from > \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 > \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315@:6") at > client-common.c:692 > _data_stack_cur_id = 3 > #4 
0x00007f0c84b6651e in login_proxy_ssl_handshaked (context=0xf464b0) > at login-proxy.c:765 > proxy = 0xf464b0 > #5 0x00007f0c84b6be4b in ssl_handshake (proxy=0xd5d600) at > ssl-proxy-openssl.c:468 > ret = > #6 ssl_step (proxy=0xd5d600) at ssl-proxy-openssl.c:519 > No locals. > #7 0x00007f0c848e9e0b in io_loop_call_io (io=0xdf5ea0) at ioloop.c:564 > ioloop = 0x8567b0 > t_id = 2 > __FUNCTION__ = "io_loop_call_io" > #8 0x00007f0c848eb407 in io_loop_handler_run_internal (ioloop= optimized out>) at ioloop-epoll.c:220 > ctx = 0x8b18d0 > events = > event = 0xf305f0 > list = 0xc4a700 > io = > tv = {tv_sec = 0, tv_usec = 954174} > events_count = > msecs = > ret = 1 > i = > call = > __FUNCTION__ = "io_loop_handler_run_internal" > #9 0x00007f0c848e9eb5 in io_loop_handler_run (ioloop=0x8567b0) at > ioloop.c:612 > No locals. > #10 0x00007f0c848ea058 in io_loop_run (ioloop=0x8567b0) at ioloop.c:588 > __FUNCTION__ = "io_loop_run" > #11 0x00007f0c8487cb23 in master_service_run (service=0x856650, > callback=) at master-service.c:640 > No locals. > #12 0x00007f0c84b69593 in login_binary_run (binary= out>, argc=2, argv=0x856390) at main.c:486 > set_pool = 0x856b80 > login_socket = > c = > #13 0x00007f0c844d5d1d in __libc_start_main (main=0x402ac0
, > argc=2, ubp_av=0x7ffd41fc1f28, init=, fini= optimized out>, rtld_fini=, > stack_end=0x7ffd41fc1f18) at libc-start.c:226 > result = > unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, > -3476376496289340868, 4204960, 140725710495520, 0, 0, > 3475633251103023676, 3591732184888123964}, mask_was_saved = 0}}, priv = > {pad = {0x0, 0x0, 0x404f70, 0x7ffd41fc1f28}, data = { > prev = 0x0, cleanup = 0x0, canceltype = 4214640}}} > not_first_call = > #14 0x00000000004029c9 in _start () > No symbol table info available. > > Core was generated by `dovecot/imap-login -D'. > Program terminated with signal 11, Segmentation fault. > #0 0x00007ff9e5e5f40b in p_malloc (pool=0x1f10c90, str=0x1f1d3d0 > "4qyKMRI+AAAAAAAA") at mempool.h:76 > 76 return pool->v->malloc(pool, size); > (gdb) bt full > #0 0x00007ff9e5e5f40b in p_malloc (pool=0x1f10c90, str=0x1f1d3d0 > "4qyKMRI+AAAAAAAA") at mempool.h:76 > No locals. > #1 p_strdup (pool=0x1f10c90, str=0x1f1d3d0 "4qyKMRI+AAAAAAAA") at > strfuncs.c:43 > mem = > len = 17 > #2 0x00007ff9e60c2e9f in client_get_session_id (client=0x1f10980) at > client-common.c:482 > buf = 0x1f1d328 > base64_buf = 0x1f1d398 > tv = {tv_sec = 1475622745, tv_usec = 58530} > timestamp = 1475622745058530 > i = 48 > #3 0x00007ff9e60c302c in get_var_expand_table (client=0x1f10980, > msg=0x1f1ce38 "proxy: SSL certificate not received from (null):0") at > client-common.c:568 > tab = 0x1f1ce70 > #4 client_get_log_str (client=0x1f10980, msg=0x1f1ce38 "proxy: SSL > certificate not received from (null):0") at client-common.c:644 > static_tab = {{key = 115 's', value = 0x0, long_key = 0x0}, {key > = 36 '$', value = 0x0, long_key = 0x0}, {key = 0 '\000', value = 0x0, > long_key = 0x0}} > func_table = {{key = 0x7ff9e60cdd0c "passdb", func = > 0x7ff9e60c2c70 }, {key = 0x0, func = 0}} > tab = > e = > str = > str2 = > pos = > #5 0x00007ff9e60c347a in client_log_err (client=0x1f10980, > msg=0x1f1ce38 "proxy: SSL certificate not received from (null):0") at > client-common.c:692 > 
_data_stack_cur_id = 3 > #6 0x00007ff9e60c651e in login_proxy_ssl_handshaked (context=0x256cfb0) > at login-proxy.c:765 > proxy = 0x256cfb0 > #7 0x00007ff9e60cbe4b in ssl_handshake (proxy=0x23f6710) at > ssl-proxy-openssl.c:468 > ret = > #8 ssl_step (proxy=0x23f6710) at ssl-proxy-openssl.c:519 > No locals. > #9 0x00007ff9e5e49e0b in io_loop_call_io (io=0x256cc40) at ioloop.c:564 > ioloop = 0x1ebd7b0 > t_id = 2 > __FUNCTION__ = "io_loop_call_io" > #10 0x00007ff9e5e4b407 in io_loop_handler_run_internal (ioloop= optimized out>) at ioloop-epoll.c:220 > ctx = 0x1f188d0 > events = > event = 0x24d25f0 > list = 0x2561f10 > io = > tv = {tv_sec = 0, tv_usec = 551105} > events_count = > msecs = > ret = 1 > i = > call = > __FUNCTION__ = "io_loop_handler_run_internal" > #11 0x00007ff9e5e49eb5 in io_loop_handler_run (ioloop=0x1ebd7b0) at > ioloop.c:612 > No locals. > #12 0x00007ff9e5e4a058 in io_loop_run (ioloop=0x1ebd7b0) at ioloop.c:588 > __FUNCTION__ = "io_loop_run" > #13 0x00007ff9e5ddcb23 in master_service_run (service=0x1ebd650, > callback=) at master-service.c:640 > No locals. > #14 0x00007ff9e60c9593 in login_binary_run (binary= out>, argc=2, argv=0x1ebd390) at main.c:486 > set_pool = 0x1ebdb80 > login_socket = > c = > #15 0x00007ff9e5a35d1d in __libc_start_main (main=0x402ac0
, > argc=2, ubp_av=0x7ffe367d2178, init=, fini= optimized out>, rtld_fini=, > stack_end=0x7ffe367d2168) at libc-start.c:226 > result = > unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, > -3428671975511032229, 4204960, 140729812590960, 0, 0, > 3429075480732761691, 3429810282407657051}, mask_was_saved = 0}}, priv = > {pad = {0x0, 0x0, 0x404f70, 0x7ffe367d2178}, data = { > prev = 0x0, cleanup = 0x0, canceltype = 4214640}}} > not_first_call = > #16 0x00000000004029c9 in _start () > No symbol table info available. > > Core was generated by `dovecot/imap-login -D'. > Program terminated with signal 11, Segmentation fault. > #0 0x00007f029b173314 in str_sanitize_skip_start > (src=0x2f6d6f632e61636f
<Address 0x2f6d6f632e61636f out of bounds>,
> max_bytes=64) at str-sanitize.c:13
> 13 for (i = 0; i < max_bytes && src[i] != '\0'; ) {
> (gdb) bt full
> #0 0x00007f029b173314 in str_sanitize_skip_start
> (src=0x2f6d6f632e61636f
<Address 0x2f6d6f632e61636f out of bounds>,
> max_bytes=64) at str-sanitize.c:13
> chr = 0
> i = 0
> #1 str_sanitize (src=0x2f6d6f632e61636f <Address 0x2f6d6f632e61636f out
of bounds>, max_bytes=64) at str-sanitize.c:88 > str = > i = > #2 0x00007f029b3d7f9e in get_var_expand_table (client=0x221ee70, > msg=0x187fe38 "proxy: SSL certificate not received from > \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 > \362w?\224\356K7\343\224 > \002\037\364!+\266\371\277O`K\021\b?\a\202\001:6") at > client-common.c:548 > tab = 0x187fe98 > #3 client_get_log_str (client=0x221ee70, msg=0x187fe38 "proxy: SSL > certificate not received from \314-A\235q\210\021\b\354\062Lz?)\367.\002 > \031\233 \362w?\224\356K7\343\224 > \002\037\364!+\266\371\277O`K\021\b?\a\202\001:6") > at client-common.c:644 > static_tab = {{key = 115 's', value = 0x0, long_key = 0x0}, {key > = 36 '$', value = 0x0, long_key = 0x0}, {key = 0 '\000', value = 0x0, > long_key = 0x0}} > func_table = {{key = 0x7f029b3e2d0c "passdb", func = > 0x7f029b3d7c70 }, {key = 0x0, func = 0}} > tab = > e = > str = > str2 = > pos = > #4 0x00007f029b3d847a in client_log_err (client=0x221ee70, > msg=0x187fe38 "proxy: SSL certificate not received from > \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 > \362w?\224\356K7\343\224 > \002\037\364!+\266\371\277O`K\021\b?\a\202\001:6") at > client-common.c:692 > _data_stack_cur_id = 3 > #5 0x00007f029b3db51e in login_proxy_ssl_handshaked (context=0x19b2530) > at login-proxy.c:765 > proxy = 0x19b2530 > #6 0x00007f029b3e0e4b in ssl_handshake (proxy=0x195df70) at > ssl-proxy-openssl.c:468 > ret = > #7 ssl_step (proxy=0x195df70) at ssl-proxy-openssl.c:519 > No locals. 
> #8 0x00007f029b15ee0b in io_loop_call_io (io=0x216d790) at ioloop.c:564 > ioloop = 0x18207b0 > t_id = 2 > __FUNCTION__ = "io_loop_call_io" > #9 0x00007f029b160407 in io_loop_handler_run_internal (ioloop= optimized out>) at ioloop-epoll.c:220 > ctx = 0x187b8d0 > events = > event = 0x1df4668 > list = 0x2025710 > io = > tv = {tv_sec = 11, tv_usec = 323409} > events_count = > msecs = > ret = 3 > i = > call = > __FUNCTION__ = "io_loop_handler_run_internal" > #10 0x00007f029b15eeb5 in io_loop_handler_run (ioloop=0x18207b0) at > ioloop.c:612 > No locals. > #11 0x00007f029b15f058 in io_loop_run (ioloop=0x18207b0) at ioloop.c:588 > __FUNCTION__ = "io_loop_run" > #12 0x00007f029b0f1b23 in master_service_run (service=0x1820650, > callback=) at master-service.c:640 > No locals. > #13 0x00007f029b3de593 in login_binary_run (binary= out>, argc=2, argv=0x1820390) at main.c:486 > set_pool = 0x1820b80 > login_socket = > c = > #14 0x00007f029ad4ad1d in __libc_start_main (main=0x402ac0
<main>,
> argc=2, ubp_av=0x7ffd637fd608, init=<value optimized out>, fini=<value
> optimized out>, rtld_fini=<value optimized out>,
> stack_end=0x7ffd637fd5f8) at libc-start.c:226
> result = <value optimized out>
> unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0,
> -4141182239951058275, 4204960, 140726272775680, 0, 0,
> 4142562126330825373, 4071998539020864157}, mask_was_saved = 0}}, priv =
> {pad = {0x0, 0x0, 0x404f70, 0x7ffd637fd608}, data = {
> prev = 0x0, cleanup = 0x0, canceltype = 4214640}}}
> not_first_call = <value optimized out>
> #15 0x00000000004029c9 in _start ()
> No symbol table info available.
>
>
> --
> Adi Pircalabu

From kremels at kreme.com Thu Oct 6 05:05:13 2016
From: kremels at kreme.com (@lbutlr)
Date: Wed, 5 Oct 2016 23:05:13 -0600
Subject: Auto-archiving
Message-ID: 

I'd like to know if there is a way to tell dovecot to

1) move messages older than # days to the Archive folder
2) rebuild the indexes
3) remove any folders that are left with no mail

Preferably, I'd like this to be an action I can schedule via crontab or
something to fire off for any users that want it. So, I do not want it
to do this across all users and mailboxes.

From aki.tuomi at dovecot.fi Thu Oct 6 05:50:48 2016
From: aki.tuomi at dovecot.fi (Aki Tuomi)
Date: Thu, 6 Oct 2016 08:50:48 +0300 (EEST)
Subject: Auto-archiving
In-Reply-To: References: Message-ID: <444220129.2635.1475733049859@appsuite-dev.open-xchange.com>

> On October 6, 2016 at 8:05 AM "@lbutlr" wrote:
>
>
> I'd like to know if there is a way to tell dovecot to
>
> 1) move messages older than # days to the Archive folder
> 2) rebuild the indexes
> 3) remove any folders that are left with no mail
>
> Preferably, I'd like this to be an action I can schedule via crontab or
> something to fire off for any users that want it. So, I do not want it
> to do this across all users and mailboxes.

Have you tried doveadm move?
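[Editor's note: the three steps asked for above can be sketched with stock
doveadm commands. This is an untested outline, not from the thread itself;
the user name, the 30-day cutoff and the folder names ("Archive",
"Old-Project") are placeholders you would substitute for your own setup.]

```shell
#!/bin/sh
# Untested sketch of a per-user archiving pass, suitable for cron.
# USER, the "Archive" destination, the 30d cutoff and "Old-Project"
# are placeholder assumptions, not values taken from this thread.
USER=jdoe@example.com

# 1) Move messages saved more than 30 days ago from INBOX to Archive.
doveadm move -u "$USER" Archive mailbox INBOX savedbefore 30d

# 2) Rebuild the indexes of the affected mailboxes.
doveadm force-resync -u "$USER" INBOX
doveadm force-resync -u "$USER" Archive

# 3) Delete a folder if it is now empty ("doveadm mailbox status"
#    prints e.g. "Old-Project messages=0").
count=$(doveadm mailbox status -u "$USER" messages Old-Project | cut -d= -f2)
[ "$count" = "0" ] && doveadm mailbox delete -u "$USER" Old-Project
```

Because doveadm move takes a destination mailbox followed by a search query,
the savedbefore cutoff can be varied per user, and running the script from
each user's own crontab keeps it opt-in rather than applied to all mailboxes.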
Aki

From stephan at rename-it.nl Thu Oct 6 09:19:03 2016
From: stephan at rename-it.nl (Stephan Bosch)
Date: Thu, 6 Oct 2016 11:19:03 +0200
Subject: [feature suggestion] pigeonhole - sendmail path for outgoing email
In-Reply-To: References: Message-ID: <1fe5b337-4a9f-52c0-4d2a-617fbcf428f4@rename-it.nl>

On 10/6/2016 at 12:46 AM, krzf83 at gmail.com wrote:
> pigeonhole seems to use /usr/sbin/sendmail for its outgoing emails -
> even though that does not seem to be documented anywhere. How about
> a setting to specify a different sendmail program path and parameters?

The sendmail_path setting is documented here (not Sieve-specific):

http://wiki.dovecot.org/LDA

Regards,

Stephan.

From stephan at rename-it.nl Thu Oct 6 09:20:29 2016
From: stephan at rename-it.nl (Stephan Bosch)
Date: Thu, 6 Oct 2016 11:20:29 +0200
Subject: [feature suggestion] pigeonhole - sendmail path for outgoing email
In-Reply-To: References: Message-ID: <930575af-592f-3afd-ef77-bfe10d5b0ee1@rename-it.nl>

On 10/6/2016 at 12:49 AM, krzf83 at gmail.com wrote:
> The possibility of adding a custom header to an outgoing sieve message
> would also be a nice feature.

This is currently only supported from Sieve itself:

https://tools.ietf.org/html/rfc5293

What do you need it for?

Regards,

Stephan.

From p.heinlein at heinlein-support.de Thu Oct 6 10:07:44 2016
From: p.heinlein at heinlein-support.de (Peer Heinlein)
Date: Thu, 6 Oct 2016 12:07:44 +0200
Subject: doveadm reload kicked proxy/director users
Message-ID: <57F62270.7050106@heinlein-support.de>

We just noticed that a doveadm reload kicked all existing imap-sessions
on our Dovecot Director.

We're surprised about that, because shutdown_clients is set to NO.

Is there any way to reload a config change without kicking all IMAP
sessions?

Peer

-- 
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-42
Fax: 030 / 405051-19

Zwangsangaben lt.
§35a GmbHG: HRB 93818 B / Amtsgericht
Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin

From lists at tigertech.com Thu Oct 6 19:01:07 2016
From: lists at tigertech.com (Robert L Mathews)
Date: Thu, 6 Oct 2016 12:01:07 -0700
Subject: [feature suggestion] pigeonhole - sendmail path for outgoing email
In-Reply-To: <1fe5b337-4a9f-52c0-4d2a-617fbcf428f4@rename-it.nl>
References: <1fe5b337-4a9f-52c0-4d2a-617fbcf428f4@rename-it.nl>
Message-ID: <49cb0c08-ead4-04c8-af85-348966b691a3@tigertech.com>

On 10/6/16 2:19 AM, Stephan Bosch wrote:
> The sendmail_path setting is documented here (not Sieve-specific):
>
> http://wiki.dovecot.org/LDA

And I can confirm that it works; we've been using this for a long time
and it correctly affects Sieve:

protocol lda {
  # used if sieve resends a message:
  sendmail_path = /usr/local/bin/dovecot-sendmail-wrapper
}

-- 
Robert L Mathews, Tiger Technologies, http://www.tigertech.net/

From adi at ddns.com.au Fri Oct 7 01:53:21 2016
From: adi at ddns.com.au (Adi Pircalabu)
Date: Fri, 07 Oct 2016 12:53:21 +1100
Subject: [imap-login] SSL related crashes using the latest 2.2.25
In-Reply-To: <724415054.2623.1475730142743@appsuite-dev.open-xchange.com>
References: <1234f8996ddd7278d94116ab17a4c4c9@ddns.com.au>
 <724415054.2623.1475730142743@appsuite-dev.open-xchange.com>
Message-ID: 

Thanks. See the "sanitized" doveconf -n output below. Unfortunately I
can't post log entries. Looking at the various data I'm collecting, the
crashes are always occurring during busy periods, when the maximum
number of connections configured on the backend IMAP servers is reached.
As a side note, all the backend servers are running with valid SSL
certificates. Perhaps under load, or when the per-IP connection limit is
reached, one of them is disconnecting unexpectedly, or doesn't send the
certificate?
# 2.2.25 (7be1766): /etc/dovecot/dovecot.conf # OS: Linux 2.6.32-642.4.2.el6.x86_64 x86_64 CentOS release 6.8 (Final) auth_cache_negative_ttl = 5 mins auth_cache_size = 16 M auth_cache_ttl = 18 hours default_client_limit = 6120 default_process_limit = 500 mbox_write_locks = fcntl namespace inbox { inbox = yes location = mailbox Drafts { special_use = \Drafts } mailbox Junk { special_use = \Junk } mailbox Sent { special_use = \Sent } mailbox "Sent Messages" { special_use = \Sent } mailbox Trash { special_use = \Trash } prefix = } passdb { args = /etc/dovecot/dovecot-sql.conf.ext driver = sql } service auth { client_limit = 6120 } service imap-login { process_limit = 2048 process_min_avail = 20 service_count = 0 vsz_limit = 256 M } service imap { process_limit = 2048 } service pop3 { process_limit = 1024 } ssl_cert = It seems to error on ssl certificate not received. > > Can you post doveconf -n and logs? > > doveconf -a is usually not wanted. > > Aki > >> On October 6, 2016 at 7:27 AM Adi Pircalabu wrote: >> >> >> I'm running Dovecot as proxy in front of some IMAP/POP3 Dovecot & >> Courier-IMAP servers and in the last couple of days I've been seeing a >> lot of imap-login crashes (signal 11) on both 2.2.18 and 2.2.25, all >> SSL >> related. The following backtraces are taken running 2.2.25, built from >> source on a test system similar to the live proxy servers. >> OS: CentOS 6.8 64bit >> Packages: openssl-1.0.1e-48.el6_8.3.x86_64, >> dovecot-2.2.25-2.el6.x86_64 >> built from source RPM. >> >> Can post "doveconf -a" if required. >> >> Core was generated by `dovecot/imap-login -D'. >> Program terminated with signal 11, Segmentation fault. >> #0 ssl_proxy_has_broken_client_cert (proxy=0x0) at >> ssl-proxy-openssl.c:677 >> 677 { >> (gdb) bt full >> #0 ssl_proxy_has_broken_client_cert (proxy=0x0) at >> ssl-proxy-openssl.c:677 >> No locals. 
>> #1 0x00007fdec4e6b489 in login_proxy_ssl_handshaked >> (context=0x14b4170) >> at login-proxy.c:759 >> proxy = 0x14b4170 >> #2 0x00007fdec4e70e4b in ssl_handshake (proxy=0x169d7b0) at >> ssl-proxy-openssl.c:468 >> ret = >> #3 ssl_step (proxy=0x169d7b0) at ssl-proxy-openssl.c:519 >> No locals. >> #4 0x00007fdec4beee0b in io_loop_call_io (io=0x13fdab0) at >> ioloop.c:564 >> ioloop = 0x12a07b0 >> t_id = 2 >> __FUNCTION__ = "io_loop_call_io" >> #5 0x00007fdec4bf0407 in io_loop_handler_run_internal (ioloop=> optimized out>) at ioloop-epoll.c:220 >> ctx = 0x12fb8d0 >> events = >> event = 0x171fb20 >> list = 0x15f8c50 >> io = >> tv = {tv_sec = 46, tv_usec = 134490} >> events_count = >> msecs = >> ret = 1 >> i = >> call = >> __FUNCTION__ = "io_loop_handler_run_internal" >> #6 0x00007fdec4beeeb5 in io_loop_handler_run (ioloop=0x12a07b0) at >> ioloop.c:612 >> No locals. >> #7 0x00007fdec4bef058 in io_loop_run (ioloop=0x12a07b0) at >> ioloop.c:588 >> __FUNCTION__ = "io_loop_run" >> #8 0x00007fdec4b81b23 in master_service_run (service=0x12a0650, >> callback=) at master-service.c:640 >> No locals. >> #9 0x00007fdec4e6e593 in login_binary_run (binary=> out>, argc=2, argv=0x12a0390) at main.c:486 >> set_pool = 0x12a0b80 >> login_socket = >> c = >> #10 0x00007fdec47dad1d in __libc_start_main (main=0x402ac0
, >> argc=2, ubp_av=0x7ffc53ee5688, init=, fini=> optimized out>, rtld_fini=, >> stack_end=0x7ffc53ee5678) at libc-start.c:226 >> result = >> unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, >> 5496455093114277129, 4204960, 140721716614784, 0, 0, >> -5494405746439844599, -5477823887334535927}, mask_was_saved = 0}}, >> priv >> = {pad = {0x0, 0x0, 0x404f70, 0x7ffc53ee5688}, data = { >> prev = 0x0, cleanup = 0x0, canceltype = 4214640}}} >> not_first_call = >> #11 0x00000000004029c9 in _start () >> No symbol table info available. >> >> Core was generated by `dovecot/imap-login -D'. >> Program terminated with signal 11, Segmentation fault. >> #0 0x00007f1a58620dec in _IO_vfprintf_internal (s=> out>, format=, ap=) at >> vfprintf.c:1641 >> 1641 process_string_arg (((struct printf_spec *) NULL)); >> (gdb) bt full >> #0 0x00007f1a58620dec in _IO_vfprintf_internal (s=> out>, format=, ap=) at >> vfprintf.c:1641 >> len = >> string_malloced = >> step0_jumps = {0, -1285, -1198, 3818, 3910, 3206, 3307, 4086, >> 1925, 2133, 2249, 3731, 4474, -4059, -1109, -1062, 868, 956, 968, 980, >> -1505, -495, 665, 755, 827, -3962, 395, 4392, -4059, 3997} >> space = 0 >> is_short = 0 >> use_outdigits = 0 >> step1_jumps = {0, 0, 0, 0, 0, 0, 0, 0, 0, 2133, 2249, 3731, >> 4474, -4059, -1109, -1062, 868, 956, 968, 980, -1505, -495, 665, 755, >> 827, -3962, 395, 4392, -4059, 0} >> group = 0 >> prec = -1 >> step2_jumps = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2249, 3731, >> 4474, >> -4059, -1109, -1062, 868, 956, 968, 980, -1505, -495, 665, 755, 827, >> -3962, 395, 4392, -4059, 0} >> string = >> left = 0 >> is_long_double = 0 >> width = 0 >> step3a_jumps = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2336, 0, 0, 0, >> -1109, -1062, 868, 956, 968, 0, 0, 0, 0, 755, 0, 0, 0, 0, 0, 0} >> alt = 0 >> showsign = 0 >> is_long = 0 >> is_char = 0 >> pad = 32 ' ' >> step3b_jumps = {0 , 4474, 0, 0, -1109, >> -1062, >> 868, 956, 968, 980, -1505, -495, 665, 755, 827, -3962, 395, 0, 0, 0} >> step4_jumps = {0 , -1109, -1062, 868, 956, >> 
968, 980, -1505, -495, 665, 755, 827, -3962, 395, 0, 0, 0} >> is_negative = >> base = >> the_arg = {pa_wchar = 0 L'\000', pa_int = 0, pa_long_int = 0, >> pa_long_long_int = 0, pa_u_int = 0, pa_u_long_int = 0, >> pa_u_long_long_int = 0, pa_double = 0, pa_long_double = 0, pa_string = >> 0x0, pa_wstring = 0x0, >> pa_pointer = 0x0, pa_user = 0x0} >> spec = 115 's' >> _buffer = {__routine = 0, __arg = 0xf583a1d84, __canceltype = >> 24, __prev = 0x0} >> _avail = >> thousands_sep = 0x0 >> grouping = 0xffffffffffffffff
> of >> bounds> >> done = 97 >> f = 0x7f1a58c90b0d "s" >> lead_str_end = 0x7f1a58c90b05 "%s:%u: %s" >> end_of_spec = >> work_buffer = >> "\341v\361[\372\037\002\363\315\301y\017\302.\a\272\306B\267\001\377\244\276Uu\023\005\301\a\227\f\353\374\062V\002\000\000\000\000\320\006}S\375\177\000\000\340\062V\002\000\000\000\000\273\252\377W\032\177\000\000(\036f\002\000\000\000\000|\004}S\375\177\000\000\320\006}S\375\177\000\000a+\aX\032\177\000\000(\036f\002\000\000\000\000 >> \036f\002\000\000\000\000\360\003}S\375\177\000\000\301\025\000X\000\000\000\000\210\033f\002\000\000\000\000(\036f\002\024\000\000\000\t`\v\217~0\"\"\\\200'\217X_\331q\325o\244\210\000\000\000\000\340\062V\002\000\000\000\000\n\000\000\000\000\000\000\000Wo\020X\000\000\000\000\b\024V\002\000\000\000\000\226$\377W\032\177\000\000\001\000\000\000\000\000\000\000\211\001\000\000\000\000\000\000\260\006}S\375\177\000\000\240\257\235\003\000\000\000\000\260\006}S\375\177\000\000\200"... >> workstart = 0x0 >> workend = 0x7ffd537d0748 "" >> ap_save = {{gp_offset = 8, fp_offset = 48, overflow_arg_area >> = >> 0x7ffd537d0b60, reg_save_area = 0x7ffd537d0aa0}} >> nspecs_done = 2 >> save_errno = 0 >> readonly_format = 0 >> args_malloced = 0x0 >> specs = 0xa00000001 >> specs_malloced = false >> jump_table = >> "\001\000\000\004\000\016\000\006\000\000\a\002\000\003\t\000\005\b\b\b\b\b\b\b\b\b\000\000\000\000\000\000\000\032\000\031\000\023\023\023\000\035\000\000\f\000\000\000\000\000\000\025\000\000\000\000\022\000\r\000\000\000\000\000\000\032\000\024\017\023\023\023\n\017\034\000\v\030\027\021\026\f\000\025\033\020\000\000\022\000\r" >> #1 0x00007f1a586d8c50 in ___vsnprintf_chk ( >> s=0x1fe0e38 "proxy: Received invalid SSL certificate from >> \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 >> \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315:6: >> k/CN=AddTrust External CA Root", >> maxlen=, flags=1, slen=> out>, >> format=0x7f1a58c90ad8 "proxy: Received invalid SSL certificate from >> 
%s:%u: %s", args=0x7ffd537d0a80) at vsnprintf_chk.c:65 >> sf = {f = {_sbf = {_f = {_flags = -72515583, >> _IO_read_ptr = 0x1fe0e38 "proxy: Received invalid SSL >> certificate from \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 >> \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315:6: >> k/CN=AddTrust External CA Root", _IO_read_end = 0x1fe0e38 "proxy: >> Received invalid SSL certificate from >> \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 >> \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315:6: >> k/CN=AddTrust External CA Root", >> _IO_read_base = 0x1fe0e38 "proxy: Received invalid >> SSL >> certificate from \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 >> \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315:6: >> k/CN=AddTrust External CA Root", _IO_write_base = 0x1fe0e38 "proxy: >> Received invalid SSL certificate from >> \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 >> \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315:6: >> k/CN=AddTrust External CA Root", >> _IO_write_ptr = 0x1fe0e99 "k/CN=AddTrust External CA >> Root", _IO_write_end = 0x1fe0f6d "oodvale.vic.au): disconnecting >> 127.0.0.1 (Disconnected by client: EOF(0s idle, in=217, out=796))", >> _IO_buf_base = 0x1fe0e38 "proxy: Received invalid SSL >> certificate from \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 >> \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315:6: >> k/CN=AddTrust External CA Root", _IO_buf_end = 0x1fe0f6d >> "oodvale.vic.au): disconnecting 127.0.0.1 (Disconnected by client: >> EOF(0s idle, in=217, out=796))", _IO_save_base = 0x0, _IO_backup_base >> = >> 0x0, _IO_save_end = 0x0, _markers = 0x0, _chain = 0x0, >> _fileno = 1489493956, _flags2 = 4, _old_offset = >> 139751135368008, _cur_column = 0, _vtable_offset = -57 '\307', >> _shortbuf >> = "X", _lock = 0x0, _offset = 4294967673, _codecvt = 0x381f620, >> _wide_data = 0x7ffd537d09c0, >> _freeres_list = 0x0, _freeres_buf = 0x7ffd537d0ab0, 
>> _freeres_size = 140726004157144, _mode = -1, _unused2 = >> "\032\177\000\000\000\000\000\000\000\000\000\000\315\320e\207\000\000\000"}, >> vtable = 0x7f1a58966440}, _s = { >> _allocate_buffer = 0, _free_buffer = 0}}, >> overflow_buf = "\001", '\000' , >> "\001\000\000\000\000\000\000\000\350V\vY\032\177\000\000\200\302g\256\333\333;\221\330\n\311X\032\177\000\000\034\n}S\375\177\000\000\002\000\000\000\000\000\000"} >> ret = >> #2 0x00007f1a58a220ff in vsnprintf (format=0x7f1a58c90ad8 "proxy: >> Received invalid SSL certificate from %s:%u: %s", args=0x7ffd537d0a80, >> size_r=0x7ffd537d0a5c) at /usr/include/bits/stdio2.h:78 >> No locals. >> #3 t_noalloc_strdup_vprintf (format=0x7f1a58c90ad8 "proxy: Received >> invalid SSL certificate from %s:%u: %s", args=0x7ffd537d0a80, >> size_r=0x7ffd537d0a5c) at strfuncs.c:132 >> args2 = {{gp_offset = 8, fp_offset = 48, overflow_arg_area = >> 0x7ffd537d0b60, reg_save_area = 0x7ffd537d0aa0}} >> tmp = 0x1fe0e38 "proxy: Received invalid SSL certificate from >> \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 >> \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315:6: >> k/CN=AddTrust External CA Root" >> init_size = 310 >> ret = >> __FUNCTION__ = "t_noalloc_strdup_vprintf" >> #4 0x00007f1a58a221d9 in p_strdup_vprintf (pool=0x7f1a58c76830, >> format=, args=) at >> strfuncs.c:156 >> tmp = >> buf = >> size = >> #5 0x00007f1a58a222ea in t_strdup_printf (format=> out>) >> at strfuncs.c:263 >> args = {{gp_offset = 32, fp_offset = 48, overflow_arg_area = >> 0x7ffd537d0b60, reg_save_area = 0x7ffd537d0aa0}} >> ret = 0x0 >> #6 0x00007f1a58c88548 in login_proxy_ssl_handshaked >> (context=0x3141260) >> at login-proxy.c:760 >> proxy = 0x3141260 >> #7 0x00007f1a58c8de4b in ssl_handshake (proxy=0x3498970) at >> ssl-proxy-openssl.c:468 >> ret = >> #8 ssl_step (proxy=0x3498970) at ssl-proxy-openssl.c:519 >> No locals. 
>> #9 0x00007f1a58a0be0b in io_loop_call_io (io=0x322e900) at >> ioloop.c:564 >> ioloop = 0x1f817b0 >> t_id = 2 >> __FUNCTION__ = "io_loop_call_io" >> #10 0x00007f1a58a0d407 in io_loop_handler_run_internal (ioloop=> optimized out>) at ioloop-epoll.c:220 >> ctx = 0x1fdc8d0 >> events = >> event = 0x34a22e8 >> list = 0x317b250 >> io = >> tv = {tv_sec = 0, tv_usec = 614018} >> events_count = >> msecs = >> ret = 3 >> i = >> call = >> __FUNCTION__ = "io_loop_handler_run_internal" >> #11 0x00007f1a58a0beb5 in io_loop_handler_run (ioloop=0x1f817b0) at >> ioloop.c:612 >> No locals. >> #12 0x00007f1a58a0c058 in io_loop_run (ioloop=0x1f817b0) at >> ioloop.c:588 >> __FUNCTION__ = "io_loop_run" >> #13 0x00007f1a5899eb23 in master_service_run (service=0x1f81650, >> callback=) at master-service.c:640 >> No locals. >> #14 0x00007f1a58c8b593 in login_binary_run (binary=> out>, argc=2, argv=0x1f81390) at main.c:486 >> set_pool = 0x1f81b80 >> login_socket = >> c = >> #15 0x00007f1a585f7d1d in __libc_start_main (main=0x402ac0
, >> argc=2, ubp_av=0x7ffd537d0de8, init=, fini=> optimized out>, rtld_fini=, >> stack_end=0x7ffd537d0dd8) at libc-start.c:226 >> result = >> unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, >> -6805578527007124004, 4204960, 140726004157920, 0, 0, >> 6806940805326362076, 6897556252474660316}, mask_was_saved = 0}}, priv >> = >> {pad = {0x0, 0x0, 0x404f70, 0x7ffd537d0de8}, data = { >> prev = 0x0, cleanup = 0x0, canceltype = 4214640}}} >> not_first_call = >> #16 0x00000000004029c9 in _start () >> No symbol table info available. >> >> Core was generated by `dovecot/imap-login -D'. >> Program terminated with signal 11, Segmentation fault. >> #0 t_strcut (str=0xffffffffffffffff
<Address 0xffffffffffffffff out of bounds>, cutchar=64 '@') at strfuncs.c:294 >> 294 for (p = str; *p != '\0'; p++) { >> (gdb) bt full >> #0 t_strcut (str=0xffffffffffffffff
<Address 0xffffffffffffffff out of bounds>, cutchar=64 '@') at strfuncs.c:294 >> p = 0xffffffffffffffff
<Address 0xffffffffffffffff out of bounds> >> #1 0x00007f1afa1a7d2f in get_var_expand_users (tab=0x1c37e98, >> user=0xffffffffffffffff
) at >> client-common.c:523 >> i = >> #2 0x00007f1afa1a7f29 in get_var_expand_table (client=0x27b85b0, >> msg=0x1c37e38 "proxy: SSL certificate not received from >> \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 >> \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315@:6") >> at >> client-common.c:541 >> tab = 0x1c37e98 >> #3 client_get_log_str (client=0x27b85b0, msg=0x1c37e38 "proxy: SSL >> certificate not received from >> \314-A\235q\210\021\b\354\062Lz?)\367.\002 >> \031\233 \362w?\224\356K7\343\224 >> \002\037\364!+\266\371\277O`K\021\b\315@:6") >> at client-common.c:644 >> static_tab = {{key = 115 's', value = 0x0, long_key = 0x0}, >> {key >> = 36 '$', value = 0x0, long_key = 0x0}, {key = 0 '\000', value = 0x0, >> long_key = 0x0}} >> func_table = {{key = 0x7f1afa1b2d0c "passdb", func = >> 0x7f1afa1a7c70 }, {key = 0x0, func = >> 0}} >> tab = >> e = >> str = >> str2 = >> pos = >> #4 0x00007f1afa1a847a in client_log_err (client=0x27b85b0, >> msg=0x1c37e38 "proxy: SSL certificate not received from >> \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 >> \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315@:6") >> at >> client-common.c:692 >> _data_stack_cur_id = 3 >> #5 0x00007f1afa1ab51e in login_proxy_ssl_handshaked >> (context=0x237e910) >> at login-proxy.c:765 >> proxy = 0x237e910 >> #6 0x00007f1afa1b0e4b in ssl_handshake (proxy=0x23cb660) at >> ssl-proxy-openssl.c:468 >> ret = >> #7 ssl_step (proxy=0x23cb660) at ssl-proxy-openssl.c:519 >> No locals. 
>> #8 0x00007f1af9f2ee0b in io_loop_call_io (io=0x285cf20) at >> ioloop.c:564 >> ioloop = 0x1bd87b0 >> t_id = 2 >> __FUNCTION__ = "io_loop_call_io" >> #9 0x00007f1af9f30407 in io_loop_handler_run_internal (ioloop=> optimized out>) at ioloop-epoll.c:220 >> ctx = 0x1c338d0 >> events = >> event = 0x260eac0 >> list = 0x227b980 >> io = >> tv = {tv_sec = 0, tv_usec = 519697} >> events_count = >> msecs = >> ret = 1 >> i = >> call = >> __FUNCTION__ = "io_loop_handler_run_internal" >> #10 0x00007f1af9f2eeb5 in io_loop_handler_run (ioloop=0x1bd87b0) at >> ioloop.c:612 >> No locals. >> #11 0x00007f1af9f2f058 in io_loop_run (ioloop=0x1bd87b0) at >> ioloop.c:588 >> __FUNCTION__ = "io_loop_run" >> #12 0x00007f1af9ec1b23 in master_service_run (service=0x1bd8650, >> callback=) at master-service.c:640 >> No locals. >> #13 0x00007f1afa1ae593 in login_binary_run (binary=> out>, argc=2, argv=0x1bd8390) at main.c:486 >> set_pool = 0x1bd8b80 >> login_socket = >> c = >> #14 0x00007f1af9b1ad1d in __libc_start_main (main=0x402ac0
, >> argc=2, ubp_av=0x7ffcdfc7cd68, init=, fini=> optimized out>, rtld_fini=, >> stack_end=0x7ffcdfc7cd58) at libc-start.c:226 >> result = >> unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, >> -5108975228267825424, 4204960, 140724062899552, 0, 0, >> 5107356402929858288, 5128673613026916080}, mask_was_saved = 0}}, priv >> = >> {pad = {0x0, 0x0, 0x404f70, 0x7ffcdfc7cd68}, data = { >> prev = 0x0, cleanup = 0x0, canceltype = 4214640}}} >> not_first_call = >> #15 0x00000000004029c9 in _start () >> No symbol table info available. >> >> Core was generated by `dovecot/imap-login -D'. >> Program terminated with signal 11, Segmentation fault. >> #0 ssl_proxy_is_handshaked (proxy=0x21a930a940a43715) at >> ssl-proxy-openssl.c:720 >> 720 { >> (gdb) bt full >> #0 ssl_proxy_is_handshaked (proxy=0x21a930a940a43715) at >> ssl-proxy-openssl.c:720 >> No locals. >> #1 0x00007f0c84b63326 in get_var_expand_table (client=0xc49c50, >> msg=0x8b5e38 "proxy: SSL certificate not received from >> \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 >> \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315@:6") >> at >> client-common.c:556 >> ssl_state = >> ssl_error = >> tab = 0x8b5e98 >> #2 client_get_log_str (client=0xc49c50, msg=0x8b5e38 "proxy: SSL >> certificate not received from >> \314-A\235q\210\021\b\354\062Lz?)\367.\002 >> \031\233 \362w?\224\356K7\343\224 >> \002\037\364!+\266\371\277O`K\021\b\315@:6") >> at client-common.c:644 >> static_tab = {{key = 115 's', value = 0x0, long_key = 0x0}, >> {key >> = 36 '$', value = 0x0, long_key = 0x0}, {key = 0 '\000', value = 0x0, >> long_key = 0x0}} >> func_table = {{key = 0x7f0c84b6dd0c "passdb", func = >> 0x7f0c84b62c70 }, {key = 0x0, func = >> 0}} >> tab = >> e = >> str = >> str2 = >> pos = >> #3 0x00007f0c84b6347a in client_log_err (client=0xc49c50, >> msg=0x8b5e38 "proxy: SSL certificate not received from >> \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 >> \362w?\224\356K7\343\224 \002\037\364!+\266\371\277O`K\021\b\315@:6") 
>> at >> client-common.c:692 >> _data_stack_cur_id = 3 >> #4 0x00007f0c84b6651e in login_proxy_ssl_handshaked >> (context=0xf464b0) >> at login-proxy.c:765 >> proxy = 0xf464b0 >> #5 0x00007f0c84b6be4b in ssl_handshake (proxy=0xd5d600) at >> ssl-proxy-openssl.c:468 >> ret = >> #6 ssl_step (proxy=0xd5d600) at ssl-proxy-openssl.c:519 >> No locals. >> #7 0x00007f0c848e9e0b in io_loop_call_io (io=0xdf5ea0) at >> ioloop.c:564 >> ioloop = 0x8567b0 >> t_id = 2 >> __FUNCTION__ = "io_loop_call_io" >> #8 0x00007f0c848eb407 in io_loop_handler_run_internal (ioloop=> optimized out>) at ioloop-epoll.c:220 >> ctx = 0x8b18d0 >> events = >> event = 0xf305f0 >> list = 0xc4a700 >> io = >> tv = {tv_sec = 0, tv_usec = 954174} >> events_count = >> msecs = >> ret = 1 >> i = >> call = >> __FUNCTION__ = "io_loop_handler_run_internal" >> #9 0x00007f0c848e9eb5 in io_loop_handler_run (ioloop=0x8567b0) at >> ioloop.c:612 >> No locals. >> #10 0x00007f0c848ea058 in io_loop_run (ioloop=0x8567b0) at >> ioloop.c:588 >> __FUNCTION__ = "io_loop_run" >> #11 0x00007f0c8487cb23 in master_service_run (service=0x856650, >> callback=) at master-service.c:640 >> No locals. >> #12 0x00007f0c84b69593 in login_binary_run (binary=> out>, argc=2, argv=0x856390) at main.c:486 >> set_pool = 0x856b80 >> login_socket = >> c = >> #13 0x00007f0c844d5d1d in __libc_start_main (main=0x402ac0
, >> argc=2, ubp_av=0x7ffd41fc1f28, init=, fini=> optimized out>, rtld_fini=, >> stack_end=0x7ffd41fc1f18) at libc-start.c:226 >> result = >> unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, >> -3476376496289340868, 4204960, 140725710495520, 0, 0, >> 3475633251103023676, 3591732184888123964}, mask_was_saved = 0}}, priv >> = >> {pad = {0x0, 0x0, 0x404f70, 0x7ffd41fc1f28}, data = { >> prev = 0x0, cleanup = 0x0, canceltype = 4214640}}} >> not_first_call = >> #14 0x00000000004029c9 in _start () >> No symbol table info available. >> >> Core was generated by `dovecot/imap-login -D'. >> Program terminated with signal 11, Segmentation fault. >> #0 0x00007ff9e5e5f40b in p_malloc (pool=0x1f10c90, str=0x1f1d3d0 >> "4qyKMRI+AAAAAAAA") at mempool.h:76 >> 76 return pool->v->malloc(pool, size); >> (gdb) bt full >> #0 0x00007ff9e5e5f40b in p_malloc (pool=0x1f10c90, str=0x1f1d3d0 >> "4qyKMRI+AAAAAAAA") at mempool.h:76 >> No locals. >> #1 p_strdup (pool=0x1f10c90, str=0x1f1d3d0 "4qyKMRI+AAAAAAAA") at >> strfuncs.c:43 >> mem = >> len = 17 >> #2 0x00007ff9e60c2e9f in client_get_session_id (client=0x1f10980) at >> client-common.c:482 >> buf = 0x1f1d328 >> base64_buf = 0x1f1d398 >> tv = {tv_sec = 1475622745, tv_usec = 58530} >> timestamp = 1475622745058530 >> i = 48 >> #3 0x00007ff9e60c302c in get_var_expand_table (client=0x1f10980, >> msg=0x1f1ce38 "proxy: SSL certificate not received from (null):0") at >> client-common.c:568 >> tab = 0x1f1ce70 >> #4 client_get_log_str (client=0x1f10980, msg=0x1f1ce38 "proxy: SSL >> certificate not received from (null):0") at client-common.c:644 >> static_tab = {{key = 115 's', value = 0x0, long_key = 0x0}, >> {key >> = 36 '$', value = 0x0, long_key = 0x0}, {key = 0 '\000', value = 0x0, >> long_key = 0x0}} >> func_table = {{key = 0x7ff9e60cdd0c "passdb", func = >> 0x7ff9e60c2c70 }, {key = 0x0, func = >> 0}} >> tab = >> e = >> str = >> str2 = >> pos = >> #5 0x00007ff9e60c347a in client_log_err (client=0x1f10980, >> msg=0x1f1ce38 "proxy: SSL certificate 
not received from (null):0") at >> client-common.c:692 >> _data_stack_cur_id = 3 >> #6 0x00007ff9e60c651e in login_proxy_ssl_handshaked >> (context=0x256cfb0) >> at login-proxy.c:765 >> proxy = 0x256cfb0 >> #7 0x00007ff9e60cbe4b in ssl_handshake (proxy=0x23f6710) at >> ssl-proxy-openssl.c:468 >> ret = >> #8 ssl_step (proxy=0x23f6710) at ssl-proxy-openssl.c:519 >> No locals. >> #9 0x00007ff9e5e49e0b in io_loop_call_io (io=0x256cc40) at >> ioloop.c:564 >> ioloop = 0x1ebd7b0 >> t_id = 2 >> __FUNCTION__ = "io_loop_call_io" >> #10 0x00007ff9e5e4b407 in io_loop_handler_run_internal (ioloop=> optimized out>) at ioloop-epoll.c:220 >> ctx = 0x1f188d0 >> events = >> event = 0x24d25f0 >> list = 0x2561f10 >> io = >> tv = {tv_sec = 0, tv_usec = 551105} >> events_count = >> msecs = >> ret = 1 >> i = >> call = >> __FUNCTION__ = "io_loop_handler_run_internal" >> #11 0x00007ff9e5e49eb5 in io_loop_handler_run (ioloop=0x1ebd7b0) at >> ioloop.c:612 >> No locals. >> #12 0x00007ff9e5e4a058 in io_loop_run (ioloop=0x1ebd7b0) at >> ioloop.c:588 >> __FUNCTION__ = "io_loop_run" >> #13 0x00007ff9e5ddcb23 in master_service_run (service=0x1ebd650, >> callback=) at master-service.c:640 >> No locals. >> #14 0x00007ff9e60c9593 in login_binary_run (binary=> out>, argc=2, argv=0x1ebd390) at main.c:486 >> set_pool = 0x1ebdb80 >> login_socket = >> c = >> #15 0x00007ff9e5a35d1d in __libc_start_main (main=0x402ac0
, >> argc=2, ubp_av=0x7ffe367d2178, init=, fini=> optimized out>, rtld_fini=, >> stack_end=0x7ffe367d2168) at libc-start.c:226 >> result = >> unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, >> -3428671975511032229, 4204960, 140729812590960, 0, 0, >> 3429075480732761691, 3429810282407657051}, mask_was_saved = 0}}, priv >> = >> {pad = {0x0, 0x0, 0x404f70, 0x7ffe367d2178}, data = { >> prev = 0x0, cleanup = 0x0, canceltype = 4214640}}} >> not_first_call = >> #16 0x00000000004029c9 in _start () >> No symbol table info available. >> >> Core was generated by `dovecot/imap-login -D'. >> Program terminated with signal 11, Segmentation fault. >> #0 0x00007f029b173314 in str_sanitize_skip_start >> (src=0x2f6d6f632e61636f
<Address 0x2f6d6f632e61636f out of bounds>, >> max_bytes=64) at str-sanitize.c:13 13 for (i = 0; i < max_bytes && src[i] != '\0'; ) { >> (gdb) bt full >> #0 0x00007f029b173314 in str_sanitize_skip_start >> (src=0x2f6d6f632e61636f
<Address 0x2f6d6f632e61636f out of bounds>, >> max_bytes=64) at str-sanitize.c:13 >> chr = 0 >> i = 0 >> #1 str_sanitize (src=0x2f6d6f632e61636f
> out >> of bounds>, max_bytes=64) at str-sanitize.c:88 >> str = >> i = >> #2 0x00007f029b3d7f9e in get_var_expand_table (client=0x221ee70, >> msg=0x187fe38 "proxy: SSL certificate not received from >> \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 >> \362w?\224\356K7\343\224 >> \002\037\364!+\266\371\277O`K\021\b?\a\202\001:6") at >> client-common.c:548 >> tab = 0x187fe98 >> #3 client_get_log_str (client=0x221ee70, msg=0x187fe38 "proxy: SSL >> certificate not received from >> \314-A\235q\210\021\b\354\062Lz?)\367.\002 >> \031\233 \362w?\224\356K7\343\224 >> \002\037\364!+\266\371\277O`K\021\b?\a\202\001:6") >> at client-common.c:644 >> static_tab = {{key = 115 's', value = 0x0, long_key = 0x0}, >> {key >> = 36 '$', value = 0x0, long_key = 0x0}, {key = 0 '\000', value = 0x0, >> long_key = 0x0}} >> func_table = {{key = 0x7f029b3e2d0c "passdb", func = >> 0x7f029b3d7c70 }, {key = 0x0, func = >> 0}} >> tab = >> e = >> str = >> str2 = >> pos = >> #4 0x00007f029b3d847a in client_log_err (client=0x221ee70, >> msg=0x187fe38 "proxy: SSL certificate not received from >> \314-A\235q\210\021\b\354\062Lz?)\367.\002 \031\233 >> \362w?\224\356K7\343\224 >> \002\037\364!+\266\371\277O`K\021\b?\a\202\001:6") at >> client-common.c:692 >> _data_stack_cur_id = 3 >> #5 0x00007f029b3db51e in login_proxy_ssl_handshaked >> (context=0x19b2530) >> at login-proxy.c:765 >> proxy = 0x19b2530 >> #6 0x00007f029b3e0e4b in ssl_handshake (proxy=0x195df70) at >> ssl-proxy-openssl.c:468 >> ret = >> #7 ssl_step (proxy=0x195df70) at ssl-proxy-openssl.c:519 >> No locals. 
>> #8 0x00007f029b15ee0b in io_loop_call_io (io=0x216d790) at >> ioloop.c:564 >> ioloop = 0x18207b0 >> t_id = 2 >> __FUNCTION__ = "io_loop_call_io" >> #9 0x00007f029b160407 in io_loop_handler_run_internal (ioloop=> optimized out>) at ioloop-epoll.c:220 >> ctx = 0x187b8d0 >> events = >> event = 0x1df4668 >> list = 0x2025710 >> io = >> tv = {tv_sec = 11, tv_usec = 323409} >> events_count = >> msecs = >> ret = 3 >> i = >> call = >> __FUNCTION__ = "io_loop_handler_run_internal" >> #10 0x00007f029b15eeb5 in io_loop_handler_run (ioloop=0x18207b0) at >> ioloop.c:612 >> No locals. >> #11 0x00007f029b15f058 in io_loop_run (ioloop=0x18207b0) at >> ioloop.c:588 >> __FUNCTION__ = "io_loop_run" >> #12 0x00007f029b0f1b23 in master_service_run (service=0x1820650, >> callback=) at master-service.c:640 >> No locals. >> #13 0x00007f029b3de593 in login_binary_run (binary=> out>, argc=2, argv=0x1820390) at main.c:486 >> set_pool = 0x1820b80 >> login_socket = >> c = >> #14 0x00007f029ad4ad1d in __libc_start_main (main=0x402ac0
, >> argc=2, ubp_av=0x7ffd637fd608, init=, fini=> optimized out>, rtld_fini=, >> stack_end=0x7ffd637fd5f8) at libc-start.c:226 >> result = >> unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, >> -4141182239951058275, 4204960, 140726272775680, 0, 0, >> 4142562126330825373, 4071998539020864157}, mask_was_saved = 0}}, priv >> = >> {pad = {0x0, 0x0, 0x404f70, 0x7ffd637fd608}, data = { >> prev = 0x0, cleanup = 0x0, canceltype = 4214640}}} >> not_first_call = >> #15 0x00000000004029c9 in _start () >> No symbol table info available. >> >> >> -- >> Adi Pircalabu From kremels at kreme.com Fri Oct 7 13:13:15 2016 From: kremels at kreme.com (@lbutlr) Date: Fri, 7 Oct 2016 07:13:15 -0600 Subject: Auto-archiving In-Reply-To: <444220129.2635.1475733049859@appsuite-dev.open-xchange.com> References: <444220129.2635.1475733049859@appsuite-dev.open-xchange.com> Message-ID: <7F43A200-346F-4068-B6B6-E9A9197CCA67@kreme.com> On 05 Oct 2016, at 23:50, Aki Tuomi wrote: >> On October 6, 2016 at 8:05 AM "@lbutlr" wrote: >> >> >> I'd like to know if there is a way to tell dovecot to >> >> 1) move messages older than # days to the Archive folder >> 2) rebuild the indexes >> 3) remove any folders that are left with no mail >> >> Preferably, I'd like this to be an action I can schedule via crontab or something to fire off for any users that want it. So, I do not want it to do this across all users and mailboxes. > > Have you tried doveadm move? I don't see anything in doveadm move that supports the age of the message. At least according to doveadm-move (and boy, that was fun to find, since it is not mentioned anywhere in man doveadm unless you notice the (1) might indicate a man page!) it only supports dates, which makes simply doing this in a crontab (or something similarly straightforward) more problematic, since I would have to generate a date string. Ah, wait, I just found the DATE SPECIFICATION section of doveadm-search-query.
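[Editor's note: the DATE SPECIFICATION syntax discussed here accepts both relative intervals and absolute dates. A dry-run sketch, not taken from the thread; "jane" is a placeholder user, and the run=echo prefix only prints the commands instead of executing them:]

```shell
# Dry-run sketch of doveadm-search-query date specifications.
# 'jane' is a placeholder user; set run= (empty) to actually execute.
run=echo
$run doveadm search -u jane mailbox INBOX BEFORE 30d          # relative: older than 30 days
$run doveadm search -u jane mailbox INBOX SINCE 2016-09-01    # absolute date also works
```

With run=echo this only prints the two command lines, which is a cheap way to check the query syntax before putting anything in cron.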
Nothing quite like having three man pages to get useful information on one command. Thanks for forcing me to look again, though. doveadm move -u jane Archive mailbox INBOX BEFORE 30d And then I have to do that for all folders one at a time, yes? From jlambot at gmail.com Fri Oct 7 14:53:48 2016 From: jlambot at gmail.com (Julien Lambot) Date: Fri, 7 Oct 2016 16:53:48 +0200 Subject: Auto-archiving In-Reply-To: <7F43A200-346F-4068-B6B6-E9A9197CCA67@kreme.com> References: <444220129.2635.1475733049859@appsuite-dev.open-xchange.com> <7F43A200-346F-4068-B6B6-E9A9197CCA67@kreme.com> Message-ID: Hi List, We use this for archiving, through a cronjob: https://gist.github.com/pkern/3730543 Works pretty well. On Fri, Oct 7, 2016 at 3:13 PM, @lbutlr wrote: > On 05 Oct 2016, at 23:50, Aki Tuomi wrote: > >> On October 6, 2016 at 8:05 AM "@lbutlr" wrote: > >> > >> > >> I'd like to know if there is a way to tell dovecot to > >> > >> 1) move messages older than # days to the Archive folder > >> 2) rebuild the indexes > >> 3) remove any folders that are left with no mail > >> > >> Preferably, I'd like this to be an action I can schedule via crontab or > something to fire off for any users that want it. So, I do not want it to > do this across all users and mailboxes. > > > > Have you tried doveadm move? > > I don't see anything in doveadm move that supports the age of the message. > At least according to doveadm-move (and boy, that was fun to find, since it > is not mentioned anywhere in man doveadm unless you notice the (1) might > indicate a man page!) it only supports dates, which makes simply doing this > in a crontab (or something similarly straightforward) more problematic, > since I would have to generate a date string. > > Ah, wait, I just found the DATE SPECIFICATION section of > doveadm-search-query. > > Nothing quite like having three man pages to get useful information on one > command. > > Thanks for forcing me to look again, though.
> > doveadm move -u jane Archive mailbox INBOX BEFORE 30d > > And then I have to do that for all folders one at a time, yes? > From aki.tuomi at dovecot.fi Fri Oct 7 15:18:09 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Fri, 7 Oct 2016 18:18:09 +0300 (EEST) Subject: Auto-archiving In-Reply-To: <7F43A200-346F-4068-B6B6-E9A9197CCA67@kreme.com> References: <444220129.2635.1475733049859@appsuite-dev.open-xchange.com> <7F43A200-346F-4068-B6B6-E9A9197CCA67@kreme.com> Message-ID: <19414568.577.1475853490931@appsuite-dev.open-xchange.com> > On October 7, 2016 at 4:13 PM "@lbutlr" wrote: > > > On 05 Oct 2016, at 23:50, Aki Tuomi wrote: > >> On October 6, 2016 at 8:05 AM "@lbutlr" wrote: > >> > >> > >> I?d like to know if there is a way to tell dovecot to > >> > >> 1) move messages older than # days to the Archive folder > >> 2) rebuild the indexes > >> 3) remove any folders that are left with no mail > >> > >> Preferably, I?d like this to be a action I an schedule via crontab or something to fire off for any users that want it. So, I do not want it to do this across all users and mailboxes. > > > > Have you tried doveadm move? > > I don?t see anything in doveadm move that supports the age of the message. at least according to doveadm-move (and buy, that was fun o find since it is not mentioned anywhere in man doveadm unless you notice the (1) might indicate a man page!) it only supports dates, which makes simply doing this in a crontab (or something similarly straightforward) more problematic since I would have to generate a date string. > > Ah, wait, I just found the DATE SPECIFICATION section of doveadm-search-query. > > Nothing quite like having three man pages to get useful information on one command. > > Thanks for the forcing me to look again though. > > doveadm move -u jane Archive mailbox INBOX BEFORE 30d > > And then I have to do that for all folders one at a time, yes? doveadm move -u jane Archive ALL BEFORE 30d ? 
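[Editor's note: on the per-folder question above, `doveadm move` takes one source mailbox per invocation, so a small wrapper can walk the folder list. A sketch only; the folder exclusions, the Archive/ prefix, and the 30d cutoff are placeholder assumptions, not from the thread:]

```shell
# Sketch: archive mail older than 30 days for one user, folder by folder.
# The excluded folders and Archive/ layout are assumptions; adjust to taste.
archive_user() {
    user="$1"
    : "${DOVEADM:=doveadm}"    # set DOVEADM=echo for a dry run
    "$DOVEADM" mailbox list -u "$user" |
    while read -r box; do
        case "$box" in
        Archive|Archive/*|Trash|Spam) continue ;;    # never archive these
        esac
        # Mirror the source folder layout under Archive/
        "$DOVEADM" move -u "$user" "Archive/$box" mailbox "$box" BEFORE 30d
    done
}
# e.g. from crontab:  archive_user jane
```

Whether `doveadm move` auto-creates the Archive/ subfolders may depend on the mailbox format and Dovecot version, so this is worth testing on a scratch account first.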
Aki From jkamp at amazon.nl Fri Oct 7 15:59:43 2016 From: jkamp at amazon.nl (=?UTF-8?Q?John_van_der_Kamp?=) Date: Fri, 7 Oct 2016 15:59:43 +0000 Subject: Subscription not immediately reflected Message-ID: <010001579fdf22e0-5a61f58e-71ee-4634-a302-828c62fd5453-000000@email.amazonses.com> Hello, I noticed that somewhere between 2.2.22 and 2.2.25 the workings of subscriptions seem to have changed. In version 2.2.25, when a client subscribes to a folder and then does an LSUB command, it will not see that subscribed folder. If you retry the LSUB command, the change is there. Same with unsubscribes. In version 2.2.22 I did not see this weird behavior. John From bunkertor at tiscali.it Fri Oct 7 16:14:39 2016 From: bunkertor at tiscali.it (bunkertor) Date: Fri, 07 Oct 2016 16:14:39 -0000 Subject: =?utf-8?B?cXVlbHF1ZXMgbm91dmVsbGVzIGluZm9z?= Message-ID: <0000499cec27$91a10466$5bc1cb03$@tiscali.it> Hey, Here is what I have just read, and it is something really new and interesting; you can read more ? Big kisses, bunkertor From leithner at itronic.at Fri Oct 7 20:01:26 2016 From: leithner at itronic.at (Harald Leithner) Date: Fri, 07 Oct 2016 22:01:26 +0200 Subject: latest 2.2.25 Hibernation patch raises 100% cpu usage In-Reply-To: <08e69023987597761261030ef6fd586d@itronic.at> References: <08e69023987597761261030ef6fd586d@itronic.at> Message-ID: <360a28d46b89a79fdcb5293fa4c02b1c@itronic.at> Hi, I use the xi Debian packages; the current 2.2.25 auto 47 build seems to include the commit 351aa01c6e, and I think this commit causes 100% CPU usage.
After some time 1-5 minutes the hibernate process uses 100% and strace floods the screen with: epoll_wait(9, {{EPOLLIN|EPOLLHUP, {u32=2609346112, u64=140044312999488}}, {EPOLLIN|EPOLLHUP, {u32=2609364288, u64=140044313017664}}, {EPOLLIN|EPOLLHUP, {u32=2609380304, u64=140044313033680}}}, 34, 164437) = 3 epoll_wait(9, {{EPOLLIN|EPOLLHUP, {u32=2609346112, u64=140044312999488}}, {EPOLLIN|EPOLLHUP, {u32=2609364288, u64=140044313017664}}, {EPOLLIN|EPOLLHUP, {u32=2609380304, u64=140044313033680}}}, 34, 164437) = 3 epoll_wait(9, {{EPOLLIN|EPOLLHUP, {u32=2609346112, u64=140044312999488}}, {EPOLLIN|EPOLLHUP, {u32=2609364288, u64=140044313017664}}, {EPOLLIN|EPOLLHUP, {u32=2609380304, u64=140044313033680}}}, 34, 164436) = 3 epoll_wait(9, {{EPOLLIN|EPOLLHUP, {u32=2609346112, u64=140044312999488}}, {EPOLLIN|EPOLLHUP, {u32=2609364288, u64=140044313017664}}, {EPOLLIN|EPOLLHUP, {u32=2609380304, u64=140044313033680}}}, 34, 164436) = 3 epoll_wait(9, {{EPOLLIN|EPOLLHUP, {u32=2609346112, u64=140044312999488}}, {EPOLLIN|EPOLLHUP, {u32=2609364288, u64=140044313017664}}, {EPOLLIN|EPOLLHUP, {u32=2609380304, u64=140044313033680}}}, 34, 164436) = 3 epoll_wait(9, {{EPOLLIN|EPOLLHUP, {u32=2609346112, u64=140044312999488}}, {EPOLLIN|EPOLLHUP, {u32=2609364288, u64=140044313017664}}, {EPOLLIN|EPOLLHUP, {u32=2609380304, u64=140044313033680}}}, 34, 164436) = 3 epoll_wait(9, {{EPOLLIN|EPOLLHUP, {u32=2609346112, u64=140044312999488}}, {EPOLLIN|EPOLLHUP, {u32=2609364288, u64=140044313017664}}, {EPOLLIN|EPOLLHUP, {u32=2609380304, u64=140044313033680}}}, 34, 164436) = 3 epoll_wait(9, {{EPOLLIN|EPOLLHUP, {u32=2609346112, u64=140044312999488}}, {EPOLLIN|EPOLLHUP, {u32=2609364288, u64=140044313017664}}, {EPOLLIN|EPOLLHUP, {u32=2609380304, u64=140044313033680}}}, 34, 164436) = 3 epoll_wait(9, {{EPOLLIN|EPOLLHUP, {u32=2609346112, u64=140044312999488}}, {EPOLLIN|EPOLLHUP, {u32=2609364288, u64=140044313017664}}, {EPOLLIN|EPOLLHUP, {u32=2609380304, u64=140044313033680}}}, 34, 164435) = 3 Maybe its a client 
thats incompatible, downgrading to version +46 solve the problem. bye Harald From aki.tuomi at dovecot.fi Fri Oct 7 20:07:45 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Fri, 7 Oct 2016 23:07:45 +0300 (EEST) Subject: latest 2.2.25 Hibernation patch raises 100% cpu usage In-Reply-To: <360a28d46b89a79fdcb5293fa4c02b1c@itronic.at> References: <08e69023987597761261030ef6fd586d@itronic.at> <360a28d46b89a79fdcb5293fa4c02b1c@itronic.at> Message-ID: <1720839447.735.1475870866053@appsuite-dev.open-xchange.com> > On October 7, 2016 at 11:01 PM Harald Leithner wrote: > > > Hi, > > I use the xi Debian packages, the current 2.2.25 auto 47 build seams to > include the commit 351aa01c6e I think this commit cases a 100% cpu > usage. > > After some time 1-5 minutes the hibernate process uses 100% and strace > floods the screen with: > > epoll_wait(9, {{EPOLLIN|EPOLLHUP, {u32=2609346112, > u64=140044312999488}}, {EPOLLIN|EPOLLHUP, {u32=2609364288, > u64=140044313017664}}, {EPOLLIN|EPOLLHUP, {u32=2609380304, > u64=140044313033680}}}, 34, 164437) = 3 > epoll_wait(9, {{EPOLLIN|EPOLLHUP, {u32=2609346112, > u64=140044312999488}}, {EPOLLIN|EPOLLHUP, {u32=2609364288, > u64=140044313017664}}, {EPOLLIN|EPOLLHUP, {u32=2609380304, > u64=140044313033680}}}, 34, 164437) = 3 > epoll_wait(9, {{EPOLLIN|EPOLLHUP, {u32=2609346112, > u64=140044312999488}}, {EPOLLIN|EPOLLHUP, {u32=2609364288, > u64=140044313017664}}, {EPOLLIN|EPOLLHUP, {u32=2609380304, > u64=140044313033680}}}, 34, 164436) = 3 > epoll_wait(9, {{EPOLLIN|EPOLLHUP, {u32=2609346112, > u64=140044312999488}}, {EPOLLIN|EPOLLHUP, {u32=2609364288, > u64=140044313017664}}, {EPOLLIN|EPOLLHUP, {u32=2609380304, > u64=140044313033680}}}, 34, 164436) = 3 > epoll_wait(9, {{EPOLLIN|EPOLLHUP, {u32=2609346112, > u64=140044312999488}}, {EPOLLIN|EPOLLHUP, {u32=2609364288, > u64=140044313017664}}, {EPOLLIN|EPOLLHUP, {u32=2609380304, > u64=140044313033680}}}, 34, 164436) = 3 > epoll_wait(9, {{EPOLLIN|EPOLLHUP, {u32=2609346112, > 
u64=140044312999488}}, {EPOLLIN|EPOLLHUP, {u32=2609364288, > u64=140044313017664}}, {EPOLLIN|EPOLLHUP, {u32=2609380304, > u64=140044313033680}}}, 34, 164436) = 3 > epoll_wait(9, {{EPOLLIN|EPOLLHUP, {u32=2609346112, > u64=140044312999488}}, {EPOLLIN|EPOLLHUP, {u32=2609364288, > u64=140044313017664}}, {EPOLLIN|EPOLLHUP, {u32=2609380304, > u64=140044313033680}}}, 34, 164436) = 3 > epoll_wait(9, {{EPOLLIN|EPOLLHUP, {u32=2609346112, > u64=140044312999488}}, {EPOLLIN|EPOLLHUP, {u32=2609364288, > u64=140044313017664}}, {EPOLLIN|EPOLLHUP, {u32=2609380304, > u64=140044313033680}}}, 34, 164436) = 3 > epoll_wait(9, {{EPOLLIN|EPOLLHUP, {u32=2609346112, > u64=140044312999488}}, {EPOLLIN|EPOLLHUP, {u32=2609364288, > u64=140044313017664}}, {EPOLLIN|EPOLLHUP, {u32=2609380304, > u64=140044313033680}}}, 34, 164435) = 3 > > Maybe its a client thats incompatible, downgrading to version +46 solve > the problem. > > > Harald There is a patch waiting for fixing hibernation. Aki From mkliewe at gmx.de Sat Oct 8 00:51:17 2016 From: mkliewe at gmx.de (Michael Kliewe) Date: Sat, 8 Oct 2016 02:51:17 +0200 Subject: Quota-status service on Director In-Reply-To: References: Message-ID: <99d4432d-dab1-df22-5245-57e1f316afb1@gmx.de> Hello, any news on this topic? I tried it again with Dovecot 2.2.25, but it's still not possible to run the quota-status services on the directors. They try to access the mailbox of the user, which they obviously cannot. I'm not sure why Dovecot tries to open the mailbox, I would have expected just a dict-query (SQL) to check the quota. If the mailbox has to be opened, it has to be done on the correct backend Dovecot of the user. Is there any chance to fix this problem? Or am I doing something wrong here? Kind regards Michael Am 23.02.2015 um 03:09 schrieb Michael Kliewe: > Hello, > > I'm trying to configure the quota-status service, but it seems I'm not successful with my director setup (2.2.9). 
I activate the quota-status service like this on my director server: > > $ cat 91-quota-status.conf > ## > ## Quota-Status configuration. > ## > # Load Module quota-status and listen on TCP/IP Port for connections. > service quota-status { > executable = quota-status -p postfix > inet_listener { > address = 10.0.1.44 > port = 12340 > } > client_limit = 1 > } > # Plugin configuration. > # Return messages for requests by quota status: success, nouser and overquota. > plugin { > quota_status_success = DUNNO > quota_status_nouser = DUNNO > quota_status_overquota = "552 5.2.2 Mailbox is over quota" > } > > After restarting the director service I try to query the quota status service: > > printf "recipient=user at domain.de\nsize=100000\n\n" | nc 10.0.1.44 12340 > > The output is: > > action=DEFER_IF_PERMIT Invalid user settings. Refer to server log for more information. > > In the debug log of the director I see this: > > Feb 23 03:03:09 director01 dovecot: auth: Debug: userdb out: USER 1 user at domain.de mail=mdbox:/mnt/data01/domain.de/user/maildir home=/mnt/data01/domain.de/user proxy=Y master= pass= uid=5000 gid=1 quota_rule=*:storage=60593 quota_rule2=*:messages=100000 > Feb 23 03:03:09 director01 dovecot: quota-status(user at domain.de): Error: user user at domain.de: Initialization failed: Namespace '': mkdir(/mnt/data01/domain.de/user/maildir/mailboxes) failed: Permission denied (euid=5000(vmail) egid=1(daemon) missing +w perm: /mnt, dir owned by 0:0 mode=0755) > > So the quota status service tries to access the mailbox of the user ON THE DIRECTOR. But the director has not mounted the mailboxes of the users, that's what the backend dovecots are for (proxy=Y). So the quota-status query is not proxied to the dovecot backend server I would assume. > > Does that mean I have to start the quota-status service on the dovecot backend servers and access it from the Postfix server directly? Currently the Postfixes can only reach the directors, not the backend servers. 
> > Is it possible to use the quota-status service on the director? > > Thanks for any hints and help > Michael From michael at felt.demon.nl Sun Oct 9 19:48:54 2016 From: michael at felt.demon.nl (Michael Felt) Date: Sun, 9 Oct 2016 21:48:54 +0200 Subject: Pacaging/build issues with AIX and vac (dovecot-2.2.25) Message-ID: <98f43cf6-e19b-0cf8-421d-d7b60bcf60de@felt.demon.nl> Hi. I finally decided it was really time to stop being lazy and really move away from gmail. After I have a server in my basement using power, etc. So I turned on the imap provided - and did not quite cry - it will have to do for now, but imap2 is wanting. A real server yes, but not Linux. (Using linux would require another server AND I would feel I am being lazy again). So, I downloaded dovecot-2.2.25 and tried to build. Configure (messages to stderr) xlc is /usr/vacpp/bin/xlc + CPPFLAGS="-I/opt/include -I/opt/buildaix/include" CFLAGS="-I/opt/include -O2 -qmaxmem=-1 -qarch=pwr5 -I/opt/buildaix/includes ./configure\ --prefix=/opt \ --sysconfdir=/var/dovecot/etc\ --sharedstatedir=/var/dovecot/com\ --localstatedir=/var/dovecot\ --mandir=/usr/share/man\ --infodir=/opt/share/info/dovecot \ > .buildaix/configure.out xlc_r: 1501-216 (W) command option -dM is not recognized - passed to ld xlc_r: 1501-228 (W) input file c not found 1506-297 (S) Unable to open input file null. No such file or directory. ./configure[25617]: rpcgen: not found messages to stderr by make: + make > .buildaix/make.out ./update-version.sh[42]: git: not found. "askpass.c", line 59.18: 1506-359 (I) Automatic variable str contains a const member and is not initialized. It will be initia "guid.c", line 113.18: 1506-359 (I) Automatic variable buf contains a const member and is not initialized. It will be initiali "iostream-rawlog.c", line 28.18: 1506-359 (I) Automatic variable buf contains a const member and is not initialized. 
It will b "istream-base64-decoder.c", line 42.18: 1506-359 (I) Automatic variable buf contains a const member and is not initialized. It "istream-base64-encoder.c", line 47.18: 1506-359 (I) Automatic variable buf contains a const member and is not initialized. It "istream-jsonstr.c", line 70.26: 1506-359 (I) Automatic variable buf contains a const member and is not initialized. It will b "mountpoint.c", line 222.39: 1506-068 (W) Operation between types "char*" and "const char*" is not allowed. "istream-decrypt.c", line 68.18: 1506-359 (I) Automatic variable ephemeral_key contains a const member and is not initialized. "istream-decrypt.c", line 276.18: 1506-359 (I) Automatic variable buf contains a const member and is not initialized. It will "istream-decrypt.c", line 369.26: 1506-359 (I) Automatic variable peer_key contains a const member and is not initialized. It "istream-decrypt.c", line 745.42: 1506-359 (I) Automatic variable db contains a const member and is not initialized. It will b "ostream-encrypt.c", line 135.65: 1506-359 (I) Automatic variable buf contains a const member and is not initialized. It will "ostream-encrypt.c", line 454.18: 1506-359 (I) Automatic variable buf contains a const member and is not initialized. It will "dcrypt-openssl.c", line 787.36: 1506-359 (I) Automatic variable key contains a const member and is not initialized. It will b "dcrypt-openssl.c", line 1099.33: 1506-359 (I) Automatic variable secret contains a const member and is not initialized. It wi "dcrypt-openssl.c", line 1295.18: 1506-359 (I) Automatic variable tmp contains a const member and is not initialized. It will "dcrypt-openssl.c", line 1365.18: 1506-359 (I) Automatic variable saltbuf contains a const member and is not initialized. It w "istream-decrypt.c", line 68.18: 1506-359 (I) Automatic variable ephemeral_key contains a const member and is not initialized. 
"istream-decrypt.c", line 276.18: 1506-359 (I) Automatic variable buf contains a const member and is not initialized. It will "istream-decrypt.c", line 369.26: 1506-359 (I) Automatic variable peer_key contains a const member and is not initialized. It "istream-decrypt.c", line 745.42: 1506-359 (I) Automatic variable db contains a const member and is not initialized. It will b "ostream-encrypt.c", line 135.65: 1506-359 (I) Automatic variable buf contains a const member and is not initialized. It will "ostream-encrypt.c", line 454.18: 1506-359 (I) Automatic variable buf contains a const member and is not initialized. It will "istream-decrypt.c", line 68.18: 1506-359 (I) Automatic variable ephemeral_key contains a const member and is not initialized. "istream-decrypt.c", line 276.18: 1506-359 (I) Automatic variable buf contains a const member and is not initialized. It will "istream-decrypt.c", line 369.26: 1506-359 (I) Automatic variable peer_key contains a const member and is not initialized. It "istream-decrypt.c", line 745.42: 1506-359 (I) Automatic variable db contains a const member and is not initialized. It will b "ostream-encrypt.c", line 135.65: 1506-359 (I) Automatic variable buf contains a const member and is not initialized. It will "ostream-encrypt.c", line 454.18: 1506-359 (I) Automatic variable buf contains a const member and is not initialized. It will "test-http-auth.c", line 27.27: 1506-022 (S) "scheme" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 27.37: 1506-196 (W) Initialization between types "struct http_auth_challenge_test* const" and "char*" "test-http-auth.c", line 28.33: 1506-022 (S) "data" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 28.41: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 29.33: 1506-022 (S) "params" is not a member of "const struct http_auth_challenges_test". 
"test-http-auth.c", line 30.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 30.52: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 30.70: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 30.76: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 33.33: 1506-022 (S) "scheme" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 33.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 43.27: 1506-022 (S) "scheme" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 43.37: 1506-196 (W) Initialization between types "struct http_auth_challenge_test* const" and "char*" "test-http-auth.c", line 44.33: 1506-022 (S) "data" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 44.41: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 45.33: 1506-022 (S) "params" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 46.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 46.52: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 47.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 47.50: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 48.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. 
"test-http-auth.c", line 48.52: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 49.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 49.53: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 50.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 50.49: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 53.33: 1506-022 (S) "scheme" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 53.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 60.27: 1506-022 (S) "scheme" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 60.37: 1506-196 (W) Initialization between types "struct http_auth_challenge_test* const" and "char*" "test-http-auth.c", line 61.33: 1506-022 (S) "data" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 61.41: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 62.33: 1506-022 (S) "params" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 63.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 63.52: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 64.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 64.51: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. 
"test-http-auth.c", line 65.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 65.52: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 66.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 66.49: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 69.33: 1506-022 (S) "scheme" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 69.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 70.33: 1506-022 (S) "data" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 70.41: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 71.33: 1506-022 (S) "params" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 72.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 72.52: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 73.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 73.49: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 76.33: 1506-022 (S) "scheme" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 76.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. 
"test-http-auth.c", line 187.27: 1506-196 (W) Initialization between types "struct http_auth_param* const" and "char*" is not a "test-http-auth.c", line 187.39: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 188.27: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 188.36: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 189.27: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 189.36: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 190.27: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 190.34: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 191.27: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 191.34: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 192.27: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 192.33: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 193.27: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 193.37: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 194.27: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. 
"test-http-auth.c", line 194.39: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 195.27: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 195.37: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 196.27: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 196.33: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. make: 1254-004 The error code from the last command is 1. I am quite willing to dig and dig - but I want to be know if there will be any interest and/or support for dovecot on AIX. Sincerely, Michael From aki.tuomi at dovecot.fi Sun Oct 9 20:51:17 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Sun, 9 Oct 2016 23:51:17 +0300 Subject: Pacaging/build issues with AIX and vac (dovecot-2.2.25) In-Reply-To: <98f43cf6-e19b-0cf8-421d-d7b60bcf60de@felt.demon.nl> References: <98f43cf6-e19b-0cf8-421d-d7b60bcf60de@felt.demon.nl> Message-ID: <5db1bb35-f79e-c814-f935-7484af5c77c8@dovecot.fi> On 09.10.2016 22:48, Michael Felt wrote: > Hi. > > I finally decided it was really time to stop being lazy and really > move away from gmail. After I have a server in my basement using > power, etc. > > So I turned on the imap provided - and did not quite cry - it will > have to do for now, but imap2 is wanting. > > A real server yes, but not Linux. (Using linux would require another > server AND I would feel I am being lazy again). > > So, I downloaded dovecot-2.2.25 and tried to build. 
> > Configure (messages to stderr) > > xlc is /usr/vacpp/bin/xlc > + CPPFLAGS="-I/opt/include -I/opt/buildaix/include" > CFLAGS="-I/opt/include -O2 -qmaxmem=-1 -qarch=pwr5 > -I/opt/buildaix/includes > ./configure\ > --prefix=/opt \ > --sysconfdir=/var/dovecot/etc\ > --sharedstatedir=/var/dovecot/com\ > --localstatedir=/var/dovecot\ > --mandir=/usr/share/man\ > --infodir=/opt/share/info/dovecot \ > > .buildaix/configure.out > xlc_r: 1501-216 (W) command option -dM is not recognized - passed to ld > xlc_r: 1501-228 (W) input file c not found > 1506-297 (S) Unable to open input file null. No such file or directory. > ./configure[25617]: rpcgen: not found > > messages to stderr by make: > > I am quite willing to dig and dig - but I want to know if there > will be any interest and/or support for dovecot on AIX. > > > Sincerely, > > Michael Does the compiler support the C99 standard? Aki From michael at felt.demon.nl Mon Oct 10 00:57:59 2016 From: michael at felt.demon.nl (Michael Felt) Date: Mon, 10 Oct 2016 02:57:59 +0200 Subject: Pacaging/build issues with AIX and vac (dovecot-2.2.25) In-Reply-To: <5db1bb35-f79e-c814-f935-7484af5c77c8@dovecot.fi> References: <98f43cf6-e19b-0cf8-421d-d7b60bcf60de@felt.demon.nl> <5db1bb35-f79e-c814-f935-7484af5c77c8@dovecot.fi> Message-ID: <557db634-938f-bc4b-f723-1d3c377897ce@felt.demon.nl> On 09-Oct-16 22:51, Aki Tuomi wrote: >> >> Michael > > Does the compiler support C99 standard? > > Aki Yes, plus extended features. The key difference with GCC, e.g., is the set of flags passed to the compiler, but autotools generally manages those well. A key difference with the platform (well, one of them) is that it is not GNU, and how shared libraries are built. Again, libtool in particular handles this well.
From aki.tuomi at dovecot.fi Mon Oct 10 04:45:05 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Mon, 10 Oct 2016 07:45:05 +0300 (EEST) Subject: Pacaging/build issues with AIX and vac (dovecot-2.2.25) In-Reply-To: <557db634-938f-bc4b-f723-1d3c377897ce@felt.demon.nl> References: <98f43cf6-e19b-0cf8-421d-d7b60bcf60de@felt.demon.nl> <5db1bb35-f79e-c814-f935-7484af5c77c8@dovecot.fi> <557db634-938f-bc4b-f723-1d3c377897ce@felt.demon.nl> Message-ID: <1708593123.1321.1476074706364@appsuite-dev.open-xchange.com> > On October 10, 2016 at 3:57 AM Michael Felt wrote: > > > On 09-Oct-16 22:51, Aki Tuomi wrote: > >> > >> Michael > > > > Does the compiler support C99 standard? > > > > Aki > > Yes. Plus extended features. Key difference with GCC, e.g., are the > flags to the compiler, but autotools general manages those well. > > Key difference with platform (well, of of) is that it is not GNU, and > how shared libraries are built. Again, libtool in particular, handles > this well. We do already support various non-GNU platforms, but our code does expect C99 conforming compiler these days. We also use autotools and libtool. rpcgen should be available, at least according to http://www.ibm.com/support/knowledgecenter/ssw_aix_61/com.ibm.aix.cmds4/rpcgen.htm Does your build end at some particular point? Aki From aki.tuomi at dovecot.fi Mon Oct 10 06:06:58 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Mon, 10 Oct 2016 09:06:58 +0300 Subject: Quota-status service on Director In-Reply-To: <99d4432d-dab1-df22-5245-57e1f316afb1@gmx.de> References: <99d4432d-dab1-df22-5245-57e1f316afb1@gmx.de> Message-ID: <54e1e631-1830-d992-9432-412fd63567cf@dovecot.fi> Hi! quota-status is not supported in proxy configuration. You should use quota_warning and quota_over_flag scripts instead. Aki On 08.10.2016 03:51, Michael Kliewe wrote: > Hello, > any news on this topic? I tried it again with Dovecot 2.2.25, but it's > still not possible to run the quota-status services on the directors. 
> They try to access the mailbox of the user, which they obviously > cannot. I'm not sure why Dovecot tries to open the mailbox, I would > have expected just a dict-query (SQL) to check the quota. If the > mailbox has to be opened, it has to be done on the correct backend > Dovecot of the user. > Is there any chance to fix this problem? Or am I doing something wrong > here? > Kind regards > Michael > > Am 23.02.2015 um 03:09 schrieb Michael Kliewe: >> Hello, >> >> I'm trying to configure the quota-status service, but it seems I'm >> not successful with my director setup (2.2.9). I activate the >> quota-status service like this on my director server: >> >> $ cat 91-quota-status.conf >> ## >> ## Quota-Status configuration. >> ## >> # Load Module quota-status and listen on TCP/IP Port for connections. >> service quota-status { >> executable = quota-status -p postfix >> inet_listener { >> address = 10.0.1.44 >> port = 12340 >> } >> client_limit = 1 >> } >> # Plugin configuration. >> # Return messages for requests by quota status: success, nouser and >> overquota. >> plugin { >> quota_status_success = DUNNO >> quota_status_nouser = DUNNO >> quota_status_overquota = "552 5.2.2 Mailbox is over quota" >> } >> >> After restarting the director service I try to query the quota status >> service: >> >> printf "recipient=user at domain.de\nsize=100000\n\n" | nc 10.0.1.44 12340 >> >> The output is: >> >> action=DEFER_IF_PERMIT Invalid user settings. Refer to server log for >> more information. 
>> >> In the debug log of the director I see this: >> >> Feb 23 03:03:09 director01 dovecot: auth: Debug: userdb out: USER >> 1 user at domain.de >> mail=mdbox:/mnt/data01/domain.de/user/maildir >> home=/mnt/data01/domain.de/user proxy=Y master= >> pass= uid=5000 gid=1 >> quota_rule=*:storage=60593 quota_rule2=*:messages=100000 >> Feb 23 03:03:09 director01 dovecot: quota-status(user at domain.de): >> Error: user user at domain.de: Initialization failed: Namespace '': >> mkdir(/mnt/data01/domain.de/user/maildir/mailboxes) failed: >> Permission denied (euid=5000(vmail) egid=1(daemon) missing +w perm: >> /mnt, dir owned by 0:0 mode=0755) >> >> So the quota status service tries to access the mailbox of the user >> ON THE DIRECTOR. But the director has not mounted the mailboxes of >> the users, that's what the backend dovecots are for (proxy=Y). So the >> quota-status query is not proxied to the dovecot backend server I >> would assume. >> >> Does that mean I have to start the quota-status service on the >> dovecot backend servers and access it from the Postfix server >> directly? Currently the Postfixes can only reach the directors, not >> the backend servers. >> >> Is it possible to use the quota-status service on the director? >> >> Thanks for any hints and help >> Michael From ximo at openmomo.com Mon Oct 10 08:49:52 2016 From: ximo at openmomo.com (Ximo Mira) Date: Mon, 10 Oct 2016 10:49:52 +0200 (CEST) Subject: problem with quota warning script execution, error 75 In-Reply-To: <1579349671.985724.1476088751359.JavaMail.zimbra@openmomo.com> Message-ID: <1612131597.985801.1476089392334.JavaMail.zimbra@openmomo.com> Hi, Im quite new to dovecot and im trying to run quota warning script with no success. 
Using "quota = count:User quota" and this script: ________________________ #!/bin/sh PERCENT=$1 USER=$2 cat << EOF | /usr/libexec/dovecot/dovecot-lda -d $USER -o "plugin/quota=count:User quota:noenforcing" From: support at company.com To: $USER Subject: Quota alert Quota usage is $PERCENT% Bye EOF ________________________ If I run the script manually from command line it works and message is delivered. If user reaches first configured limit (85%) Im getting this error. Oct 10 10:38:01 auth: Error: userdb(USER at DOMAIN.com): client doesn't have lookup permissions for this user: userdb reply doesn't contain uid (to bypass this check, set: service auth { unix_listener /var/run/dovecot/auth-userdb { mode=0777 } }) Oct 10 10:38:01 lda(USER at DOMAIN.com): Error: user USER at DOMAIN.com: Auth USER lookup failed Oct 10 10:38:01 lda: Fatal: Internal error occurred. Refer to server log for more information. Oct 10 10:38:01 quota-warning: Fatal: master: service(quota-warning): child 24515 returned error 75 Auth is LDAP based. From aki.tuomi at dovecot.fi Mon Oct 10 09:14:01 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Mon, 10 Oct 2016 12:14:01 +0300 Subject: problem with quota warning script execution, error 75 In-Reply-To: <1612131597.985801.1476089392334.JavaMail.zimbra@openmomo.com> References: <1612131597.985801.1476089392334.JavaMail.zimbra@openmomo.com> Message-ID: <55a3cb03-c3bb-658e-a806-084f2ebee26e@dovecot.fi> On 10.10.2016 11:49, Ximo Mira wrote: > Hi, > > Im quite new to dovecot and im trying to run quota warning script with no success. 
Using "quota = count:User quota" and this script: > ________________________ > #!/bin/sh > PERCENT=$1 > USER=$2 > cat << EOF | /usr/libexec/dovecot/dovecot-lda -d $USER -o "plugin/quota=count:User quota:noenforcing" > From: support at company.com > To: $USER > Subject: Quota alert > > Quota usage is $PERCENT% > Bye > > EOF > ________________________ > > If I run the script manually from command line it works and message is delivered. If user reaches first configured limit (85%) Im getting this error. > > Oct 10 10:38:01 auth: Error: userdb(USER at DOMAIN.com): client doesn't have lookup permissions for this user: userdb reply doesn't contain uid (to bypass this check, set: service auth { unix_listener /var/run/dovecot/auth-userdb { mode=0777 } }) > Oct 10 10:38:01 lda(USER at DOMAIN.com): Error: user USER at DOMAIN.com: Auth USER lookup failed > Oct 10 10:38:01 lda: Fatal: Internal error occurred. Refer to server log for more information. > Oct 10 10:38:01 quota-warning: Fatal: master: service(quota-warning): child 24515 returned error 75 > > Auth is LDAP based. Hi, can you run the script by hand so that you do ./script params ; echo $? Aki From ximo at openmomo.com Mon Oct 10 09:21:08 2016 From: ximo at openmomo.com (Ximo Mira) Date: Mon, 10 Oct 2016 11:21:08 +0200 (CEST) Subject: problem with quota warning script execution, error 75 In-Reply-To: <55a3cb03-c3bb-658e-a806-084f2ebee26e@dovecot.fi> References: <1612131597.985801.1476089392334.JavaMail.zimbra@openmomo.com> <55a3cb03-c3bb-658e-a806-084f2ebee26e@dovecot.fi> Message-ID: <2046126021.986220.1476091268290.JavaMail.zimbra@openmomo.com> Like this? [root at server quota]# ./quota-warning.sh 85 existing_mailbox at domain.com ; echo $ $ Got message successfully delivered.
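For reference, the distinction being asked about: in POSIX shells the special parameter `$?` expands to the exit status of the most recent command, while a bare `$` (as in `echo $` above) is just a literal dollar sign, which is why only `$` was printed. A minimal sketch:

```shell
#!/bin/sh
# $? expands to the exit status of the most recent command;
# a bare $ with nothing after it is printed literally.
true
echo "exit status of true: $?"
false
echo "exit status of false: $?"
echo "$"
```

Running this prints the statuses 0 and 1 followed by a lone `$` on the last line.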
----- Mensaje original ----- De: "Aki Tuomi" Para: dovecot at dovecot.org Enviados: Lunes, 10 de Octubre 2016 11:14:01 Asunto: Re: problem with quota warning script execution, error 75 On 10.10.2016 11:49, Ximo Mira wrote: > Hi, > > Im quite new to dovecot and im trying to run quota warning script with no success. Using "quota = count:User quota" and this script: > ________________________ > #!/bin/sh > PERCENT=$1 > USER=$2 > cat << EOF | /usr/libexec/dovecot/dovecot-lda -d $USER -o "plugin/quota=count:User quota:noenforcing" > From: support at company.com > To: $USER > Subject: Quota alert > > Quota usage is $PERCENT% > Bye > > EOF > ________________________ > > If I run the script manually from command line it works and message is delivered. If user reaches first configured limit (85%) Im getting this error. > > Oct 10 10:38:01 auth: Error: userdb(USER at DOMAIN.com): client doesn't have lookup permissions for this user: userdb reply doesn't contain uid (to bypass this check, set: service auth { unix_listener /var/run/dovecot/auth-userdb { mode=0777 } }) > Oct 10 10:38:01 lda(USER at DOMAIN.com): Error: user USER at DOMAIN.com: Auth USER lookup failed > Oct 10 10:38:01 lda: Fatal: Internal error occurred. Refer to server log for more information. > Oct 10 10:38:01 quota-warning: Fatal: master: service(quota-warning): child 24515 returned error 75 > > Auth is LDAP based. Hi can you run the script by hand so that you do ./script params ; echo $? 
Aki From aki.tuomi at dovecot.fi Mon Oct 10 09:37:26 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Mon, 10 Oct 2016 12:37:26 +0300 Subject: problem with quota warning script execution, error 75 In-Reply-To: <2046126021.986220.1476091268290.JavaMail.zimbra@openmomo.com> References: <1612131597.985801.1476089392334.JavaMail.zimbra@openmomo.com> <55a3cb03-c3bb-658e-a806-084f2ebee26e@dovecot.fi> <2046126021.986220.1476091268290.JavaMail.zimbra@openmomo.com> Message-ID: <7379554a-e6af-6cac-c41d-2198cac3df5b@dovecot.fi> No, ./quota-warning.sh 85 existing_mailbox at domain.com ; echo $? the '?' is part of the cmdline. On 10.10.2016 12:21, Ximo Mira wrote: > Like this? > > [root at server quota]# ./quota-warning.sh 85 existing_mailbox at domain.com ; echo $ > $ > > Got message succesfully delivered. > > > ----- Mensaje original ----- > > De: "Aki Tuomi" > Para: dovecot at dovecot.org > Enviados: Lunes, 10 de Octubre 2016 11:14:01 > Asunto: Re: problem with quota warning script execution, error 75 > > > > On 10.10.2016 11:49, Ximo Mira wrote: >> Hi, >> >> Im quite new to dovecot and im trying to run quota warning script with no success. Using "quota = count:User quota" and this script: >> ________________________ >> #!/bin/sh >> PERCENT=$1 >> USER=$2 >> cat << EOF | /usr/libexec/dovecot/dovecot-lda -d $USER -o "plugin/quota=count:User quota:noenforcing" >> From: support at company.com >> To: $USER >> Subject: Quota alert >> >> Quota usage is $PERCENT% >> Bye >> >> EOF >> ________________________ >> >> If I run the script manually from command line it works and message is delivered. If user reaches first configured limit (85%) Im getting this error. 
>> >> Oct 10 10:38:01 auth: Error: userdb(USER at DOMAIN.com): client doesn't have lookup permissions for this user: userdb reply doesn't contain uid (to bypass this check, set: service auth { unix_listener /var/run/dovecot/auth-userdb { mode=0777 } }) >> Oct 10 10:38:01 lda(USER at DOMAIN.com): Error: user USER at DOMAIN.com: Auth USER lookup failed >> Oct 10 10:38:01 lda: Fatal: Internal error occurred. Refer to server log for more information. >> Oct 10 10:38:01 quota-warning: Fatal: master: service(quota-warning): child 24515 returned error 75 >> >> Auth is LDAP based. > Hi > > can you run the script by hand so that you do > ./script params ; echo $? > > Aki From ximo at openmomo.com Mon Oct 10 09:41:43 2016 From: ximo at openmomo.com (Ximo Mira) Date: Mon, 10 Oct 2016 11:41:43 +0200 (CEST) Subject: problem with quota warning script execution, error 75 In-Reply-To: <7379554a-e6af-6cac-c41d-2198cac3df5b@dovecot.fi> References: <1612131597.985801.1476089392334.JavaMail.zimbra@openmomo.com> <55a3cb03-c3bb-658e-a806-084f2ebee26e@dovecot.fi> <2046126021.986220.1476091268290.JavaMail.zimbra@openmomo.com> <7379554a-e6af-6cac-c41d-2198cac3df5b@dovecot.fi> Message-ID: <1779021969.986566.1476092503354.JavaMail.zimbra@openmomo.com> Output is 0 and mail is delivered. [root at server quota]# ./quota-warning.sh 85 existing_mailbox at domain.com ; echo $? 0 ----- Mensaje original ----- De: "Aki Tuomi" Para: dovecot at dovecot.org Enviados: Lunes, 10 de Octubre 2016 11:37:26 Asunto: Re: problem with quota warning script execution, error 75 No, ./quota-warning.sh 85 existing_mailbox at domain.com ; echo $? the '?' is part of the cmdline. On 10.10.2016 12:21, Ximo Mira wrote: > Like this? > > [root at server quota]# ./quota-warning.sh 85 existing_mailbox at domain.com ; echo $ > $ > > Got message succesfully delivered. 
> > > ----- Mensaje original ----- > > De: "Aki Tuomi" > Para: dovecot at dovecot.org > Enviados: Lunes, 10 de Octubre 2016 11:14:01 > Asunto: Re: problem with quota warning script execution, error 75 > > > > On 10.10.2016 11:49, Ximo Mira wrote: >> Hi, >> >> Im quite new to dovecot and im trying to run quota warning script with no success. Using "quota = count:User quota" and this script: >> ________________________ >> #!/bin/sh >> PERCENT=$1 >> USER=$2 >> cat << EOF | /usr/libexec/dovecot/dovecot-lda -d $USER -o "plugin/quota=count:User quota:noenforcing" >> From: support at company.com >> To: $USER >> Subject: Quota alert >> >> Quota usage is $PERCENT% >> Bye >> >> EOF >> ________________________ >> >> If I run the script manually from command line it works and message is delivered. If user reaches first configured limit (85%) Im getting this error. >> >> Oct 10 10:38:01 auth: Error: userdb(USER at DOMAIN.com): client doesn't have lookup permissions for this user: userdb reply doesn't contain uid (to bypass this check, set: service auth { unix_listener /var/run/dovecot/auth-userdb { mode=0777 } }) >> Oct 10 10:38:01 lda(USER at DOMAIN.com): Error: user USER at DOMAIN.com: Auth USER lookup failed >> Oct 10 10:38:01 lda: Fatal: Internal error occurred. Refer to server log for more information. >> Oct 10 10:38:01 quota-warning: Fatal: master: service(quota-warning): child 24515 returned error 75 >> >> Auth is LDAP based. > Hi > > can you run the script by hand so that you do > ./script params ; echo $? 
> > Aki From aki.tuomi at dovecot.fi Mon Oct 10 09:45:41 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Mon, 10 Oct 2016 12:45:41 +0300 Subject: problem with quota warning script execution, error 75 In-Reply-To: <1779021969.986566.1476092503354.JavaMail.zimbra@openmomo.com> References: <1612131597.985801.1476089392334.JavaMail.zimbra@openmomo.com> <55a3cb03-c3bb-658e-a806-084f2ebee26e@dovecot.fi> <2046126021.986220.1476091268290.JavaMail.zimbra@openmomo.com> <7379554a-e6af-6cac-c41d-2198cac3df5b@dovecot.fi> <1779021969.986566.1476092503354.JavaMail.zimbra@openmomo.com> Message-ID: <52bc120f-3a4b-b23d-f23c-ccd564f59b7c@dovecot.fi> You are running LDA directly, from within user's context. You need to let your users access auth-userdb, as explained in the error log entry: service auth { unix_listener /var/run/dovecot/auth-userdb { mode=0777 } } Aki On 10.10.2016 12:41, Ximo Mira wrote: > Output is 0 and mail is delivered. > > [root at server quota]# ./quota-warning.sh 85 existing_mailbox at domain.com ; echo $? > 0 > > ----- Mensaje original ----- > > De: "Aki Tuomi" > Para: dovecot at dovecot.org > Enviados: Lunes, 10 de Octubre 2016 11:37:26 > Asunto: Re: problem with quota warning script execution, error 75 > > No, > > ./quota-warning.sh 85 existing_mailbox at domain.com ; echo $? > > the '?' is part of the cmdline. > > On 10.10.2016 12:21, Ximo Mira wrote: >> Like this? >> >> [root at server quota]# ./quota-warning.sh 85 existing_mailbox at domain.com ; echo $ >> $ >> >> Got message succesfully delivered. >> >> >> ----- Mensaje original ----- >> >> De: "Aki Tuomi" >> Para: dovecot at dovecot.org >> Enviados: Lunes, 10 de Octubre 2016 11:14:01 >> Asunto: Re: problem with quota warning script execution, error 75 >> >> >> >> On 10.10.2016 11:49, Ximo Mira wrote: >>> Hi, >>> >>> Im quite new to dovecot and im trying to run quota warning script with no success. 
Using "quota = count:User quota" and this script: >>> ________________________ >>> #!/bin/sh >>> PERCENT=$1 >>> USER=$2 >>> cat << EOF | /usr/libexec/dovecot/dovecot-lda -d $USER -o "plugin/quota=count:User quota:noenforcing" >>> From: support at company.com >>> To: $USER >>> Subject: Quota alert >>> >>> Quota usage is $PERCENT% >>> Bye >>> >>> EOF >>> ________________________ >>> >>> If I run the script manually from command line it works and message is delivered. If user reaches first configured limit (85%) Im getting this error. >>> >>> Oct 10 10:38:01 auth: Error: userdb(USER at DOMAIN.com): client doesn't have lookup permissions for this user: userdb reply doesn't contain uid (to bypass this check, set: service auth { unix_listener /var/run/dovecot/auth-userdb { mode=0777 } }) >>> Oct 10 10:38:01 lda(USER at DOMAIN.com): Error: user USER at DOMAIN.com: Auth USER lookup failed >>> Oct 10 10:38:01 lda: Fatal: Internal error occurred. Refer to server log for more information. >>> Oct 10 10:38:01 quota-warning: Fatal: master: service(quota-warning): child 24515 returned error 75 >>> >>> Auth is LDAP based. >> Hi >> >> can you run the script by hand so that you do >> ./script params ; echo $? >> >> Aki From michael at felt.demon.nl Mon Oct 10 11:53:36 2016 From: michael at felt.demon.nl (Michael Felt) Date: Mon, 10 Oct 2016 13:53:36 +0200 Subject: Pacaging/build issues with AIX and vac (dovecot-2.2.25) In-Reply-To: <1708593123.1321.1476074706364@appsuite-dev.open-xchange.com> References: <98f43cf6-e19b-0cf8-421d-d7b60bcf60de@felt.demon.nl> <5db1bb35-f79e-c814-f935-7484af5c77c8@dovecot.fi> <557db634-938f-bc4b-f723-1d3c377897ce@felt.demon.nl> <1708593123.1321.1476074706364@appsuite-dev.open-xchange.com> Message-ID: On 10-Oct-16 06:45, Aki Tuomi wrote: > We do already support various non-GNU platforms, but our code does expect C99 conforming compiler these days. We also use autotools and libtool. 
rpcgen should be available, at least according to > http://www.ibm.com/support/knowledgecenter/ssw_aix_61/com.ibm.aix.cmds4/rpcgen.htm > > Does your build end at some particular point? a) found rpcgen - not installed by default (it is included in bos.net.tcp.adt - recognizable for AIX admins). Thanks for the pointer! FYI, although the documentation is AIX 6.1, the program has been around much longer - only the web documentation is non existent. b) yes, it ended at some point (was in first post), but I shall try again with rpcgen installed - see if that goes better. From michael at felt.demon.nl Mon Oct 10 12:00:08 2016 From: michael at felt.demon.nl (Michael Felt) Date: Mon, 10 Oct 2016 14:00:08 +0200 Subject: Pacaging/build issues with AIX and vac (dovecot-2.2.25) In-Reply-To: <1708593123.1321.1476074706364@appsuite-dev.open-xchange.com> References: <98f43cf6-e19b-0cf8-421d-d7b60bcf60de@felt.demon.nl> <5db1bb35-f79e-c814-f935-7484af5c77c8@dovecot.fi> <557db634-938f-bc4b-f723-1d3c377897ce@felt.demon.nl> <1708593123.1321.1476074706364@appsuite-dev.open-xchange.com> Message-ID: <7af99966-88b9-3436-79df-76d5de2f550a@felt.demon.nl> On 10-Oct-16 06:45, Aki Tuomi wrote: > We do already support various non-GNU platforms, but our code does expect C99 conforming compiler these days. We also use autotools and libtool. rpcgen should be available, at least according to > http://www.ibm.com/support/knowledgecenter/ssw_aix_61/com.ibm.aix.cmds4/rpcgen.htm oops - this is in bos.net.nfs.server! 
FYI: root at x064:[/data/prj/aixtools/dovecot/dovecot-2.2.25]lslpp -w /usr/bin/rpcgen File Fileset Type ---------------------------------------------------------------------------- /usr/bin/rpcgen bos.net.nfs.server File From michael at felt.demon.nl Mon Oct 10 12:39:04 2016 From: michael at felt.demon.nl (Michael Felt) Date: Mon, 10 Oct 2016 14:39:04 +0200 Subject: Pacaging/build issues with AIX and vac (dovecot-2.2.25) In-Reply-To: <1708593123.1321.1476074706364@appsuite-dev.open-xchange.com> References: <98f43cf6-e19b-0cf8-421d-d7b60bcf60de@felt.demon.nl> <5db1bb35-f79e-c814-f935-7484af5c77c8@dovecot.fi> <557db634-938f-bc4b-f723-1d3c377897ce@felt.demon.nl> <1708593123.1321.1476074706364@appsuite-dev.open-xchange.com> Message-ID: <3eec8368-0d95-b67d-e7c7-987d3e50bd53@felt.demon.nl> On 10-Oct-16 06:45, Aki Tuomi wrote: > Does your build end at some particular point? See **** DETAILS **** for in depth (I hope enough!) study/report. > > Aki I would guess this is not "c99" way... Making all in lib-http source='test-http-auth.c' object='test-http-auth.o' libtool=no DEPDIR=.deps depmode=xlc /bin/sh ../../depcomp xlc_r -DHAVE_CONFIG_H -I. -I../.. -I../../src/lib -I../../src/lib-test -I../../src/lib-dns -I../../src/lib-ssl-iostream -I../../src/lib-master -I/opt/include -I/opt/buildaix/include -I/opt/include -O2 -qmaxmem=-1 -qarch=pwr5 -I/opt/buildaix/includes -c -o test-http-auth.o test-http-auth.c "test-http-auth.c", line 27.27: 1506-022 (S) "scheme" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 27.37: 1506-196 (W) Initialization between types "struct http_auth_challenge_test* const" and "char*" is not allowed. "test-http-auth.c", line 28.33: 1506-022 (S) "data" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 28.41: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. 
"test-http-auth.c", line 29.33: 1506-022 (S) "params" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 30.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 30.52: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 30.70: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 30.76: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 33.33: 1506-022 (S) "scheme" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 33.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 43.27: 1506-022 (S) "scheme" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 43.37: 1506-196 (W) Initialization between types "struct http_auth_challenge_test* const" and "char*" is not allowed. "test-http-auth.c", line 44.33: 1506-022 (S) "data" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 44.41: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 45.33: 1506-022 (S) "params" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 46.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 46.52: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 47.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 47.50: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. 
"test-http-auth.c", line 48.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 48.52: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 49.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 49.53: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 50.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 50.49: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 53.33: 1506-022 (S) "scheme" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 53.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 60.27: 1506-022 (S) "scheme" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 60.37: 1506-196 (W) Initialization between types "struct http_auth_challenge_test* const" and "char*" is not allowed. "test-http-auth.c", line 61.33: 1506-022 (S) "data" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 61.41: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 62.33: 1506-022 (S) "params" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 63.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 63.52: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 64.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. 
"test-http-auth.c", line 64.51: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 65.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 65.52: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 66.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 66.49: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 69.33: 1506-022 (S) "scheme" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 69.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 70.33: 1506-022 (S) "data" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 70.41: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 71.33: 1506-022 (S) "params" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 72.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 72.52: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 73.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 73.49: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 76.33: 1506-022 (S) "scheme" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 76.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. 
"test-http-auth.c", line 187.27: 1506-196 (W) Initialization between types "struct http_auth_param* const" and "char*" is not allowed. "test-http-auth.c", line 187.39: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 188.27: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 188.36: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 189.27: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 189.36: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 190.27: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 190.34: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 191.27: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 191.34: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 192.27: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 192.33: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 193.27: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 193.37: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 194.27: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. 
"test-http-auth.c", line 194.39: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 195.27: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 195.37: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 196.27: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 196.33: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. make: 1254-004 The error code from the last command is 1. Stop. make: 1254-004 The error code from the last command is 1. **** DETAILS ********* Looking at the first error (I think is "killing") see line 27 through line 30 and the message: "test-http-auth.c", line 27.27: 1506-022 (S) "scheme" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 27.37: 1506-196 (W) Initialization between types "struct http_auth_challenge_test* const" and "char*" is not allowed. "test-http-auth.c", line 28.33: 1506-022 (S) "data" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 28.41: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 29.33: 1506-022 (S) "params" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 30.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 30.52: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 30.70: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 30.76: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. 
+21 /* Valid auth challenges tests */
+22 static const struct http_auth_challenges_test
+23 valid_auth_challenges_tests[] = {
+24         {
+25                 .challenges_in = "Basic realm=\"WallyWorld\"",
+26                 .challenges = (struct http_auth_challenge_test []) {
+27                         { .scheme = "Basic",
+28                           .data = NULL,
+29                           .params = (struct http_auth_param []) {
+30                                 { "realm", "WallyWorld" }, { NULL, NULL }
+31                           }
+32                         },{
+33                           .scheme = NULL
+34                         }
+35                 }
+36         },{

Adding -E to the compile command gives the following extraction from the .i file:

source='test-http-auth.c' object='test-http-auth.o' libtool=no DEPDIR=.deps depmode=xlc /bin/sh ../../depcomp xlc_r -E -DHAVE_CONFIG_H -I. -I../.. -I../../src/lib -I../../src/lib-test -I../../src/lib-dns -I../../src/lib-ssl-iostream -I../../src/lib-master -I/opt/include -I/opt/buildaix/include -I/opt/include -O2 -qmaxmem=-1 -qarch=pwr5 -I/opt/buildaix/includes -c -o test-http-auth.o test-http-auth.c >test-http-auth.i

#line 6 "http-auth.h"
struct http_auth_param;
struct http_auth_challenge;
struct http_auth_credentials;

union array__http_auth_param {
        struct array arr;
        struct http_auth_param const *const *v;
        struct http_auth_param **v_modifiable;
};
union array__http_auth_challenge {
        struct array arr;
        struct http_auth_challenge const *const *v;
        struct http_auth_challenge **v_modifiable;
};

struct http_auth_param {
        const char *name;
        const char *value;
};
struct http_auth_challenge {
        const char *scheme;
        const char *data;
        union array__http_auth_param params;
};
struct http_auth_credentials {
        const char *scheme;
        const char *data;
        union array__http_auth_param params;
};
#line 34
int http_auth_parse_challenges(const unsigned char *data, size_t size, union array__http_auth_challenge *chlngs);
int http_auth_parse_credentials(const unsigned char *data, size_t size, struct http_auth_credentials *crdts);
#line 43

I do not see any "const struct" block. So, a different approach is the -qinfo=all (and divert output to nohup.out!)
source='test-http-auth.c' object='test-http-auth.o' libtool=no DEPDIR=.deps depmode=xlc nohup /bin/sh ../../depcomp xlc_r -E -DHAVE_CONFIG_H -I. -I../.. -I../../src/lib -I../../src/lib-test -I../../src/lib-dns -I../../src/lib-ssl-iostream -I../../src/lib-master -I/opt/include -I/opt/buildaix/include -I/opt/include -O2 -qmaxmem=-1 -qarch=pwr5 -I/opt/buildaix/includes -c -o test-http-auth.o test-http-auth.c >test-http-auth.info "test-http-auth.c", line 26.31: 1506-221 (I) Initializer must be a valid constant expression. "test-http-auth.c", line 26.31: 1506-444 (I) The opening brace is redundant. "test-http-auth.c", line 27.25: 1506-444 (I) The opening brace is redundant. "test-http-auth.c", line 27.27: 1506-022 (S) "scheme" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 27.37: 1506-196 (W) Initialization between types "struct http_auth_challenge_test* const" and "char*" is not allowed. "test-http-auth.c", line 28.33: 1506-022 (S) "data" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 28.41: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 29.33: 1506-022 (S) "params" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 29.43: 1506-221 (I) Initializer must be a valid constant expression. "test-http-auth.c", line 29.43: 1506-444 (I) The opening brace is redundant. "test-http-auth.c", line 30.41: 1506-444 (I) The opening brace is redundant. "test-http-auth.c", line 30.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 30.52: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 30.65: 1506-445 (I) The closing brace is redundant. "test-http-auth.c", line 30.68: 1506-444 (I) The opening brace is redundant. 
"test-http-auth.c", line 30.70: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 30.76: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 30.81: 1506-445 (I) The closing brace is redundant. "test-http-auth.c", line 31.33: 1506-445 (I) The closing brace is redundant. "test-http-auth.c", line 32.25: 1506-445 (I) The closing brace is redundant. "test-http-auth.c", line 32.27: 1506-444 (I) The opening brace is redundant. "test-http-auth.c", line 33.33: 1506-022 (S) "scheme" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 33.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 34.25: 1506-445 (I) The closing brace is redundant. "test-http-auth.c", line 35.17: 1506-445 (I) The closing brace is redundant. "test-http-auth.c", line 38.18: 1506-467 (I) String literals concatenated. "test-http-auth.c", line 39.18: 1506-467 (I) String literals concatenated. "test-http-auth.c", line 40.18: 1506-467 (I) String literals concatenated. "test-http-auth.c", line 41.18: 1506-467 (I) String literals concatenated. "test-http-auth.c", line 42.31: 1506-221 (I) Initializer must be a valid constant expression. "test-http-auth.c", line 42.31: 1506-444 (I) The opening brace is redundant. "test-http-auth.c", line 43.25: 1506-444 (I) The opening brace is redundant. "test-http-auth.c", line 43.27: 1506-022 (S) "scheme" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 43.37: 1506-196 (W) Initialization between types "struct http_auth_challenge_test* const" and "char*" is not allowed. "test-http-auth.c", line 44.33: 1506-022 (S) "data" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 44.41: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. 
"test-http-auth.c", line 45.33: 1506-022 (S) "params" is not a member of "const struct http_auth_challenges_test". "test-http-auth.c", line 45.43: 1506-221 (I) Initializer must be a valid constant expression. "test-http-auth.c", line 45.43: 1506-444 (I) The opening brace is redundant. "test-http-auth.c", line 46.41: 1506-444 (I) The opening brace is redundant. "test-http-auth.c", line 46.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 46.52: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members. "test-http-auth.c", line 46.73: 1506-445 (I) The closing brace is redundant. "test-http-auth.c", line 47.41: 1506-444 (I) The opening brace is redundant. I can send the complete .i and .info files if you need more info to understand what is happening. Michael From stephan at rename-it.nl Mon Oct 10 12:59:35 2016 From: stephan at rename-it.nl (Stephan Bosch) Date: Mon, 10 Oct 2016 14:59:35 +0200 Subject: Pacaging/build issues with AIX and vac (dovecot-2.2.25) In-Reply-To: <3eec8368-0d95-b67d-e7c7-987d3e50bd53@felt.demon.nl> References: <98f43cf6-e19b-0cf8-421d-d7b60bcf60de@felt.demon.nl> <5db1bb35-f79e-c814-f935-7484af5c77c8@dovecot.fi> <557db634-938f-bc4b-f723-1d3c377897ce@felt.demon.nl> <1708593123.1321.1476074706364@appsuite-dev.open-xchange.com> <3eec8368-0d95-b67d-e7c7-987d3e50bd53@felt.demon.nl> Message-ID: <0ee249e7-3807-51e2-9e1c-ca0b7e8f5f11@rename-it.nl> Op 10-10-2016 om 14:39 schreef Michael Felt: > On 10-Oct-16 06:45, Aki Tuomi wrote: >> Does your build end at some particular point? > See **** DETAILS **** for in depth (I hope enough!) study/report. >> >> Aki > > I would guess this is not "c99" way... It seems to fail on a C99 feature called Compound Literal (see http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf, Section 6.5.2.5). 
It should be supported by AIX: https://www.ibm.com/support/knowledgecenter/SSGH3R_13.1.3/com.ibm.xlcpp1313.aix.doc/language_ref/compound_literals.html I have no idea why it would fail here. Regards, Stephan. From michael at felt.demon.nl Mon Oct 10 15:16:23 2016 From: michael at felt.demon.nl (Michael Felt) Date: Mon, 10 Oct 2016 17:16:23 +0200 Subject: Pacaging/build issues with AIX and vac (dovecot-2.2.25) In-Reply-To: <0ee249e7-3807-51e2-9e1c-ca0b7e8f5f11@rename-it.nl> References: <98f43cf6-e19b-0cf8-421d-d7b60bcf60de@felt.demon.nl> <5db1bb35-f79e-c814-f935-7484af5c77c8@dovecot.fi> <557db634-938f-bc4b-f723-1d3c377897ce@felt.demon.nl> <1708593123.1321.1476074706364@appsuite-dev.open-xchange.com> <3eec8368-0d95-b67d-e7c7-987d3e50bd53@felt.demon.nl> <0ee249e7-3807-51e2-9e1c-ca0b7e8f5f11@rename-it.nl> Message-ID: <0ca7561c-2dc3-4df3-255d-d807d4b5e733@felt.demon.nl> On 10/10/2016 14:59, Stephan Bosch wrote: > > > Op 10-10-2016 om 14:39 schreef Michael Felt: >> On 10-Oct-16 06:45, Aki Tuomi wrote: >>> Does your build end at some particular point? >> See **** DETAILS **** for in depth (I hope enough!) study/report. >>> >>> Aki >> >> I would guess this is not "c99" way... > > It seems to fail on a C99 feature called Compound Literal (see > http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf, Section > 6.5.2.5). > > It should be supported by AIX: > > https://www.ibm.com/support/knowledgecenter/SSGH3R_13.1.3/com.ibm.xlcpp1313.aix.doc/language_ref/compound_literals.html > > > I have no idea why it would fail here. > > Regards, > > Stephan. Well, if I had the budget to buy the latest version (version 13 is your doclink) - then maybe it would work for me. I do not have the resources to upgrade from v11. Sad day for me I guess. Or lucky for me that "Compound Literal" is not used much - this is the first time I have run into it. 
From stephan at rename-it.nl Mon Oct 10 15:29:04 2016 From: stephan at rename-it.nl (Stephan Bosch) Date: Mon, 10 Oct 2016 17:29:04 +0200 Subject: Pacaging/build issues with AIX and vac (dovecot-2.2.25) In-Reply-To: <0ca7561c-2dc3-4df3-255d-d807d4b5e733@felt.demon.nl> References: <98f43cf6-e19b-0cf8-421d-d7b60bcf60de@felt.demon.nl> <5db1bb35-f79e-c814-f935-7484af5c77c8@dovecot.fi> <557db634-938f-bc4b-f723-1d3c377897ce@felt.demon.nl> <1708593123.1321.1476074706364@appsuite-dev.open-xchange.com> <3eec8368-0d95-b67d-e7c7-987d3e50bd53@felt.demon.nl> <0ee249e7-3807-51e2-9e1c-ca0b7e8f5f11@rename-it.nl> <0ca7561c-2dc3-4df3-255d-d807d4b5e733@felt.demon.nl> Message-ID: <568a469a-e967-7332-a271-fd4285fbe0cf@rename-it.nl> Op 10-10-2016 om 17:16 schreef Michael Felt: > On 10/10/2016 14:59, Stephan Bosch wrote: >> >> >> Op 10-10-2016 om 14:39 schreef Michael Felt: >>> On 10-Oct-16 06:45, Aki Tuomi wrote: >>>> Does your build end at some particular point? >>> See **** DETAILS **** for in depth (I hope enough!) study/report. >>>> >>>> Aki >>> >>> I would guess this is not "c99" way... >> >> It seems to fail on a C99 feature called Compound Literal (see >> http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf, Section >> 6.5.2.5). >> >> It should be supported by AIX: >> >> https://www.ibm.com/support/knowledgecenter/SSGH3R_13.1.3/com.ibm.xlcpp1313.aix.doc/language_ref/compound_literals.html >> >> >> I have no idea why it would fail here. >> >> Regards, >> >> Stephan. > > Well, if I had the budget to buy the latest version (version 13 is > your doclink) - then maybe it would work for me. I do not have the > resources to upgrade from v11. Sad day for me I guess. > > Or lucky for me that "Compound Literal" is not used much - this is the > first time I have run into it. Well, older versions are supposed to support it too: https://www.ibm.com/support/knowledgecenter/SSGH3R_11.1.0/com.ibm.xlcpp111.aix.doc/language_ref/compound_literals.html Regards, Stephan. 
From michael at felt.demon.nl Mon Oct 10 16:14:29 2016 From: michael at felt.demon.nl (Michael Felt) Date: Mon, 10 Oct 2016 18:14:29 +0200 Subject: Pacaging/build issues with AIX and vac (dovecot-2.2.25) In-Reply-To: <568a469a-e967-7332-a271-fd4285fbe0cf@rename-it.nl> References: <98f43cf6-e19b-0cf8-421d-d7b60bcf60de@felt.demon.nl> <5db1bb35-f79e-c814-f935-7484af5c77c8@dovecot.fi> <557db634-938f-bc4b-f723-1d3c377897ce@felt.demon.nl> <1708593123.1321.1476074706364@appsuite-dev.open-xchange.com> <3eec8368-0d95-b67d-e7c7-987d3e50bd53@felt.demon.nl> <0ee249e7-3807-51e2-9e1c-ca0b7e8f5f11@rename-it.nl> <0ca7561c-2dc3-4df3-255d-d807d4b5e733@felt.demon.nl> <568a469a-e967-7332-a271-fd4285fbe0cf@rename-it.nl> Message-ID: <89564ea4-a1c3-5dec-99d1-3a4fcac4e0d6@felt.demon.nl> On 10/10/2016 17:29, Stephan Bosch wrote: > > > Op 10-10-2016 om 17:16 schreef Michael Felt: >> On 10/10/2016 14:59, Stephan Bosch wrote: >>> >>> >>> Op 10-10-2016 om 14:39 schreef Michael Felt: >>>> On 10-Oct-16 06:45, Aki Tuomi wrote: >>>>> Does your build end at some particular point? >>>> See **** DETAILS **** for in depth (I hope enough!) study/report. >>>>> >>>>> Aki >>>> >>>> I would guess this is not "c99" way... >>> >>> It seems to fail on a C99 feature called Compound Literal (see >>> http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf, Section >>> 6.5.2.5). >>> >>> It should be supported by AIX: >>> >>> https://www.ibm.com/support/knowledgecenter/SSGH3R_13.1.3/com.ibm.xlcpp1313.aix.doc/language_ref/compound_literals.html >>> >>> >>> I have no idea why it would fail here. >>> >>> Regards, >>> >>> Stephan. >> >> Well, if I had the budget to buy the latest version (version 13 is >> your doclink) - then maybe it would work for me. I do not have the >> resources to upgrade from v11. Sad day for me I guess. >> >> Or lucky for me that "Compound Literal" is not used much - this is >> the first time I have run into it. 
> > Well, older versions are supposed to support it too: > > https://www.ibm.com/support/knowledgecenter/SSGH3R_11.1.0/com.ibm.xlcpp111.aix.doc/language_ref/compound_literals.html > > > Regards, > > Stephan. I am trying to work on it. Hard to read until you know what you are looking at. I had already seen that 11.1 also shows the link - I guess it does not like the nested form. And, it looks as if you have too many {} pairs (one too many outside pairs) - the .info report was mentioning they were more than needed and 'skipping' iirc. From michael at felt.demon.nl Mon Oct 10 16:46:26 2016 From: michael at felt.demon.nl (Michael Felt) Date: Mon, 10 Oct 2016 18:46:26 +0200 Subject: Pacaging/build issues with AIX and vac (dovecot-2.2.25) In-Reply-To: <0ee249e7-3807-51e2-9e1c-ca0b7e8f5f11@rename-it.nl> References: <98f43cf6-e19b-0cf8-421d-d7b60bcf60de@felt.demon.nl> <5db1bb35-f79e-c814-f935-7484af5c77c8@dovecot.fi> <557db634-938f-bc4b-f723-1d3c377897ce@felt.demon.nl> <1708593123.1321.1476074706364@appsuite-dev.open-xchange.com> <3eec8368-0d95-b67d-e7c7-987d3e50bd53@felt.demon.nl> <0ee249e7-3807-51e2-9e1c-ca0b7e8f5f11@rename-it.nl> Message-ID: On 10/10/2016 14:59, Stephan Bosch wrote: > It should be supported by AIX: > > https://www.ibm.com/support/knowledgecenter/SSGH3R_13.1.3/com.ibm.xlcpp1313.aix.doc/language_ref/compound_literals.html > > > I have no idea why it would fail here. 
I see it is also in version 11 - so, maybe it is still a syntax issue. This is the doc:

  The following example passes a constant structure variable of type point
  containing two integer members to the function drawline:

      drawline((struct point){6,7});

While the code is:

      .challenges = (struct http_auth_challenge_test []) {
              { .scheme = "Basic",
                .data = NULL,
                .params = (struct http_auth_param []) {
                      { "realm", "WallyWorld" }, { NULL, NULL }
                }
              },{
                .scheme = NULL
              }

The difference I notice is that the code (much prettier, btw) also specifies the struct member .names - and perhaps, in the compound-literal syntax diagram

       .-,----------------.
       V                  |
  >>-(--/type_name/--)--{----/initializer_list/-+--}----------------><

the /initializer_list/ is exclusive of (additional) designators. The messages seem to indicate the parser does not like them being there...

"test-http-auth.c", line 27.27: 1506-022 (S) "scheme" is not a member of "const struct http_auth_challenges_test".
"test-http-auth.c", line 27.37: 1506-196 (W) Initialization between types "struct http_auth_challenge_test* const" and "char*" is not allowed.
"test-http-auth.c", line 28.33: 1506-022 (S) "data" is not a member of "const struct http_auth_challenges_test".
"test-http-auth.c", line 28.41: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members.
"test-http-auth.c", line 29.33: 1506-022 (S) "params" is not a member of "const struct http_auth_challenges_test".
"test-http-auth.c", line 30.43: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members.
"test-http-auth.c", line 30.52: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members.
"test-http-auth.c", line 30.70: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members.
"test-http-auth.c", line 30.76: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members.
To understand/study it, I simplified it to:

 +7 #include "http-auth.h"
 +8
 +9 struct http_auth_challenge_test {
+10         const char *scheme;
+11         const char *data;
+12         struct http_auth_param *params;
+13 };
+14
+15 struct http_auth_challenges_test {
+16         const char *challenges_in;
+17         struct http_auth_challenge_test *challenges;
+18 };
+19
+20 /* Valid auth challenges tests */
+21 static struct http_auth_challenges_test
+22 valid_auth_challenges_tests[] = {
+23         { "Basic realm=\"WallyWorld\"",
+24           "Basic",
+25           NULL,
+26           "realm", "WallyWorld",
+27           NULL, NULL
+28         },{
+29           NULL,
+30           NULL,
+31           NULL, NULL
+32         }
+33 };

(lots of experimenting!) I got it down to these messages:

"test-http-auth.c", line 24.25: 1506-196 (W) Initialization between types "struct http_auth_challenge_test*" and "char*" is not allowed.
"test-http-auth.c", line 25.25: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members.
"test-http-auth.c", line 26.26: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members.
"test-http-auth.c", line 26.35: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members.
"test-http-auth.c", line 27.26: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members.
"test-http-auth.c", line 27.32: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members.
"test-http-auth.c", line 31.25: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members.
"test-http-auth.c", line 31.31: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members.

As 'it' kept complaining about the unnecessary opening { I am thinking that their design does not leave space for nesting arrays of initialization. And I would tend to agree that this is 'lazy'. That does not fix my problem.
Going to look for a - maybe less elegant - but workable (and, if found, hopefully acceptable) work-around.

From michael at felt.demon.nl Mon Oct 10 17:44:50 2016
From: michael at felt.demon.nl (Michael Felt)
Date: Mon, 10 Oct 2016 19:44:50 +0200
Subject: Packaging/build issues with AIX and vac (dovecot-2.2.25)
In-Reply-To: <568a469a-e967-7332-a271-fd4285fbe0cf@rename-it.nl>
References: <98f43cf6-e19b-0cf8-421d-d7b60bcf60de@felt.demon.nl> <5db1bb35-f79e-c814-f935-7484af5c77c8@dovecot.fi> <557db634-938f-bc4b-f723-1d3c377897ce@felt.demon.nl> <1708593123.1321.1476074706364@appsuite-dev.open-xchange.com> <3eec8368-0d95-b67d-e7c7-987d3e50bd53@felt.demon.nl> <0ee249e7-3807-51e2-9e1c-ca0b7e8f5f11@rename-it.nl> <0ca7561c-2dc3-4df3-255d-d807d4b5e733@felt.demon.nl> <568a469a-e967-7332-a271-fd4285fbe0cf@rename-it.nl>
Message-ID: 

On 10/10/2016 17:29, Stephan Bosch wrote:
>
> On 10-10-2016 at 17:16, Michael Felt wrote:
>> On 10/10/2016 14:59, Stephan Bosch wrote:
>>>
>>> On 10-10-2016 at 14:39, Michael Felt wrote:
>>>> On 10-Oct-16 06:45, Aki Tuomi wrote:
>>>>> Does your build end at some particular point?
>>>> See **** DETAILS **** for an in-depth (I hope sufficient!) study/report.
>>>>>
>>>>> Aki
>>>>
>>>> I would guess this is not the "c99" way...
>>>
>>> It seems to fail on a C99 feature called Compound Literal (see
>>> http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf, Section 6.5.2.5).
>>>
>>> It should be supported by AIX:
>>>
>>> https://www.ibm.com/support/knowledgecenter/SSGH3R_13.1.3/com.ibm.xlcpp1313.aix.doc/language_ref/compound_literals.html
>>>
>>> I have no idea why it would fail here.
>>>
>>> Regards,
>>>
>>> Stephan.
>>
>> Well, if I had the budget to buy the latest version (version 13 is your doc link) - then maybe it would work for me. I do not have the resources to upgrade from v11. A sad day for me, I guess.
>>
>> Or lucky for me that "Compound Literal" is not used much - this is the first time I have run into it.
> Well, older versions are supposed to support it too:
>
> https://www.ibm.com/support/knowledgecenter/SSGH3R_11.1.0/com.ibm.xlcpp111.aix.doc/language_ref/compound_literals.html

As I said, or implied - reading this code was new to me; normally I have only seen the C89 way of doing things. I wrote a simple test for myself to come to grips with the expected syntax - nothing nested, but it seems to pass test #1:

+1  typedef struct {
+2      char * p1;
+3      char * p2;
+4  } http_auth_param_t;
+5
+6  http_auth_param_t a[] =
+7      { "a1", "a2",
+8        "b1", "b2"
+9      };
+10
+11 main()
+12 {
+13     http_auth_param_t b[] = {
+14         (http_auth_param_t) { .p1 = "c1" },
+15         (http_auth_param_t) { .p2 = "e2" }
+16     };
+17
+18     printf("%s\n", a[0].p1);
+19     printf("%s\n", b[1].p2);
+20 }

returns:

!cc c99_comp_literal.c; ./a.out
a1
e2

> Regards,
>
> Stephan.

From michael at felt.demon.nl Mon Oct 10 18:20:38 2016
From: michael at felt.demon.nl (Michael Felt)
Date: Mon, 10 Oct 2016 20:20:38 +0200
Subject: Packaging/build issues with AIX and vac (dovecot-2.2.25)
In-Reply-To: 
References: <98f43cf6-e19b-0cf8-421d-d7b60bcf60de@felt.demon.nl> <5db1bb35-f79e-c814-f935-7484af5c77c8@dovecot.fi> <557db634-938f-bc4b-f723-1d3c377897ce@felt.demon.nl> <1708593123.1321.1476074706364@appsuite-dev.open-xchange.com> <3eec8368-0d95-b67d-e7c7-987d3e50bd53@felt.demon.nl> <0ee249e7-3807-51e2-9e1c-ca0b7e8f5f11@rename-it.nl> <0ca7561c-2dc3-4df3-255d-d807d4b5e733@felt.demon.nl> <568a469a-e967-7332-a271-fd4285fbe0cf@rename-it.nl>
Message-ID: <8e0cac6e-ed0b-4852-3fb7-5da7c4e07c35@felt.demon.nl>

On 10/10/2016 19:44, Michael Felt wrote:
> +11 main()
> +12 {
> +13     http_auth_param_t b[] = {
> +14         (http_auth_param_t) { .p1 = "c1" },
> +15         (http_auth_param_t) { .p2 = "e2" }
> +16     };
> +17
> +18     printf("%s\n", a[0].p1);
> +19     printf("%s\n", b[1].p2);
> +20 }

Updated to:

+1  typedef struct {
+2      char * p1;
+3      char * p2;
+4  } http_auth_param_t;
+5
+6  http_auth_param_t a[] =
+7      { "a1", "a2",
+8        "b1", "b2"
+9      };
+10
+11 struct xxx {
+12     char *lbl;
+13     http_auth_param_t a[];
+14 };
+15 struct xxx X = (struct xxx) {
+16     .lbl = "labelX",
+17     .a = {
+18         (http_auth_param_t) { .p1 = "c1" },
+19         (http_auth_param_t) { .p2 = "g2" },
+20         (http_auth_param_t) { }
+21     }
+22 };
+23 main()
+24 {
+25     http_auth_param_t b[] = {
+26         (http_auth_param_t) { .p1 = "c1" },
+27         (http_auth_param_t) { .p2 = "e2" },
+28         (http_auth_param_t) { }
+29     };
+30
+31     printf("%s\n", a[0].p1);
+32     printf("%s\n", b[1].p2);
+33     printf("%s\n", X.a[1].p2);
+34 }

!cc c99_comp_literal.c; ./a.out
a1
e2
g2

The key element seems to be in the struct definition: this works:

+11 struct xxx {
+12     char *lbl;
+13     http_auth_param_t a[];
+14 };

but this does not:

+11 struct xxx {
+12     char *lbl;
+13     http_auth_param_t *a;
+14 };

!cc c99_comp_literal.c; ./a.out
"c99_comp_literal.c", line 18.20: 1506-196 (S) Initialization between types "struct {...}*" and "struct {...}" is not allowed.
"c99_comp_literal.c", line 19.20: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members.
"c99_comp_literal.c", line 20.20: 1506-026 (S) Number of initializers cannot be greater than the number of aggregate members.

My guess is that the reason is that *a could be an array of "random"(ized) pointers each pointing to a single instance, while [] says it will be an array of structs.
From michael at felt.demon.nl Mon Oct 10 19:55:59 2016
From: michael at felt.demon.nl (Michael Felt)
Date: Mon, 10 Oct 2016 21:55:59 +0200
Subject: Packaging/build issues with AIX and vac (dovecot-2.2.25)
In-Reply-To: <0ee249e7-3807-51e2-9e1c-ca0b7e8f5f11@rename-it.nl>
References: <98f43cf6-e19b-0cf8-421d-d7b60bcf60de@felt.demon.nl> <5db1bb35-f79e-c814-f935-7484af5c77c8@dovecot.fi> <557db634-938f-bc4b-f723-1d3c377897ce@felt.demon.nl> <1708593123.1321.1476074706364@appsuite-dev.open-xchange.com> <3eec8368-0d95-b67d-e7c7-987d3e50bd53@felt.demon.nl> <0ee249e7-3807-51e2-9e1c-ca0b7e8f5f11@rename-it.nl>
Message-ID: 

On 10/10/2016 14:59, Stephan Bosch wrote:
> It should be supported by AIX:
>
> https://www.ibm.com/support/knowledgecenter/SSGH3R_13.1.3/com.ibm.xlcpp1313.aix.doc/language_ref/compound_literals.html

OK - it is supported, but "not in the same way as gcc". Reducing it to simplified cases, the no-go is reported as "flexible array member cannot be used as a member of a structure" - line 25:

+23 struct yyy {
+24     char *newLBL;
+25     http_auth_param_t auth[];
+26 };
+27
+28 struct yyy
+29 YYY[] = {
+30     (struct yyy) {
+31         .newLBL = "LBL1"
+32     },
+33     (struct yyy) {
+34         .newLBL = "LBL2"
+35     }
+36 };

!cc c99_comp_literal.c;
"c99_comp_literal.c", line 29.1: 1506-995 (S) An aggregate containing a flexible array member cannot be used as a member of a structure or as an array element.
So, to get it to work with a pointer "inside", the data needs to be initialized more like this (what was line 25 is now line 32):

+11 struct xxx {
+12     char *lbl;
+13     http_auth_param_t a[];
+14 };
+15 struct xxx X1 = (struct xxx) {
+16     .lbl = "labelX",
+17     .a = {
+18         (http_auth_param_t) { .p1 = "c1" },
+19         (http_auth_param_t) { .p2 = "g2" },
+20         (http_auth_param_t) { }
+21     }
+22 };
+23 struct xxx X2 = (struct xxx) {
+24     .lbl = "labelX",
+25     .a = {
+26         (http_auth_param_t) { .p1 = "z1" },
+27         (http_auth_param_t) { }
+28     }
+29 };
+30 struct yyy {
+31     char *newLBL;
+32     http_auth_param_t *auth;
+33 };
+34
+35 struct yyy
+36 YYY[] = {
+37     (struct yyy) {
+38         .newLBL = "LBL1",
+39         .auth = X1.a
+40     },
+41     (struct yyy) {
+42         .newLBL = "LBL2",
+43         .auth = X2.a
+44     },
+45     { }
+46 };

Shall work on a 'patch' asap (which might be in 24+ hours).

Michael

From simeon.ott at onnet.ch Mon Oct 10 22:06:52 2016
From: simeon.ott at onnet.ch (Simeon Ott)
Date: Tue, 11 Oct 2016 00:06:52 +0200
Subject: Hierarchy separator and LAYOUT=FS change
Message-ID: 

Hello,

I stumbled across a 5-year-old post on the dovecot list about changing the dovecot hierarchy separator to enable shared mailboxes (http://www.dovecot.org/list/dovecot/2011-January/056201.html). At the moment I'm stuck in a pretty similar situation. I migrated from courier to dovecot 2 years ago and preserved the dot separator. Because I'm using the e-mail address as the username, the dots for folder separation and the dots in the email addresses get messed up. I do have a pretty small mail server with about 150 accounts.
The Maildir file structure of a typical mail account looks like this:

drwx------ 2 vmail vmail 4096 Oct 10 20:02 cur
drwx------ 5 vmail vmail 4096 Oct  3 07:48 .Daten.Administration
drwx------ 5 vmail vmail 4096 Oct  3 09:51 .Daten.Anfragen, Werbung
drwx------ 5 vmail vmail 4096 Oct  3 08:02 .Daten
drwx------ 5 vmail vmail 4096 Oct  6 09:57 .Daten.Intern
drwx------ 5 vmail vmail 4096 Oct  3 08:03 .Daten.Intern.Fahrzeuge
drwx------ 5 vmail vmail 4096 Oct  6 12:57 .Daten.Intern.Infos, FileMaker etc
drwx------ 5 vmail vmail 4096 Oct  3 09:19 .Daten.Intern.Sonstiges
drwx------ 5 vmail vmail 4096 Oct  3 07:47 .Daten.Kunden
drwx------ 5 vmail vmail 4096 Sep 16 08:29 .Daten.Lieferanten
drwx------ 5 vmail vmail 4096 Oct  3 08:28 .Daten.Marketing
drwx------ 2 vmail vmail 4096 Oct 10 20:02 new
drwx------ 5 vmail vmail 4096 Oct 10 18:00 .Sent
drwx------ 5 vmail vmail 4096 Oct 10 18:00 .Spam
drwx------ 5 vmail vmail 4096 Oct 10 18:00 .Trash

When changing the separator in my inbox namespace, the documentation mentions that the file structure doesn't change. This means I will get the same problems when using shared boxes with email addresses as usernames. I definitely need to change to maildir:~/Maildir:LAYOUT=fs.

When changing to LAYOUT=fs I need to convert all the mailboxes manually, is that correct? Is dsync the way to go? Or is it better to leave the separator, change to a different username scheme (without dots in it), and advise the clients to change their credentials? I know there are people out there who have successfully converted this - but I can't find much information about this subject.
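For reference, a rough sketch of what the converted inbox namespace might look like with LAYOUT=fs - an untested outline only, to be checked against the Dovecot namespace documentation before migrating (with LAYOUT=fs the on-disk hierarchy uses '/', and the IMAP separator is usually switched to '/' to match, which also removes the dot conflict with usernames):

```
mail_location = maildir:~/Maildir:LAYOUT=fs

namespace inbox {
  inbox = yes
  separator = /
  prefix =
}
```

The mailbox { } blocks from the existing doveconf below would carry over unchanged; only the separator, prefix, and mail_location differ.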
doveconf -n:

# 2.1.7: /etc/dovecot/dovecot.conf
# OS: Linux 3.2.0-4-amd64 x86_64 Debian 7.11
auth_mechanisms = plain login
auth_verbose = yes
lda_mailbox_autocreate = yes
lda_mailbox_autosubscribe = yes
listen = *
login_log_format_elements = user=<%u> method=%m rip=%r lip=%l mpid=%e %c
mail_gid = 5000
mail_location = maildir:~/Maildir
mail_plugins = zlib quota acl
mail_uid = 5000
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
    auto = subscribe
    special_use = \Drafts
  }
  mailbox Sent {
    auto = subscribe
    special_use = \Sent
  }
  mailbox "Sent Messages" {
    special_use = \Sent
  }
  mailbox Spam {
    auto = subscribe
    special_use = \Junk
  }
  mailbox Trash {
    auto = subscribe
    special_use = \Trash
  }
  prefix = INBOX.
  separator = .
}
passdb {
  args = /etc/dovecot/dovecot-ldap.conf
  driver = ldap
}
plugin {
  acl = vfile
  acl_shared_dict = file:/var/spool/postfix/virtual/shared-mailboxes
  quota = maildir:User quota
  quota_exceeded_message = 4.2.2 Mailbox full
  quota_rule = *:storage=1G
  quota_rule2 = INBOX.Trash:storage=+100M
  quota_rule3 = INBOX.Spam:ignore
  quota_warning = storage=95%% quota-warning 95 %u
  sieve = ~/.dovecot.sieve
  sieve_before = /var/lib/dovecot/sieve/default.sieve
  sieve_dir = ~/sieve
  sieve_max_actions = 32
  sieve_max_redirects = 4
  sieve_max_script_size = 1M
  sieve_quota_max_scripts = 0
  sieve_quota_max_storage = 0
}
protocols = " imap lmtp sieve pop3"
service auth {
  group = dovecot
  unix_listener /var/spool/postfix/private/auth {
    group = postfix
    mode = 0660
    user = postfix
  }
  unix_listener auth-master {
    group = vmail
    mode = 0660
    user = vmail
  }
  user = dovecot
}
service lmtp {
  unix_listener lmtp {
    mode = 0666
  }
}
service managesieve-login {
  inet_listener sieve {
    port = 4190
  }
  inet_listener sieve_deprecated {
    port = 2000
  }
  process_min_avail = 1
  service_count = 1
  vsz_limit = 64 M
}
ssl_cert = -chain.crt
ssl_cipher_list = ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA:AES256-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-RSA-CAMELLIA256-SHA:CAMELLIA128-SHA:CAMELLIA256-SHA:ECDHE-RSA-DES-CBC3-SHA:DES-CBC3-SHA:!SSLv2
ssl_key = }
protocol imap {
  mail_plugins = zlib quota acl imap_quota imap_acl
}
protocol sieve {
  info_log_path = /var/log/sieve.log
  log_path = /var/log/sieve.log
  mail_max_userip_connections = 10
  managesieve_implementation_string = Dovecot Pigeonhole
  managesieve_logout_format = bytes=%i/%o
  managesieve_max_compile_errors = 5
  managesieve_max_line_length = 65536
}

… parts of the ldap config:

user_attrs = homeDirectory=home=/var/spool/postfix/virtual/%$,uidNumber=uid,gidNumber=gid,quota=quota_rule=*:bytes=%$
user_filter = (&(objectClass=CourierMailAccount)(mail=%u))

… my shared configuration is currently commented out:

# namespace {
#   type = shared
#   separator = .
#   prefix = shared.%%u.
#   location = maildir:%h/Maildir:INDEX=~/Maildir/shared/%%u
#   subscriptions = yes
#   list = children
#}

thanks in advance for any help

Sincerely,
Simeon - onnet.ch

From juha.koho at trineco.fi Tue Oct 11 07:13:10 2016
From: juha.koho at trineco.fi (Juha Koho)
Date: Tue, 11 Oct 2016 09:13:10 +0200
Subject: Problems with GSSAPI and LDAP
Message-ID: <43b4213568d7ca31dca5452ba9020f09@trineco.fi>

Hello,

I have a Dovecot 2.2.25 set up with OpenLDAP back end. I was trying to set up a GSSAPI Kerberos authentication with the LDAP server but with little success.
Seems no matter what I try I end up with the following error message:

dovecot: auth: Error: LDAP: binding failed (dn (imap/host.example.com at EXAMPLE.COM)): Local error, SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (No Kerberos credentials available (default cache: FILE:/tmp/dovecot.krb5.ccache))

I have set the import_environment in dovecot.conf:

import_environment = TZ CORE_OUTOFMEM CORE_ERROR LISTEN_PID LISTEN_FDS KRB5CCNAME=FILE:/tmp/dovecot.krb5.ccache

And these in the LDAP configuration:

dn = imap/host.example.com at EXAMPLE.COM
sasl_bind = yes
sasl_mech = gssapi
sasl_realm = EXAMPLE.COM
sasl_authz_id = imap/host.example.com at EXAMPLE.COM

I have tried with different values in dn and sasl_authz_id, and also leaving them out completely, but I always end up with the error message above. Using a simple bind without GSSAPI works just fine.

The credentials cache file exists and is valid for the principal imap/host.example.com at EXAMPLE.COM. The file is owned by the dovecot user, so it shouldn't be a permission problem either.

GSSAPI in OpenLDAP works, but I suppose it is irrelevant here since the connection attempt never reaches the LDAP server due to the error. I also have a similar setup for Postfix and it works fine.

Any ideas what to try next?

Best regards,
Juha

From aki.tuomi at dovecot.fi Tue Oct 11 07:18:32 2016
From: aki.tuomi at dovecot.fi (Aki Tuomi)
Date: Tue, 11 Oct 2016 10:18:32 +0300
Subject: Problems with GSSAPI and LDAP
In-Reply-To: <43b4213568d7ca31dca5452ba9020f09@trineco.fi>
References: <43b4213568d7ca31dca5452ba9020f09@trineco.fi>
Message-ID: 

On 11.10.2016 10:13, Juha Koho wrote:
> Hello,
>
> I have a Dovecot 2.2.25 set up with OpenLDAP back end. I was trying to
> set up a GSSAPI Kerberos authentication with the LDAP server but with
> little success.
Seems no matter what I try I end up with the following > error message: > > dovecot: auth: Error: LDAP: binding failed (dn > (imap/host.example.com at EXAMPLE.COM)): Local error, SASL(-1): generic > failure: GSSAPI Error: Unspecified GSS failure. Minor code may > provide more information (No Kerberos credentials available (default > cache: FILE:/tmp/dovecot.krb5.ccache)) > > I have set the import_environment in dovecot.conf: > > import_environment = TZ CORE_OUTOFMEM CORE_ERROR LISTEN_PID LISTEN_FDS > KRB5CCNAME=FILE:/tmp/dovecot.krb5.ccache > > And these in LDAP configuration: > > dn = imap/host.example.com at EXAMPLE.COM > sasl_bind = yes > sasl_mech = gssapi > sasl_realm = EXAMPLE.COM > sasl_authz_id = imap/host.example.com at EXAMPLE.COM > > I have tried with different values in dn and sasl_authz_id and also > leaving them out completely but I always end up with the error message > above. Using simple bind without GSSAPI works just fine. > > The credentials cache file exists and is valid for the principal > imap/host.example.com at EXAMPLE.COM. The file is owned by dovecot user > so it shouldn't be a permission problem either. > > GSSAPI in OpenLDAP works but I suppose it is irrelevant here since the > connection attempt never reaches the LDAP server due to the error. I > also have similar setup for Postfix and it works fine. > > Any ideas what to try next? > > Best regards, > Juha Can you provide klist output for the cache file? Also, it should be readable by dovenull user, or whatever is configured as default_login_user. 
Aki

From ximo at openmomo.com Tue Oct 11 07:37:44 2016
From: ximo at openmomo.com (Ximo Mira)
Date: Tue, 11 Oct 2016 09:37:44 +0200 (CEST)
Subject: problem with quota warning script execution, error 75
In-Reply-To: <52bc120f-3a4b-b23d-f23c-ccd564f59b7c@dovecot.fi>
References: <1612131597.985801.1476089392334.JavaMail.zimbra@openmomo.com> <55a3cb03-c3bb-658e-a806-084f2ebee26e@dovecot.fi> <2046126021.986220.1476091268290.JavaMail.zimbra@openmomo.com> <7379554a-e6af-6cac-c41d-2198cac3df5b@dovecot.fi> <1779021969.986566.1476092503354.JavaMail.zimbra@openmomo.com> <52bc120f-3a4b-b23d-f23c-ccd564f59b7c@dovecot.fi>
Message-ID: <520690746.999627.1476171464939.JavaMail.zimbra@openmomo.com>

It's working now. I thought I had taken that step before with no success, but it looks like I did it wrong. Thanks a lot.

----- Original Message -----
From: "Aki Tuomi"
To: dovecot at dovecot.org
Sent: Monday, October 10, 2016 11:45:41
Subject: Re: problem with quota warning script execution, error 75

You are running LDA directly, from within the user's context. You need to let your users access auth-userdb, as explained in the error log entry:

service auth {
  unix_listener /var/run/dovecot/auth-userdb {
    mode=0777
  }
}

Aki

On 10.10.2016 12:41, Ximo Mira wrote:
> Output is 0 and mail is delivered.
>
> [root at server quota]# ./quota-warning.sh 85 existing_mailbox at domain.com ; echo $?
> 0
>
> ----- Original Message -----
> From: "Aki Tuomi"
> To: dovecot at dovecot.org
> Sent: Monday, October 10, 2016 11:37:26
> Subject: Re: problem with quota warning script execution, error 75
>
> No,
>
> ./quota-warning.sh 85 existing_mailbox at domain.com ; echo $?
>
> the '?' is part of the cmdline.
>
> On 10.10.2016 12:21, Ximo Mira wrote:
>> Like this?
>>
>> [root at server quota]# ./quota-warning.sh 85 existing_mailbox at domain.com ; echo $
>> $
>>
>> Got message successfully delivered.
>>
>> ----- Original Message -----
>> From: "Aki Tuomi"
>> To: dovecot at dovecot.org
>> Sent: Monday, October 10, 2016 11:14:01
>> Subject: Re: problem with quota warning script execution, error 75
>>
>> On 10.10.2016 11:49, Ximo Mira wrote:
>>> Hi,
>>>
>>> I'm quite new to dovecot and I'm trying to run the quota warning script with no success. Using "quota = count:User quota" and this script:
>>> ________________________
>>> #!/bin/sh
>>> PERCENT=$1
>>> USER=$2
>>> cat << EOF | /usr/libexec/dovecot/dovecot-lda -d $USER -o "plugin/quota=count:User quota:noenforcing"
>>> From: support at company.com
>>> To: $USER
>>> Subject: Quota alert
>>>
>>> Quota usage is $PERCENT%
>>> Bye
>>>
>>> EOF
>>> ________________________
>>>
>>> If I run the script manually from the command line it works and the message is delivered. If a user reaches the first configured limit (85%) I'm getting this error:
>>>
>>> Oct 10 10:38:01 auth: Error: userdb(USER at DOMAIN.com): client doesn't have lookup permissions for this user: userdb reply doesn't contain uid (to bypass this check, set: service auth { unix_listener /var/run/dovecot/auth-userdb { mode=0777 } })
>>> Oct 10 10:38:01 lda(USER at DOMAIN.com): Error: user USER at DOMAIN.com: Auth USER lookup failed
>>> Oct 10 10:38:01 lda: Fatal: Internal error occurred. Refer to server log for more information.
>>> Oct 10 10:38:01 quota-warning: Fatal: master: service(quota-warning): child 24515 returned error 75
>>>
>>> Auth is LDAP based.
>>
>> Hi
>>
>> can you run the script by hand so that you do
>> ./script params ; echo $?
>> >> Aki From juha.koho at trineco.fi Tue Oct 11 07:43:59 2016 From: juha.koho at trineco.fi (Juha Koho) Date: Tue, 11 Oct 2016 09:43:59 +0200 Subject: Problems with GSSAPI and LDAP In-Reply-To: References: <43b4213568d7ca31dca5452ba9020f09@trineco.fi> Message-ID: <35e251782e3bca91e4fce84a2804a59c@trineco.fi> On 2016-10-11 09:18, Aki Tuomi wrote: > On 11.10.2016 10:13, Juha Koho wrote: >> Hello, >> >> I have a Dovecot 2.2.25 set up with OpenLDAP back end. I was trying to >> set up a GSSAPI Kerberos authentication with the LDAP server but with >> little success. Seems no matter what I try I end up with the following >> error message: >> >> dovecot: auth: Error: LDAP: binding failed (dn >> (imap/host.example.com at EXAMPLE.COM)): Local error, SASL(-1): generic >> failure: GSSAPI Error: Unspecified GSS failure. Minor code may >> provide more information (No Kerberos credentials available (default >> cache: FILE:/tmp/dovecot.krb5.ccache)) >> >> I have set the import_environment in dovecot.conf: >> >> import_environment = TZ CORE_OUTOFMEM CORE_ERROR LISTEN_PID LISTEN_FDS >> KRB5CCNAME=FILE:/tmp/dovecot.krb5.ccache >> >> And these in LDAP configuration: >> >> dn = imap/host.example.com at EXAMPLE.COM >> sasl_bind = yes >> sasl_mech = gssapi >> sasl_realm = EXAMPLE.COM >> sasl_authz_id = imap/host.example.com at EXAMPLE.COM >> >> I have tried with different values in dn and sasl_authz_id and also >> leaving them out completely but I always end up with the error message >> above. Using simple bind without GSSAPI works just fine. >> >> The credentials cache file exists and is valid for the principal >> imap/host.example.com at EXAMPLE.COM. The file is owned by dovecot user >> so it shouldn't be a permission problem either. >> >> GSSAPI in OpenLDAP works but I suppose it is irrelevant here since the >> connection attempt never reaches the LDAP server due to the error. I >> also have similar setup for Postfix and it works fine. >> >> Any ideas what to try next? 
>>
>> Best regards,
>> Juha
>
> Can you provide klist output for the cache file? Also, it should be
> readable by dovenull user, or whatever is configured as
> default_login_user.

Here's the klist output of the cache file:
--
Ticket cache: FILE:/tmp/dovecot.krb5.ccache
Default principal: imap/host.example.com at EXAMPLE.COM

Valid starting       Expires              Service principal
10/11/2016 09:26:25  10/11/2016 21:26:25  krbtgt/EXAMPLE.COM at EXAMPLE.COM
	renew until 10/12/2016 09:26:25
---

I didn't know that dovenull must also have access to the cache, but I also tried setting 0644 permissions on the cache file, with no luck. So permissions shouldn't be the issue...

Juha

From aki.tuomi at dovecot.fi Tue Oct 11 08:00:44 2016
From: aki.tuomi at dovecot.fi (Aki Tuomi)
Date: Tue, 11 Oct 2016 11:00:44 +0300
Subject: Problems with GSSAPI and LDAP
In-Reply-To: <35e251782e3bca91e4fce84a2804a59c@trineco.fi>
References: <43b4213568d7ca31dca5452ba9020f09@trineco.fi> <35e251782e3bca91e4fce84a2804a59c@trineco.fi>
Message-ID: <88ce680f-e349-1424-d617-24ad0621fac7@dovecot.fi>

On 11.10.2016 10:43, Juha Koho wrote:
>
> On 2016-10-11 09:18, Aki Tuomi wrote:
>> On 11.10.2016 10:13, Juha Koho wrote:
>>> Hello,
>>>
>>> I have a Dovecot 2.2.25 set up with OpenLDAP back end. I was trying to
>>> set up a GSSAPI Kerberos authentication with the LDAP server but with
>>> little success. Seems no matter what I try I end up with the following
>>> error message:
>>>
>>> dovecot: auth: Error: LDAP: binding failed (dn
>>> (imap/host.example.com at EXAMPLE.COM)): Local error, SASL(-1): generic
>>> failure: GSSAPI Error: Unspecified GSS failure.
Minor code may >>> provide more information (No Kerberos credentials available (default >>> cache: FILE:/tmp/dovecot.krb5.ccache)) >>> >>> I have set the import_environment in dovecot.conf: >>> >>> import_environment = TZ CORE_OUTOFMEM CORE_ERROR LISTEN_PID LISTEN_FDS >>> KRB5CCNAME=FILE:/tmp/dovecot.krb5.ccache >>> >>> And these in LDAP configuration: >>> >>> dn = imap/host.example.com at EXAMPLE.COM >>> sasl_bind = yes >>> sasl_mech = gssapi >>> sasl_realm = EXAMPLE.COM >>> sasl_authz_id = imap/host.example.com at EXAMPLE.COM >>> >>> I have tried with different values in dn and sasl_authz_id and also >>> leaving them out completely but I always end up with the error message >>> above. Using simple bind without GSSAPI works just fine. >>> >>> The credentials cache file exists and is valid for the principal >>> imap/host.example.com at EXAMPLE.COM. The file is owned by dovecot user >>> so it shouldn't be a permission problem either. >>> >>> GSSAPI in OpenLDAP works but I suppose it is irrelevant here since the >>> connection attempt never reaches the LDAP server due to the error. I >>> also have similar setup for Postfix and it works fine. >>> >>> Any ideas what to try next? >>> >>> Best regards, >>> Juha >> >> Can you provide klist output for the cache file? Also, it should be >> readable by dovenull user, or whatever is configured as >> default_login_user. > > > Here's the klist output of the cache file: > -- > Ticket cache: FILE:/tmp/dovecot.krb5.ccache > Default principal: imap/host.example.com at EXAMPLE.COM > > Valid starting Expires Service principal > 10/11/2016 09:26:25 10/11/2016 21:26:25 krbtgt/EXAMPLE.COM at EXAMPLE.COM > renew until 10/12/2016 09:26:25 > --- > > That I didn't know that also dovenull must have access to the cache > but I tried also setting 0644 permissions to the cache file with no > luck. So permissions shouldn't be the issue... > > Juha Your ccache has no ticket for imap/host.example.com at EXAMPLE.COM please use kinit to acquire one. 
Aki From juha.koho at trineco.fi Tue Oct 11 08:56:24 2016 From: juha.koho at trineco.fi (Juha Koho) Date: Tue, 11 Oct 2016 10:56:24 +0200 Subject: Problems with GSSAPI and LDAP In-Reply-To: <88ce680f-e349-1424-d617-24ad0621fac7@dovecot.fi> References: <43b4213568d7ca31dca5452ba9020f09@trineco.fi> <35e251782e3bca91e4fce84a2804a59c@trineco.fi> <88ce680f-e349-1424-d617-24ad0621fac7@dovecot.fi> Message-ID: On 2016-10-11 10:00, Aki Tuomi wrote: > On 11.10.2016 10:43, Juha Koho wrote: >> >> On 2016-10-11 09:18, Aki Tuomi wrote: >>> On 11.10.2016 10:13, Juha Koho wrote: >>>> Hello, >>>> >>>> I have a Dovecot 2.2.25 set up with OpenLDAP back end. I was trying >>>> to >>>> set up a GSSAPI Kerberos authentication with the LDAP server but >>>> with >>>> little success. Seems no matter what I try I end up with the >>>> following >>>> error message: >>>> >>>> dovecot: auth: Error: LDAP: binding failed (dn >>>> (imap/host.example.com at EXAMPLE.COM)): Local error, SASL(-1): generic >>>> failure: GSSAPI Error: Unspecified GSS failure. Minor code may >>>> provide more information (No Kerberos credentials available (default >>>> cache: FILE:/tmp/dovecot.krb5.ccache)) >>>> >>>> I have set the import_environment in dovecot.conf: >>>> >>>> import_environment = TZ CORE_OUTOFMEM CORE_ERROR LISTEN_PID >>>> LISTEN_FDS >>>> KRB5CCNAME=FILE:/tmp/dovecot.krb5.ccache >>>> >>>> And these in LDAP configuration: >>>> >>>> dn = imap/host.example.com at EXAMPLE.COM >>>> sasl_bind = yes >>>> sasl_mech = gssapi >>>> sasl_realm = EXAMPLE.COM >>>> sasl_authz_id = imap/host.example.com at EXAMPLE.COM >>>> >>>> I have tried with different values in dn and sasl_authz_id and also >>>> leaving them out completely but I always end up with the error >>>> message >>>> above. Using simple bind without GSSAPI works just fine. >>>> >>>> The credentials cache file exists and is valid for the principal >>>> imap/host.example.com at EXAMPLE.COM. 
The file is owned by dovecot user >>>> so it shouldn't be a permission problem either. >>>> >>>> GSSAPI in OpenLDAP works but I suppose it is irrelevant here since >>>> the >>>> connection attempt never reaches the LDAP server due to the error. I >>>> also have similar setup for Postfix and it works fine. >>>> >>>> Any ideas what to try next? >>>> >>>> Best regards, >>>> Juha >>> >>> Can you provide klist output for the cache file? Also, it should be >>> readable by dovenull user, or whatever is configured as >>> default_login_user. >> >> >> Here's the klist output of the cache file: >> -- >> Ticket cache: FILE:/tmp/dovecot.krb5.ccache >> Default principal: imap/host.example.com at EXAMPLE.COM >> >> Valid starting Expires Service principal >> 10/11/2016 09:26:25 10/11/2016 21:26:25 >> krbtgt/EXAMPLE.COM at EXAMPLE.COM >> renew until 10/12/2016 09:26:25 >> --- >> >> That I didn't know that also dovenull must have access to the cache >> but I tried also setting 0644 permissions to the cache file with no >> luck. So permissions shouldn't be the issue... >> >> Juha > > Your ccache has no ticket for imap/host.example.com at EXAMPLE.COM > > please use kinit to acquire one. Now I'm confused. 
The cache file is created by kinit using the command: sudo -u dovenull kinit -c FILE:/tmp/dovecot.krb5.ccache -k -t /path/to/keytab imap/host.example.com After that: $ sudo -u dovenull klist /tmp/dovecot.krb5.ccache Ticket cache: FILE:/tmp/dovecot.krb5.ccache Default principal: imap/host.example.com at EXAMPLE.COM Valid starting Expires Service principal 10/11/2016 10:47:47 10/11/2016 22:47:47 krbtgt/EXAMPLE.COM at EXAMPLE.COM renew until 10/12/2016 10:47:47 Also, I can use the cache file with ldapsearch just fine by running the following: sudo -u dovenull KRB5CCNAME=FILE:/tmp/dovecot.krb5.ccache ldapsearch -Y GSSAPI -ZZ -H ldap://ldap.example.com/ -b dc=example,dc=com After the ldapsearch has succeeded the klist output is the following: $ sudo -u dovenull klist /tmp/dovecot.krb5.ccache Ticket cache: FILE:/tmp/dovecot.krb5.ccache Default principal: imap/host.example.com at EXAMPLE.COM Valid starting Expires Service principal 10/11/2016 10:47:47 10/11/2016 22:47:47 krbtgt/EXAMPLE.COM at EXAMPLE.COM renew until 10/12/2016 10:47:47 10/11/2016 10:49:32 10/11/2016 22:47:47 ldap/ldap.example.com at EXAMPLE.COM renew until 10/12/2016 10:47:47 Which is what I expected. Isn't this basically what dovecot does (or should do) or did I miss something? Juha From aki.tuomi at dovecot.fi Tue Oct 11 09:03:30 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Tue, 11 Oct 2016 12:03:30 +0300 Subject: Problems with GSSAPI and LDAP In-Reply-To: References: <43b4213568d7ca31dca5452ba9020f09@trineco.fi> <35e251782e3bca91e4fce84a2804a59c@trineco.fi> <88ce680f-e349-1424-d617-24ad0621fac7@dovecot.fi> Message-ID: <0c57f0df-dcc3-c19a-b528-220f8adfe4c1@dovecot.fi> On 11.10.2016 11:56, Juha Koho wrote: > > On 2016-10-11 10:00, Aki Tuomi wrote: >> On 11.10.2016 10:43, Juha Koho wrote: >>> >>> On 2016-10-11 09:18, Aki Tuomi wrote: >>>> On 11.10.2016 10:13, Juha Koho wrote: >>>>> Hello, >>>>> >>>>> I have a Dovecot 2.2.25 set up with OpenLDAP back end. 
I was >>>>> trying to >>>>> set up a GSSAPI Kerberos authentication with the LDAP server but with >>>>> little success. Seems no matter what I try I end up with the >>>>> following >>>>> error message: >>>>> >>>>> dovecot: auth: Error: LDAP: binding failed (dn >>>>> (imap/host.example.com at EXAMPLE.COM)): Local error, SASL(-1): generic >>>>> failure: GSSAPI Error: Unspecified GSS failure. Minor code may >>>>> provide more information (No Kerberos credentials available (default >>>>> cache: FILE:/tmp/dovecot.krb5.ccache)) >>>>> >>>>> I have set the import_environment in dovecot.conf: >>>>> >>>>> import_environment = TZ CORE_OUTOFMEM CORE_ERROR LISTEN_PID >>>>> LISTEN_FDS >>>>> KRB5CCNAME=FILE:/tmp/dovecot.krb5.ccache >>>>> >>>>> And these in LDAP configuration: >>>>> >>>>> dn = imap/host.example.com at EXAMPLE.COM >>>>> sasl_bind = yes >>>>> sasl_mech = gssapi >>>>> sasl_realm = EXAMPLE.COM >>>>> sasl_authz_id = imap/host.example.com at EXAMPLE.COM >>>>> >>>>> I have tried with different values in dn and sasl_authz_id and also >>>>> leaving them out completely but I always end up with the error >>>>> message >>>>> above. Using simple bind without GSSAPI works just fine. >>>>> >>>>> The credentials cache file exists and is valid for the principal >>>>> imap/host.example.com at EXAMPLE.COM. The file is owned by dovecot user >>>>> so it shouldn't be a permission problem either. >>>>> >>>>> GSSAPI in OpenLDAP works but I suppose it is irrelevant here since >>>>> the >>>>> connection attempt never reaches the LDAP server due to the error. I >>>>> also have similar setup for Postfix and it works fine. >>>>> >>>>> Any ideas what to try next? >>>>> >>>>> Best regards, >>>>> Juha >>>> >>>> Can you provide klist output for the cache file? Also, it should be >>>> readable by dovenull user, or whatever is configured as >>>> default_login_user. 
>>> >>> >>> Here's the klist output of the cache file: >>> -- >>> Ticket cache: FILE:/tmp/dovecot.krb5.ccache >>> Default principal: imap/host.example.com at EXAMPLE.COM >>> >>> Valid starting Expires Service principal >>> 10/11/2016 09:26:25 10/11/2016 21:26:25 >>> krbtgt/EXAMPLE.COM at EXAMPLE.COM >>> renew until 10/12/2016 09:26:25 >>> --- >>> >>> That I didn't know that also dovenull must have access to the cache >>> but I tried also setting 0644 permissions to the cache file with no >>> luck. So permissions shouldn't be the issue... >>> >>> Juha >> >> Your ccache has no ticket for imap/host.example.com at EXAMPLE.COM >> >> please use kinit to acquire one. > > > Now I'm confused. The cache file is created by kinit using the command: > > sudo -u dovenull kinit -c FILE:/tmp/dovecot.krb5.ccache -k -t > /path/to/keytab imap/host.example.com > > After that: > > $ sudo -u dovenull klist /tmp/dovecot.krb5.ccache > Ticket cache: FILE:/tmp/dovecot.krb5.ccache > Default principal: imap/host.example.com at EXAMPLE.COM > > Valid starting Expires Service principal > 10/11/2016 10:47:47 10/11/2016 22:47:47 krbtgt/EXAMPLE.COM at EXAMPLE.COM > renew until 10/12/2016 10:47:47 > > Also, I can use the cache file with ldapsearch just fine by running > the following: > > sudo -u dovenull KRB5CCNAME=FILE:/tmp/dovecot.krb5.ccache ldapsearch > -Y GSSAPI -ZZ -H ldap://ldap.example.com/ -b dc=example,dc=com > > After the ldapsearch has succeeded the klist output is the following: > > $ sudo -u dovenull klist /tmp/dovecot.krb5.ccache > Ticket cache: FILE:/tmp/dovecot.krb5.ccache > Default principal: imap/host.example.com at EXAMPLE.COM > > Valid starting Expires Service principal > 10/11/2016 10:47:47 10/11/2016 22:47:47 krbtgt/EXAMPLE.COM at EXAMPLE.COM > renew until 10/12/2016 10:47:47 > 10/11/2016 10:49:32 10/11/2016 22:47:47 > ldap/ldap.example.com at EXAMPLE.COM > renew until 10/12/2016 10:47:47 > > > Which is what I expected. 
Isn't this basically what dovecot does (or > should do) or did I miss something? > > Juha Dovecot won't acquire service tickets for you. It requires that you have ticket for imap/imap.example.com at EXAMPLE.COM in the cache or keytab. The default principal is used when *CONNECTING* to a service, but you are *ACCEPTING* a service, so you need a service principal. Aki From juha.koho at trineco.fi Tue Oct 11 10:10:33 2016 From: juha.koho at trineco.fi (Juha Koho) Date: Tue, 11 Oct 2016 12:10:33 +0200 Subject: Problems with GSSAPI and LDAP In-Reply-To: <0c57f0df-dcc3-c19a-b528-220f8adfe4c1@dovecot.fi> References: <43b4213568d7ca31dca5452ba9020f09@trineco.fi> <35e251782e3bca91e4fce84a2804a59c@trineco.fi> <88ce680f-e349-1424-d617-24ad0621fac7@dovecot.fi> <0c57f0df-dcc3-c19a-b528-220f8adfe4c1@dovecot.fi> Message-ID: On 2016-10-11 11:03, Aki Tuomi wrote: > On 11.10.2016 11:56, Juha Koho wrote: >> >> On 2016-10-11 10:00, Aki Tuomi wrote: >>> On 11.10.2016 10:43, Juha Koho wrote: >>>> >>>> On 2016-10-11 09:18, Aki Tuomi wrote: >>>>> On 11.10.2016 10:13, Juha Koho wrote: >>>>>> Hello, >>>>>> >>>>>> I have a Dovecot 2.2.25 set up with OpenLDAP back end. I was >>>>>> trying to >>>>>> set up a GSSAPI Kerberos authentication with the LDAP server but >>>>>> with >>>>>> little success. Seems no matter what I try I end up with the >>>>>> following >>>>>> error message: >>>>>> >>>>>> dovecot: auth: Error: LDAP: binding failed (dn >>>>>> (imap/host.example.com at EXAMPLE.COM)): Local error, SASL(-1): >>>>>> generic >>>>>> failure: GSSAPI Error: Unspecified GSS failure. 
Minor code may >>>>>> provide more information (No Kerberos credentials available >>>>>> (default >>>>>> cache: FILE:/tmp/dovecot.krb5.ccache)) >>>>>> >>>>>> I have set the import_environment in dovecot.conf: >>>>>> >>>>>> import_environment = TZ CORE_OUTOFMEM CORE_ERROR LISTEN_PID >>>>>> LISTEN_FDS >>>>>> KRB5CCNAME=FILE:/tmp/dovecot.krb5.ccache >>>>>> >>>>>> And these in LDAP configuration: >>>>>> >>>>>> dn = imap/host.example.com at EXAMPLE.COM >>>>>> sasl_bind = yes >>>>>> sasl_mech = gssapi >>>>>> sasl_realm = EXAMPLE.COM >>>>>> sasl_authz_id = imap/host.example.com at EXAMPLE.COM >>>>>> >>>>>> I have tried with different values in dn and sasl_authz_id and >>>>>> also >>>>>> leaving them out completely but I always end up with the error >>>>>> message >>>>>> above. Using simple bind without GSSAPI works just fine. >>>>>> >>>>>> The credentials cache file exists and is valid for the principal >>>>>> imap/host.example.com at EXAMPLE.COM. The file is owned by dovecot >>>>>> user >>>>>> so it shouldn't be a permission problem either. >>>>>> >>>>>> GSSAPI in OpenLDAP works but I suppose it is irrelevant here since >>>>>> the >>>>>> connection attempt never reaches the LDAP server due to the error. >>>>>> I >>>>>> also have similar setup for Postfix and it works fine. >>>>>> >>>>>> Any ideas what to try next? >>>>>> >>>>>> Best regards, >>>>>> Juha >>>>> >>>>> Can you provide klist output for the cache file? Also, it should be >>>>> readable by dovenull user, or whatever is configured as >>>>> default_login_user. 
>>>> >>>> >>>> Here's the klist output of the cache file: >>>> -- >>>> Ticket cache: FILE:/tmp/dovecot.krb5.ccache >>>> Default principal: imap/host.example.com at EXAMPLE.COM >>>> >>>> Valid starting Expires Service principal >>>> 10/11/2016 09:26:25 10/11/2016 21:26:25 >>>> krbtgt/EXAMPLE.COM at EXAMPLE.COM >>>> renew until 10/12/2016 09:26:25 >>>> --- >>>> >>>> That I didn't know that also dovenull must have access to the cache >>>> but I tried also setting 0644 permissions to the cache file with no >>>> luck. So permissions shouldn't be the issue... >>>> >>>> Juha >>> >>> Your ccache has no ticket for imap/host.example.com at EXAMPLE.COM >>> >>> please use kinit to acquire one. >> >> >> Now I'm confused. The cache file is created by kinit using the >> command: >> >> sudo -u dovenull kinit -c FILE:/tmp/dovecot.krb5.ccache -k -t >> /path/to/keytab imap/host.example.com >> >> After that: >> >> $ sudo -u dovenull klist /tmp/dovecot.krb5.ccache >> Ticket cache: FILE:/tmp/dovecot.krb5.ccache >> Default principal: imap/host.example.com at EXAMPLE.COM >> >> Valid starting Expires Service principal >> 10/11/2016 10:47:47 10/11/2016 22:47:47 >> krbtgt/EXAMPLE.COM at EXAMPLE.COM >> renew until 10/12/2016 10:47:47 >> >> Also, I can use the cache file with ldapsearch just fine by running >> the following: >> >> sudo -u dovenull KRB5CCNAME=FILE:/tmp/dovecot.krb5.ccache ldapsearch >> -Y GSSAPI -ZZ -H ldap://ldap.example.com/ -b dc=example,dc=com >> >> After the ldapsearch has succeeded the klist output is the following: >> >> $ sudo -u dovenull klist /tmp/dovecot.krb5.ccache >> Ticket cache: FILE:/tmp/dovecot.krb5.ccache >> Default principal: imap/host.example.com at EXAMPLE.COM >> >> Valid starting Expires Service principal >> 10/11/2016 10:47:47 10/11/2016 22:47:47 >> krbtgt/EXAMPLE.COM at EXAMPLE.COM >> renew until 10/12/2016 10:47:47 >> 10/11/2016 10:49:32 10/11/2016 22:47:47 >> ldap/ldap.example.com at EXAMPLE.COM >> renew until 10/12/2016 10:47:47 >> >> >> Which is what 
I expected. Isn't this basically what dovecot does (or >> should do) or did I miss something? >> >> Juha > > Dovecot won't acquire service tickets for you. It requires that you > have > ticket for imap/imap.example.com at EXAMPLE.COM in the cache or keytab. > > The default principal is used when *CONNECTING* to a service, but you > are *ACCEPTING* a service, so you need a service principal. > > Aki Sorry, all this Kerberos stuff is quite new to me and I'm still a bit confused... :) What I still fail to understand is why would I need the service principal in the cache since I'm trying to set dovecot to use GSSAPI when connecting to the LDAP back end for passdb and userdb lookups. My imap users can connect to Dovecot using GSSAPI without problems. This isn't the issue. Dovecot being the client to the LDAP service is the issue. But anyway, after adding the ticket for imap/host.example.com at EXAMPLE.COM in the cache the error still remains: dovecot: auth: Error: LDAP: binding failed (dn imap/host.example.com at EXAMPLE.COM): Local error, SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. 
Minor code may provide more information (No Kerberos credentials
available (default cache: FILE:/tmp/dovecot.krb5.ccache))

$ sudo -u dovenull klist /tmp/dovecot.krb5.ccache
Ticket cache: FILE:/tmp/dovecot.krb5.ccache
Default principal: imap/host.example.com at EXAMPLE.COM

Valid starting       Expires              Service principal
10/11/2016 11:00:50  10/11/2016 23:00:50  krbtgt/EXAMPLE.COM at EXAMPLE.COM
    renew until 10/12/2016 11:00:50
10/11/2016 11:19:09  10/11/2016 23:00:50  imap/host.example.com@
    renew until 10/12/2016 11:00:50
10/11/2016 11:19:09  10/11/2016 23:00:50  imap/host.example.com at EXAMPLE.COM
    renew until 10/12/2016 11:00:50

Juha

From gkontos.mail at gmail.com Tue Oct 11 11:31:57 2016
From: gkontos.mail at gmail.com (George Kontostanos)
Date: Tue, 11 Oct 2016 14:31:57 +0300
Subject: dsync replication quota2 issue
Message-ID: 

Hello list,

We are testing a configuration with 2 mail servers using dsync
replication (dovecot 2.2.25). Everything works fine except the quota2
which is calculated wrong only on one server. Quota2 resides on
different databases since each server needs to update it.

The problem: The local server always updates quota2 twice on each
message it receives. This happens only on one server. Updates run fine
on the second.
SQL Debug:

Query UPDATE quota2 SET bytes=bytes+2108,messages=messages+1 WHERE username = 'user at domain.org'
Query UPDATE quota2 SET bytes=bytes+2108,messages=messages+1 WHERE username = 'user at domain.org'

The result on the server that runs fine:

mysql> select * from quota2;
+----------------------------+---------+----------+
| username                   | bytes   | messages |
+----------------------------+---------+----------+
| user at domain.org            | 2917126 |       17 |

The result on the server that has the problem:

mysql> select * from quota2;
+----------------------------+---------+----------+
| username                   | bytes   | messages |
+----------------------------+---------+----------+
| user at domain.org            | 2920317 |       19 |

dovecot -n is the same on both:

root at mx2:/var/log # dovecot -n
# 2.2.25 (7be1766): /usr/local/etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.15 (97b3da0)
# OS: FreeBSD 10.3-RELEASE amd64 ufs
auth_mechanisms = plain login
auth_verbose = yes
default_client_limit = 2560
default_process_limit = 512
dict {
  acl = mysql:/usr/local/etc/dovecot/dovecot-dict-shares-sql.conf.ext
  quota = mysql:/usr/local/etc/dovecot/dovecot-dict-quota-sql.conf.ext
}
doveadm_password =  # hidden, use -P to show it
doveadm_port = 12345
log_path = /var/log/dovecot.log
mail_debug = yes
mail_home = /usr/local/vhosts/mail/%d/%n
mail_location = maildir:/usr/local/vhosts/mail/%d/%n:LAYOUT=fs
mail_max_userip_connections = 70
mail_plugins = quota acl notify replication
mail_privileged_group = vmail
mail_shared_explicit_inbox = yes
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date index ihave duplicate mime foreverypart extracttext
mbox_write_locks = fcntl
namespace {
  inbox = no
  list = children
  location = maildir:/usr/local/vhosts/mail/%%d/%%n:LAYOUT=fs:INDEX=/usr/local/vhosts/indexes/%d/%n/shared/%%u:INDEXPVT=/usr/local/vhosts/indexes/%d/%n/shared/%%u
  prefix = shared/%%d/%%n/
  separator = /
  subscriptions = no
  type = shared
}
namespace inbox {
  inbox = yes
  list = yes
  location = 
  mailbox Drafts {
    auto = subscribe
    special_use = \Drafts
  }
  mailbox Junk {
    auto = subscribe
    special_use = \Junk
  }
  mailbox Sent {
    auto = subscribe
    special_use = \Sent
  }
  mailbox Trash {
    auto = subscribe
    special_use = \Trash
  }
  prefix = 
  separator = /
  type = private
}
passdb {
  args = /usr/local/etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
plugin {
  acl = vfile
  acl_shared_dict = proxy::acl
  mail_replica = tcp:beta.sophimail.com:12345
  quota = dict:User quota::proxy::quota
  quota_rule2 = Trash:storage=+100M
  sieve = /usr/local/vhosts/mail/%d/%n/.dovecot.sieve
  sieve_before = /usr/local/vhosts/sieve/before.d/
  sieve_dir = /usr/local/vhosts/mail/%d/%n
  sieve_global_dir = /usr/local/vhosts/sieve/%d
  sieve_global_path = /usr/local/vhosts/sieve/%d/default.sieve
}
protocols = imap lmtp sieve sieve
service aggregator {
  fifo_listener replication-notify-fifo {
    mode = 0666
    user = vmail
  }
  unix_listener replication-notify {
    mode = 0666
    user = vmail
  }
}
service auth-worker {
  user = vmail
}
service auth {
  unix_listener /var/spool/postfix/private/auth {
    group = postfix
    mode = 0666
    user = postfix
  }
  unix_listener auth-userdb {
    mode = 0600
    user = vmail
  }
  user = dovecot
}
service config {
  unix_listener config {
    user = vmail
  }
}
service dict {
  unix_listener dict {
    mode = 0600
    user = vmail
  }
}
service doveadm {
  inet_listener {
    port = 12345
  }
  user = vmail
}
service imap-login {
  inet_listener imap {
    port = 143
  }
}
service lmtp {
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    group = postfix
    mode = 0600
    user = postfix
  }
}
service managesieve-login {
  inet_listener sieve {
    port = 4190
  }
  process_min_avail = 0
  service_count = 1
  vsz_limit = 64 M
}
service replicator {
  unix_listener replicator-doveadm {
    mode = 0666
  }
}
ssl_cert = 
Hi all,

I am working on a fresh install with Ubuntu 16.04 LTS and Dovecot
2.2.22-1ubuntu2.1. I am unable to open port 4190, just 2000. I have
this in 20-managesieve.conf:

protocols = $protocols sieve
service managesieve-login {
  inet_listener sieve {
    port = 4190
  }
  inet_listener sieve_deprecated {
    port = 2000
  }
}
service managesieve {
  process_limit = 1024
}
protocol sieve {
  mail_debug=yes
}

This way I get only port 2000; if I comment out the "sieve_deprecated"
section, leaving just the "inet_listener sieve", I get nothing. I could
not find anything useful in the logs or with strace. Any hint?

Thanks in advance, best regards.

-- 
*Marcio Merlone*

From michael at felt.demon.nl Tue Oct 11 20:25:37 2016
From: michael at felt.demon.nl (Michael Felt)
Date: Tue, 11 Oct 2016 22:25:37 +0200
Subject: Compound Literal - xlc and gcc differences can be patched
Message-ID: 

Since I so miserably misspelled "packaging" - here is a new thread
specific to the issue at hand.

I found a workaround. In short, xlc does not accept these nested array
initializers - the bottom line (clearest message) is:

1506-995 (S) An aggregate containing a flexible array member cannot be
used as a member of a structure or as an array element.

At the core, all the other messages follow from this (I shall call it
nesting).

For the test-http-auth.c file I have a patch (attached).

make ran for a while with no issue, then it started running into several
files with this same style of specification - doveadm-dict.c being the
first one. I ran "make -i" and I noticed several files with this - for
xlc - unaccepted syntax.

Please look at the patch - see what it basically does - and let me know
whether you would consider accepting patches in this form (e.g., after
verifying that gcc (I assume) accepts it as well). If your answer is yes,
I shall proceed with the additional files (and learn how your MACROS
work!) and send them to you.
Thank you for your consideration! Michael -------------- next part -------------- --- test-http-auth.c.orig 2016-06-29 20:01:30 +0000 +++ test-http-auth.c.new 2016-10-11 15:54:50 +0000 @@ -9,7 +9,7 @@ struct http_auth_challenge_test { const char *scheme; const char *data; - struct http_auth_param *params; + struct http_auth_param params[]; }; struct http_auth_challenges_test { @@ -17,65 +17,69 @@ struct http_auth_challenge_test *challenges; }; - +/* The schemes */ +static const struct http_auth_challenge_test basic[] = { + { .scheme = "Basic", + .data = NULL, + .params = { + (struct http_auth_param) { "realm", "WallyWorld" }, + (struct http_auth_param) { } + } + },{ + .scheme = NULL + } +}; +static const struct http_auth_challenge_test digest[] = { + { .scheme = "Digest", + .data = NULL, + .params = (struct http_auth_param []) { + { "realm", "testrealm at host.com" }, + { "qop", "auth,auth-int" }, + { "nonce", "dcd98b7102dd2f0e8b11d0f600bfb0c093" }, + { "opaque", "5ccc069c403ebaf9f0171e9517f40e41" }, + { } + } + },{ + .scheme = NULL + } +}; +static const struct http_auth_challenge_test realms[] = { + { .scheme = "Newauth", + .data = NULL, + .params = (struct http_auth_param []) { + { "realm", "apps" }, + { "type", "1" }, + { "title", "Login to \"apps\"" }, + { } + } + },{ + .scheme = "Basic", + .data = NULL, + .params = (struct http_auth_param []) { + { "realm", "simple" }, + { } + } + },{ + .scheme = NULL + } +}; /* Valid auth challenges tests */ static const struct http_auth_challenges_test valid_auth_challenges_tests[] = { { .challenges_in = "Basic realm=\"WallyWorld\"", - .challenges = (struct http_auth_challenge_test []) { - { .scheme = "Basic", - .data = NULL, - .params = (struct http_auth_param []) { - { "realm", "WallyWorld" }, { NULL, NULL } - } - },{ - .scheme = NULL - } - } + .challenges = &basic },{ .challenges_in = "Digest " "realm=\"testrealm at host.com\", " "qop=\"auth,auth-int\", " "nonce=\"dcd98b7102dd2f0e8b11d0f600bfb0c093\", " 
"opaque=\"5ccc069c403ebaf9f0171e9517f40e41\"", - .challenges = (struct http_auth_challenge_test []) { - { .scheme = "Digest", - .data = NULL, - .params = (struct http_auth_param []) { - { "realm", "testrealm at host.com" }, - { "qop", "auth,auth-int" }, - { "nonce", "dcd98b7102dd2f0e8b11d0f600bfb0c093" }, - { "opaque", "5ccc069c403ebaf9f0171e9517f40e41" }, - { NULL, NULL } - } - },{ - .scheme = NULL - } - } + .challenges = &digest },{ .challenges_in = "Newauth realm=\"apps\", type=1, " "title=\"Login to \\\"apps\\\"\", Basic realm=\"simple\"", - .challenges = (struct http_auth_challenge_test []) { - { .scheme = "Newauth", - .data = NULL, - .params = (struct http_auth_param []) { - { "realm", "apps" }, - { "type", "1" }, - { "title", "Login to \"apps\"" }, - { NULL, NULL } - } - },{ - .scheme = "Basic", - .data = NULL, - .params = (struct http_auth_param []) { - { "realm", "simple" }, - { NULL, NULL } - } - },{ - .scheme = NULL - } - } + .challenges = &realms } }; @@ -160,27 +164,18 @@ const char *scheme; const char *data; - struct http_auth_param *params; + struct http_auth_param params[]; }; - -/* Valid auth credentials tests */ static const struct http_auth_credentials_test -valid_auth_credentials_tests[] = { - { - .credentials_in = "Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==", +basic_cred[] = { + { .scheme = "Basic", .data = "QWxhZGRpbjpvcGVuIHNlc2FtZQ==", .params = NULL - },{ - .credentials_in = "Digest username=\"Mufasa\", " - "realm=\"testrealm at host.com\", " - "nonce=\"dcd98b7102dd2f0e8b11d0f600bfb0c093\", " - "uri=\"/dir/index.html\", " - "qop=auth, " - "nc=00000001, " - "cnonce=\"0a4f113b\", " - "response=\"6629fae49393a05397450978507c4ef1\", " - "opaque=\"5ccc069c403ebaf9f0171e9517f40e41\"", + } +}; +static const struct http_auth_credentials_test mufasa[] = { + { .scheme = "Digest", .data = NULL, .params = (struct http_auth_param []) { @@ -198,6 +193,26 @@ } }; +/* Valid auth credentials tests */ +static const struct http_auth_credentials_test 
+valid_auth_credentials_tests[] = { + { + .credentials_in = "Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==", + .params = &basic_cred + },{ + .credentials_in = "Digest username=\"Mufasa\", " + "realm=\"testrealm at host.com\", " + "nonce=\"dcd98b7102dd2f0e8b11d0f600bfb0c093\", " + "uri=\"/dir/index.html\", " + "qop=auth, " + "nc=00000001, " + "cnonce=\"0a4f113b\", " + "response=\"6629fae49393a05397450978507c4ef1\", " + "opaque=\"5ccc069c403ebaf9f0171e9517f40e41\"", + .params = &mufasa + } + }; + static const unsigned int valid_auth_credentials_test_count = N_ELEMENTS(valid_auth_credentials_tests); From aki.tuomi at dovecot.fi Wed Oct 12 05:51:16 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Wed, 12 Oct 2016 08:51:16 +0300 Subject: Compound Literal - xlc and gcc differences can be patched In-Reply-To: References: Message-ID: On 11.10.2016 23:25, Michael Felt wrote: > Since I so miserably misspelled packaging - a new thread specific to > the issue at hand. > > I found a "workaround". In short, xlc does not accept arrays of nested > array - the bottom line (best message) is: 1506-995 (S) An aggregate > containing a flexible array member cannot be used as a member of a > structure or as an array element. > > At the core - all the other messages come because this (I shall call > it nesting). > > For the test-http-auth.c file I have a patch (attached). > > make ran for while with no issue, then it started running into several > files with this same style of specification - doveadm-dict.c being the > first one. I ran "make -i" and I notice several files with this - for > xlc - "unaccepted" syntax. > > Please look at the patch - what it basically is - and let me know > whether you would consider accepting patches in this form (e.g., > verify gcc (I assume) will accept it as well). If your answer is yes, > I shall proceed with the additional files (learn how your MACROS > work!) and send them to you. 
> Again, initially I would just send one,
> e.g., doveadm-dict.c - to be sure your regular compiler also builds
> this alternate specification.
>
> Thank you for your consideration!
>
> Michael
>

Hi! Please make your patch, if possible, via
https://github.com/dovecot/core as a pull request.

Aki Tuomi
Dovecot oy

From cedric.bassaget.ml at gmail.com Wed Oct 12 06:53:42 2016
From: cedric.bassaget.ml at gmail.com (=?UTF-8?Q?C=c3=a9dric_ML?=)
Date: Wed, 12 Oct 2016 08:53:42 +0200
Subject: Quota & prefetch userDB questions
Message-ID: 

Hello,

I'm trying to make quota work on my dovecot server.
I'm using the prefetch userdb (source:
http://wiki2.dovecot.org/UserDatabase/Prefetch) with a database
located on a remote host:

passdb {
  driver = sql
  args = /etc/dovecot/dovecot-sql.conf.ext
}

userdb {
  driver = prefetch
}

userdb {
  driver = sql
  args = /etc/dovecot/dovecot-sql.conf.ext
}

With password_query containing (source:
http://wiki2.dovecot.org/Quota/Configuration):

password_query = SELECT \
  username AS user, \
  password, \
  homedir AS userdb_home, \
  maildir AS userdb_mail, \
  uid AS userdb_uid, \
  gid AS userdb_gid, \
  CONCAT('*:bytes=', quota) AS userdb_quota_rule \
  FROM mailbox \
  WHERE username = '%u'

When I change the quota value in the DB, it is not reflected in the
user's maildirsize file. If I delete the maildirsize file, it is
re-created, but not with the quota value that is set in the DB.

Questions: how is this maildirsize file created? How is it updated?
Is there a way to make maildir++ quota work with dovecot using the
prefetch userDB, or do I have to use dict quotas?

Many thanks for your help.
Regards, C?dric From juha.koho at trineco.fi Wed Oct 12 07:27:47 2016 From: juha.koho at trineco.fi (Juha Koho) Date: Wed, 12 Oct 2016 09:27:47 +0200 Subject: Problems with GSSAPI and LDAP In-Reply-To: References: <43b4213568d7ca31dca5452ba9020f09@trineco.fi> <35e251782e3bca91e4fce84a2804a59c@trineco.fi> <88ce680f-e349-1424-d617-24ad0621fac7@dovecot.fi> <0c57f0df-dcc3-c19a-b528-220f8adfe4c1@dovecot.fi> Message-ID: <1593db0716616dcf7347c3683f7f1c14@trineco.fi> On 2016-10-11 12:10, Juha Koho wrote: > On 2016-10-11 11:03, Aki Tuomi wrote: >> On 11.10.2016 11:56, Juha Koho wrote: >>> >>> On 2016-10-11 10:00, Aki Tuomi wrote: >>>> On 11.10.2016 10:43, Juha Koho wrote: >>>>> >>>>> On 2016-10-11 09:18, Aki Tuomi wrote: >>>>>> On 11.10.2016 10:13, Juha Koho wrote: >>>>>>> Hello, >>>>>>> >>>>>>> I have a Dovecot 2.2.25 set up with OpenLDAP back end. I was >>>>>>> trying to >>>>>>> set up a GSSAPI Kerberos authentication with the LDAP server but >>>>>>> with >>>>>>> little success. Seems no matter what I try I end up with the >>>>>>> following >>>>>>> error message: >>>>>>> >>>>>>> dovecot: auth: Error: LDAP: binding failed (dn >>>>>>> (imap/host.example.com at EXAMPLE.COM)): Local error, SASL(-1): >>>>>>> generic >>>>>>> failure: GSSAPI Error: Unspecified GSS failure. 
Minor code may >>>>>>> provide more information (No Kerberos credentials available >>>>>>> (default >>>>>>> cache: FILE:/tmp/dovecot.krb5.ccache)) >>>>>>> >>>>>>> I have set the import_environment in dovecot.conf: >>>>>>> >>>>>>> import_environment = TZ CORE_OUTOFMEM CORE_ERROR LISTEN_PID >>>>>>> LISTEN_FDS >>>>>>> KRB5CCNAME=FILE:/tmp/dovecot.krb5.ccache >>>>>>> >>>>>>> And these in LDAP configuration: >>>>>>> >>>>>>> dn = imap/host.example.com at EXAMPLE.COM >>>>>>> sasl_bind = yes >>>>>>> sasl_mech = gssapi >>>>>>> sasl_realm = EXAMPLE.COM >>>>>>> sasl_authz_id = imap/host.example.com at EXAMPLE.COM >>>>>>> >>>>>>> I have tried with different values in dn and sasl_authz_id and >>>>>>> also >>>>>>> leaving them out completely but I always end up with the error >>>>>>> message >>>>>>> above. Using simple bind without GSSAPI works just fine. >>>>>>> >>>>>>> The credentials cache file exists and is valid for the principal >>>>>>> imap/host.example.com at EXAMPLE.COM. The file is owned by dovecot >>>>>>> user >>>>>>> so it shouldn't be a permission problem either. >>>>>>> >>>>>>> GSSAPI in OpenLDAP works but I suppose it is irrelevant here >>>>>>> since >>>>>>> the >>>>>>> connection attempt never reaches the LDAP server due to the >>>>>>> error. I >>>>>>> also have similar setup for Postfix and it works fine. >>>>>>> >>>>>>> Any ideas what to try next? >>>>>>> >>>>>>> Best regards, >>>>>>> Juha >>>>>> >>>>>> Can you provide klist output for the cache file? Also, it should >>>>>> be >>>>>> readable by dovenull user, or whatever is configured as >>>>>> default_login_user. 
>>>>> >>>>> >>>>> Here's the klist output of the cache file: >>>>> -- >>>>> Ticket cache: FILE:/tmp/dovecot.krb5.ccache >>>>> Default principal: imap/host.example.com at EXAMPLE.COM >>>>> >>>>> Valid starting Expires Service principal >>>>> 10/11/2016 09:26:25 10/11/2016 21:26:25 >>>>> krbtgt/EXAMPLE.COM at EXAMPLE.COM >>>>> renew until 10/12/2016 09:26:25 >>>>> --- >>>>> >>>>> That I didn't know that also dovenull must have access to the cache >>>>> but I tried also setting 0644 permissions to the cache file with no >>>>> luck. So permissions shouldn't be the issue... >>>>> >>>>> Juha >>>> >>>> Your ccache has no ticket for imap/host.example.com at EXAMPLE.COM >>>> >>>> please use kinit to acquire one. >>> >>> >>> Now I'm confused. The cache file is created by kinit using the >>> command: >>> >>> sudo -u dovenull kinit -c FILE:/tmp/dovecot.krb5.ccache -k -t >>> /path/to/keytab imap/host.example.com >>> >>> After that: >>> >>> $ sudo -u dovenull klist /tmp/dovecot.krb5.ccache >>> Ticket cache: FILE:/tmp/dovecot.krb5.ccache >>> Default principal: imap/host.example.com at EXAMPLE.COM >>> >>> Valid starting Expires Service principal >>> 10/11/2016 10:47:47 10/11/2016 22:47:47 >>> krbtgt/EXAMPLE.COM at EXAMPLE.COM >>> renew until 10/12/2016 10:47:47 >>> >>> Also, I can use the cache file with ldapsearch just fine by running >>> the following: >>> >>> sudo -u dovenull KRB5CCNAME=FILE:/tmp/dovecot.krb5.ccache ldapsearch >>> -Y GSSAPI -ZZ -H ldap://ldap.example.com/ -b dc=example,dc=com >>> >>> After the ldapsearch has succeeded the klist output is the following: >>> >>> $ sudo -u dovenull klist /tmp/dovecot.krb5.ccache >>> Ticket cache: FILE:/tmp/dovecot.krb5.ccache >>> Default principal: imap/host.example.com at EXAMPLE.COM >>> >>> Valid starting Expires Service principal >>> 10/11/2016 10:47:47 10/11/2016 22:47:47 >>> krbtgt/EXAMPLE.COM at EXAMPLE.COM >>> renew until 10/12/2016 10:47:47 >>> 10/11/2016 10:49:32 10/11/2016 22:47:47 >>> ldap/ldap.example.com at 
EXAMPLE.COM >>> renew until 10/12/2016 10:47:47 >>> >>> >>> Which is what I expected. Isn't this basically what dovecot does (or >>> should do) or did I miss something? >>> >>> Juha >> >> Dovecot won't acquire service tickets for you. It requires that you >> have >> ticket for imap/imap.example.com at EXAMPLE.COM in the cache or keytab. >> >> The default principal is used when *CONNECTING* to a service, but you >> are *ACCEPTING* a service, so you need a service principal. >> >> Aki > > Sorry, all this Kerberos stuff is quite new to me and I'm still a bit > confused... :) What I still fail to understand is why would I need the > service principal in the cache since I'm trying to set dovecot to use > GSSAPI when connecting to the LDAP back end for passdb and userdb > lookups. > > My imap users can connect to Dovecot using GSSAPI without problems. > This isn't the issue. Dovecot being the client to the LDAP service is > the issue. > > But anyway, after adding the ticket for > imap/host.example.com at EXAMPLE.COM in the cache the error still > remains: > > dovecot: auth: Error: LDAP: binding failed (dn > imap/host.example.com at EXAMPLE.COM): Local error, SASL(-1): generic > failure: GSSAPI Error: Unspecified GSS failure. 
Minor code may
> provide more information (No Kerberos credentials available (default
> cache: FILE:/tmp/dovecot.krb5.ccache))
>
> $ sudo -u dovenull klist /tmp/dovecot.krb5.ccache
> Ticket cache: FILE:/tmp/dovecot.krb5.ccache
> Default principal: imap/host.example.com at EXAMPLE.COM
>
> Valid starting       Expires              Service principal
> 10/11/2016 11:00:50  10/11/2016 23:00:50  krbtgt/EXAMPLE.COM at EXAMPLE.COM
>     renew until 10/12/2016 11:00:50
> 10/11/2016 11:19:09  10/11/2016 23:00:50  imap/host.example.com@
>     renew until 10/12/2016 11:00:50
> 10/11/2016 11:19:09  10/11/2016 23:00:50  imap/host.example.com at EXAMPLE.COM
>     renew until 10/12/2016 11:00:50
>
> Juha

Just to let anyone interested know: the configuration was correct, but
this turned out to be some sort of library incompatibility or whatever.
I cloned the configuration to a new virtual server, compiled a fresh
copy of Dovecot from source (tried git master and release-2.2.25), and
it worked without problems.

I also noticed that with the freshly compiled version the error message
changed to

dovecot: auth: Error: LDAP: binding failed (dn (none)): Local error,
SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor
code may provide more information (No Kerberos credentials available:
Credentials cache permissions incorrect (filename:
/tmp/dovecot.krb5.ccache))

if the permissions of the cache file were incorrect, instead of the
general error message above. So it seems the issue - whatever it was -
meant that Dovecot (or the underlying libraries) was unable to locate
or open the cache file in the first place.
Juha

From cedric.bassaget.ml at gmail.com Wed Oct 12 07:30:22 2016
From: cedric.bassaget.ml at gmail.com (=?UTF-8?Q?C=c3=a9dric_ML?=)
Date: Wed, 12 Oct 2016 09:30:22 +0200
Subject: Quota & prefetch userDB questions
In-Reply-To: 
References: 
Message-ID: 

I'll answer my own question: adding the quota information to user_query
like this:

user_query = SELECT \
  maildir as mail, \
  homedir as home, \
  uid, \
  gid, \
--->  CONCAT('*:bytes=', quota) AS quota_rule \
  FROM mailbox WHERE username = '%u'

fixed my problem.

Regards,
Cédric

On 12/10/2016 at 08:53, Cédric ML wrote:
> Hello,
>
> I'm trying to make quota work on my dovecot server.
> I'm using prefetch userdb (source :
> http://wiki2.dovecot.org/UserDatabase/Prefetch) with a database
> located on a remote host :
>
> passdb {
>   driver = sql
>   args = /etc/dovecot/dovecot-sql.conf.ext
> }
>
> userdb {
>   driver = prefetch
> }
>
> userdb {
>   driver = sql
>   args = /etc/dovecot/dovecot-sql.conf.ext
> }
>
> With password_query containing (source :
> http://wiki2.dovecot.org/Quota/Configuration) :
> password_query = SELECT \
>   username AS user, \
>   password, \
>   homedir AS userdb_home, \
>   maildir AS userdb_mail, \
>   uid AS userdb_uid, \
>   gid AS userdb_gid, \
>   CONCAT('*:bytes=', quota) AS userdb_quota_rule \
>   FROM mailbox \
>   WHERE username = '%u'
>
> When I change the quota value in DB, it's not reflected to maildirsize
> file of the user.
> If I delete maildirsize file, it's re-created but not with the quota
> value which is set in the DB.
>
> Questions : how is this maildirsize file created ? how is it updated ?
> is there a way to make maildir++ quota work with dovecot using
> prefetch userDB ? Or do I have to use dict quotas ?
>
> Many thanks for your help.
> Regards, > Cédric From stephan at rename-it.nl Wed Oct 12 07:41:38 2016 From: stephan at rename-it.nl (Stephan Bosch) Date: Wed, 12 Oct 2016 09:41:38 +0200 Subject: Managesieve port 4190 not working In-Reply-To: References: Message-ID: <7c3c067d-80ba-6715-fe45-f4bceaeb59e4@rename-it.nl> On 10/11/2016 9:24 PM, Marcio Vogel Merlone dos Santos wrote: > Hi all, > > I am working on a fresh install with Ubuntu 16.04 LTS and Dovecot > 2.2.22-1ubuntu2.1. I am unable to open port 4190, just 2000. I have > this in 20-managesieve.conf: > > protocols = $protocols sieve > service managesieve-login { > inet_listener sieve { > port = 4190 > } > inet_listener sieve_deprecated { > port = 2000 > } > } > service managesieve { > process_limit = 1024 > } > protocol sieve { > mail_debug=yes > } > > This way I get only port 2000; if I comment out the "sieve_deprecated" > section, leaving just the "inet_listener sieve", I get nothing. I could > not find anything useful in the logs or strace. Any hint? > > Thanks in advance, best regards. It should log something about that. Use `doveadm log find' to find out where. Probably something else is using that port. You can use the netstat tool to find out what that is. Regards, Stephan.
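[Editor's note] Stephan's "something else is using that port" check can also be scripted without remembering netstat flags: try to bind the port yourself and see whether the OS refuses. This is a generic sketch, not Dovecot-specific; ports 2000 and 4190 are the ones from this thread.

```python
import socket

def port_in_use(port, host="0.0.0.0"):
    """Return True if another socket is already bound to (host, port).

    Note: for ports below 1024 an unprivileged process gets EACCES,
    which this sketch also reports as "in use".
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
    except OSError:  # typically EADDRINUSE
        return True
    finally:
        s.close()
    return False

if __name__ == "__main__":
    for port in (2000, 4190):
        print(f"port {port}: {'in use' if port_in_use(port) else 'free'}")
```

If 4190 reports "in use" while Dovecot is stopped, some other daemon holds it; `netstat -lntp` (or `ss -lntp`) then names the process.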
From aki.tuomi at dovecot.fi Wed Oct 12 10:02:58 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Wed, 12 Oct 2016 13:02:58 +0300 Subject: Problems with GSSAPI and LDAP In-Reply-To: <1593db0716616dcf7347c3683f7f1c14@trineco.fi> References: <43b4213568d7ca31dca5452ba9020f09@trineco.fi> <35e251782e3bca91e4fce84a2804a59c@trineco.fi> <88ce680f-e349-1424-d617-24ad0621fac7@dovecot.fi> <0c57f0df-dcc3-c19a-b528-220f8adfe4c1@dovecot.fi> <1593db0716616dcf7347c3683f7f1c14@trineco.fi> Message-ID: On 12.10.2016 10:27, Juha Koho wrote: > > On 2016-10-11 12:10, Juha Koho wrote: >> On 2016-10-11 11:03, Aki Tuomi wrote: >>> On 11.10.2016 11:56, Juha Koho wrote: >>>> >>>> On 2016-10-11 10:00, Aki Tuomi wrote: >>>>> On 11.10.2016 10:43, Juha Koho wrote: >>>>>> >>>>>> On 2016-10-11 09:18, Aki Tuomi wrote: >>>>>>> On 11.10.2016 10:13, Juha Koho wrote: >>>>>>>> Hello, >>>>>>>> >>>>>>>> I have a Dovecot 2.2.25 set up with OpenLDAP back end. I was >>>>>>>> trying to >>>>>>>> set up a GSSAPI Kerberos authentication with the LDAP server >>>>>>>> but with >>>>>>>> little success. Seems no matter what I try I end up with the >>>>>>>> following >>>>>>>> error message: >>>>>>>> >>>>>>>> dovecot: auth: Error: LDAP: binding failed (dn >>>>>>>> (imap/host.example.com at EXAMPLE.COM)): Local error, SASL(-1): >>>>>>>> generic >>>>>>>> failure: GSSAPI Error: Unspecified GSS failure. 
Minor code may >>>>>>>> provide more information (No Kerberos credentials available >>>>>>>> (default >>>>>>>> cache: FILE:/tmp/dovecot.krb5.ccache)) >>>>>>>> >>>>>>>> I have set the import_environment in dovecot.conf: >>>>>>>> >>>>>>>> import_environment = TZ CORE_OUTOFMEM CORE_ERROR LISTEN_PID >>>>>>>> LISTEN_FDS >>>>>>>> KRB5CCNAME=FILE:/tmp/dovecot.krb5.ccache >>>>>>>> >>>>>>>> And these in LDAP configuration: >>>>>>>> >>>>>>>> dn = imap/host.example.com at EXAMPLE.COM >>>>>>>> sasl_bind = yes >>>>>>>> sasl_mech = gssapi >>>>>>>> sasl_realm = EXAMPLE.COM >>>>>>>> sasl_authz_id = imap/host.example.com at EXAMPLE.COM >>>>>>>> >>>>>>>> I have tried with different values in dn and sasl_authz_id and >>>>>>>> also >>>>>>>> leaving them out completely but I always end up with the error >>>>>>>> message >>>>>>>> above. Using simple bind without GSSAPI works just fine. >>>>>>>> >>>>>>>> The credentials cache file exists and is valid for the principal >>>>>>>> imap/host.example.com at EXAMPLE.COM. The file is owned by dovecot >>>>>>>> user >>>>>>>> so it shouldn't be a permission problem either. >>>>>>>> >>>>>>>> GSSAPI in OpenLDAP works but I suppose it is irrelevant here since >>>>>>>> the >>>>>>>> connection attempt never reaches the LDAP server due to the >>>>>>>> error. I >>>>>>>> also have similar setup for Postfix and it works fine. >>>>>>>> >>>>>>>> Any ideas what to try next? >>>>>>>> >>>>>>>> Best regards, >>>>>>>> Juha >>>>>>> >>>>>>> Can you provide klist output for the cache file? Also, it should be >>>>>>> readable by dovenull user, or whatever is configured as >>>>>>> default_login_user. 
>>>>>> >>>>>> >>>>>> Here's the klist output of the cache file: >>>>>> -- >>>>>> Ticket cache: FILE:/tmp/dovecot.krb5.ccache >>>>>> Default principal: imap/host.example.com at EXAMPLE.COM >>>>>> >>>>>> Valid starting Expires Service principal >>>>>> 10/11/2016 09:26:25 10/11/2016 21:26:25 >>>>>> krbtgt/EXAMPLE.COM at EXAMPLE.COM >>>>>> renew until 10/12/2016 09:26:25 >>>>>> --- >>>>>> >>>>>> That I didn't know that also dovenull must have access to the cache >>>>>> but I tried also setting 0644 permissions to the cache file with no >>>>>> luck. So permissions shouldn't be the issue... >>>>>> >>>>>> Juha >>>>> >>>>> Your ccache has no ticket for imap/host.example.com at EXAMPLE.COM >>>>> >>>>> please use kinit to acquire one. >>>> >>>> >>>> Now I'm confused. The cache file is created by kinit using the >>>> command: >>>> >>>> sudo -u dovenull kinit -c FILE:/tmp/dovecot.krb5.ccache -k -t >>>> /path/to/keytab imap/host.example.com >>>> >>>> After that: >>>> >>>> $ sudo -u dovenull klist /tmp/dovecot.krb5.ccache >>>> Ticket cache: FILE:/tmp/dovecot.krb5.ccache >>>> Default principal: imap/host.example.com at EXAMPLE.COM >>>> >>>> Valid starting Expires Service principal >>>> 10/11/2016 10:47:47 10/11/2016 22:47:47 >>>> krbtgt/EXAMPLE.COM at EXAMPLE.COM >>>> renew until 10/12/2016 10:47:47 >>>> >>>> Also, I can use the cache file with ldapsearch just fine by running >>>> the following: >>>> >>>> sudo -u dovenull KRB5CCNAME=FILE:/tmp/dovecot.krb5.ccache ldapsearch >>>> -Y GSSAPI -ZZ -H ldap://ldap.example.com/ -b dc=example,dc=com >>>> >>>> After the ldapsearch has succeeded the klist output is the following: >>>> >>>> $ sudo -u dovenull klist /tmp/dovecot.krb5.ccache >>>> Ticket cache: FILE:/tmp/dovecot.krb5.ccache >>>> Default principal: imap/host.example.com at EXAMPLE.COM >>>> >>>> Valid starting Expires Service principal >>>> 10/11/2016 10:47:47 10/11/2016 22:47:47 >>>> krbtgt/EXAMPLE.COM at EXAMPLE.COM >>>> renew until 10/12/2016 10:47:47 >>>> 10/11/2016 10:49:32 
10/11/2016 22:47:47 >>>> ldap/ldap.example.com at EXAMPLE.COM >>>> renew until 10/12/2016 10:47:47 >>>> >>>> >>>> Which is what I expected. Isn't this basically what dovecot does (or >>>> should do) or did I miss something? >>>> >>>> Juha >>> >>> Dovecot won't acquire service tickets for you. It requires that you >>> have >>> ticket for imap/imap.example.com at EXAMPLE.COM in the cache or keytab. >>> >>> The default principal is used when *CONNECTING* to a service, but you >>> are *ACCEPTING* a service, so you need a service principal. >>> >>> Aki >> >> Sorry, all this Kerberos stuff is quite new to me and I'm still a bit >> confused... :) What I still fail to understand is why would I need the >> service principal in the cache since I'm trying to set dovecot to use >> GSSAPI when connecting to the LDAP back end for passdb and userdb >> lookups. >> >> My imap users can connect to Dovecot using GSSAPI without problems. >> This isn't the issue. Dovecot being the client to the LDAP service is >> the issue. >> >> But anyway, after adding the ticket for >> imap/host.example.com at EXAMPLE.COM in the cache the error still >> remains: >> >> dovecot: auth: Error: LDAP: binding failed (dn >> imap/host.example.com at EXAMPLE.COM): Local error, SASL(-1): generic >> failure: GSSAPI Error: Unspecified GSS failure. 
Minor code may >> provide more information (No Kerberos credentials available (default >> cache: FILE:/tmp/dovecot.krb5.ccache)) >> >> $ sudo -u dovenull klist /tmp/dovecot.krb5.ccache >> Ticket cache: FILE:/tmp/dovecot.krb5.ccache >> Default principal: imap/host.example.com at EXAMPLE.COM >> >> Valid starting Expires Service principal >> 10/11/2016 11:00:50 10/11/2016 23:00:50 krbtgt/EXAMPLE.COM at EXAMPLE.COM >> renew until 10/12/2016 11:00:50 >> 10/11/2016 11:19:09 10/11/2016 23:00:50 imap/host.example.com@ >> renew until 10/12/2016 11:00:50 >> 10/11/2016 11:19:09 10/11/2016 23:00:50 >> imap/host.example.com at EXAMPLE.COM >> renew until 10/12/2016 11:00:50 >> >> Juha > > Just to let anyone interested know the configuration was correct but > this turned out to be some sort of library incompatibility or whatever. > > I cloned the configuration to a new virtual server and compiled a > fresh copy of Dovecot from source (tried git master and > release-2.2.25) and it worked without problems. > > I also noticed that with the freshly compiled version the error > message changed to > > dovecot: auth: Error: LDAP: binding failed (dn (none)): Local error, > SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. > Minor code may provide more information (No Kerberos credentials > available: Credentials cache permissions incorrect (filename: > /tmp/dovecot.krb5.ccache)) > > if the permissions of the cache file were incorrect instead of this > general error message above. So seems like the issue - whatever it was > - caused that Dovecot (or the underlying libraries) were unable to > locate or open the cache file in the first place. > > Juha I think it requires that the file is readable only by the user, so setting if 0644 was probably a mistake. 
Aki From juha.koho at trineco.fi Wed Oct 12 11:52:07 2016 From: juha.koho at trineco.fi (Juha Koho) Date: Wed, 12 Oct 2016 13:52:07 +0200 Subject: Problems with GSSAPI and LDAP In-Reply-To: References: <43b4213568d7ca31dca5452ba9020f09@trineco.fi> <35e251782e3bca91e4fce84a2804a59c@trineco.fi> <88ce680f-e349-1424-d617-24ad0621fac7@dovecot.fi> <0c57f0df-dcc3-c19a-b528-220f8adfe4c1@dovecot.fi> <1593db0716616dcf7347c3683f7f1c14@trineco.fi> Message-ID: On 2016-10-12 12:02, Aki Tuomi wrote: > On 12.10.2016 10:27, Juha Koho wrote: >> >> On 2016-10-11 12:10, Juha Koho wrote: >>> On 2016-10-11 11:03, Aki Tuomi wrote: >>>> On 11.10.2016 11:56, Juha Koho wrote: >>>>> >>>>> On 2016-10-11 10:00, Aki Tuomi wrote: >>>>>> On 11.10.2016 10:43, Juha Koho wrote: >>>>>>> >>>>>>> On 2016-10-11 09:18, Aki Tuomi wrote: >>>>>>>> On 11.10.2016 10:13, Juha Koho wrote: >>>>>>>>> Hello, >>>>>>>>> >>>>>>>>> I have a Dovecot 2.2.25 set up with OpenLDAP back end. I was >>>>>>>>> trying to >>>>>>>>> set up a GSSAPI Kerberos authentication with the LDAP server >>>>>>>>> but with >>>>>>>>> little success. Seems no matter what I try I end up with the >>>>>>>>> following >>>>>>>>> error message: >>>>>>>>> >>>>>>>>> dovecot: auth: Error: LDAP: binding failed (dn >>>>>>>>> (imap/host.example.com at EXAMPLE.COM)): Local error, SASL(-1): >>>>>>>>> generic >>>>>>>>> failure: GSSAPI Error: Unspecified GSS failure. 
Minor code may >>>>>>>>> provide more information (No Kerberos credentials available >>>>>>>>> (default >>>>>>>>> cache: FILE:/tmp/dovecot.krb5.ccache)) >>>>>>>>> >>>>>>>>> I have set the import_environment in dovecot.conf: >>>>>>>>> >>>>>>>>> import_environment = TZ CORE_OUTOFMEM CORE_ERROR LISTEN_PID >>>>>>>>> LISTEN_FDS >>>>>>>>> KRB5CCNAME=FILE:/tmp/dovecot.krb5.ccache >>>>>>>>> >>>>>>>>> And these in LDAP configuration: >>>>>>>>> >>>>>>>>> dn = imap/host.example.com at EXAMPLE.COM >>>>>>>>> sasl_bind = yes >>>>>>>>> sasl_mech = gssapi >>>>>>>>> sasl_realm = EXAMPLE.COM >>>>>>>>> sasl_authz_id = imap/host.example.com at EXAMPLE.COM >>>>>>>>> >>>>>>>>> I have tried with different values in dn and sasl_authz_id and >>>>>>>>> also >>>>>>>>> leaving them out completely but I always end up with the error >>>>>>>>> message >>>>>>>>> above. Using simple bind without GSSAPI works just fine. >>>>>>>>> >>>>>>>>> The credentials cache file exists and is valid for the >>>>>>>>> principal >>>>>>>>> imap/host.example.com at EXAMPLE.COM. The file is owned by dovecot >>>>>>>>> user >>>>>>>>> so it shouldn't be a permission problem either. >>>>>>>>> >>>>>>>>> GSSAPI in OpenLDAP works but I suppose it is irrelevant here >>>>>>>>> since >>>>>>>>> the >>>>>>>>> connection attempt never reaches the LDAP server due to the >>>>>>>>> error. I >>>>>>>>> also have similar setup for Postfix and it works fine. >>>>>>>>> >>>>>>>>> Any ideas what to try next? >>>>>>>>> >>>>>>>>> Best regards, >>>>>>>>> Juha >>>>>>>> >>>>>>>> Can you provide klist output for the cache file? Also, it should >>>>>>>> be >>>>>>>> readable by dovenull user, or whatever is configured as >>>>>>>> default_login_user. 
>>>>>>> >>>>>>> >>>>>>> Here's the klist output of the cache file: >>>>>>> -- >>>>>>> Ticket cache: FILE:/tmp/dovecot.krb5.ccache >>>>>>> Default principal: imap/host.example.com at EXAMPLE.COM >>>>>>> >>>>>>> Valid starting Expires Service principal >>>>>>> 10/11/2016 09:26:25 10/11/2016 21:26:25 >>>>>>> krbtgt/EXAMPLE.COM at EXAMPLE.COM >>>>>>> renew until 10/12/2016 09:26:25 >>>>>>> --- >>>>>>> >>>>>>> That I didn't know that also dovenull must have access to the >>>>>>> cache >>>>>>> but I tried also setting 0644 permissions to the cache file with >>>>>>> no >>>>>>> luck. So permissions shouldn't be the issue... >>>>>>> >>>>>>> Juha >>>>>> >>>>>> Your ccache has no ticket for imap/host.example.com at EXAMPLE.COM >>>>>> >>>>>> please use kinit to acquire one. >>>>> >>>>> >>>>> Now I'm confused. The cache file is created by kinit using the >>>>> command: >>>>> >>>>> sudo -u dovenull kinit -c FILE:/tmp/dovecot.krb5.ccache -k -t >>>>> /path/to/keytab imap/host.example.com >>>>> >>>>> After that: >>>>> >>>>> $ sudo -u dovenull klist /tmp/dovecot.krb5.ccache >>>>> Ticket cache: FILE:/tmp/dovecot.krb5.ccache >>>>> Default principal: imap/host.example.com at EXAMPLE.COM >>>>> >>>>> Valid starting Expires Service principal >>>>> 10/11/2016 10:47:47 10/11/2016 22:47:47 >>>>> krbtgt/EXAMPLE.COM at EXAMPLE.COM >>>>> renew until 10/12/2016 10:47:47 >>>>> >>>>> Also, I can use the cache file with ldapsearch just fine by running >>>>> the following: >>>>> >>>>> sudo -u dovenull KRB5CCNAME=FILE:/tmp/dovecot.krb5.ccache >>>>> ldapsearch >>>>> -Y GSSAPI -ZZ -H ldap://ldap.example.com/ -b dc=example,dc=com >>>>> >>>>> After the ldapsearch has succeeded the klist output is the >>>>> following: >>>>> >>>>> $ sudo -u dovenull klist /tmp/dovecot.krb5.ccache >>>>> Ticket cache: FILE:/tmp/dovecot.krb5.ccache >>>>> Default principal: imap/host.example.com at EXAMPLE.COM >>>>> >>>>> Valid starting Expires Service principal >>>>> 10/11/2016 10:47:47 10/11/2016 22:47:47 >>>>> 
krbtgt/EXAMPLE.COM at EXAMPLE.COM >>>>> renew until 10/12/2016 10:47:47 >>>>> 10/11/2016 10:49:32 10/11/2016 22:47:47 >>>>> ldap/ldap.example.com at EXAMPLE.COM >>>>> renew until 10/12/2016 10:47:47 >>>>> >>>>> >>>>> Which is what I expected. Isn't this basically what dovecot does >>>>> (or >>>>> should do) or did I miss something? >>>>> >>>>> Juha >>>> >>>> Dovecot won't acquire service tickets for you. It requires that you >>>> have >>>> ticket for imap/imap.example.com at EXAMPLE.COM in the cache or keytab. >>>> >>>> The default principal is used when *CONNECTING* to a service, but >>>> you >>>> are *ACCEPTING* a service, so you need a service principal. >>>> >>>> Aki >>> >>> Sorry, all this Kerberos stuff is quite new to me and I'm still a bit >>> confused... :) What I still fail to understand is why would I need >>> the >>> service principal in the cache since I'm trying to set dovecot to use >>> GSSAPI when connecting to the LDAP back end for passdb and userdb >>> lookups. >>> >>> My imap users can connect to Dovecot using GSSAPI without problems. >>> This isn't the issue. Dovecot being the client to the LDAP service is >>> the issue. >>> >>> But anyway, after adding the ticket for >>> imap/host.example.com at EXAMPLE.COM in the cache the error still >>> remains: >>> >>> dovecot: auth: Error: LDAP: binding failed (dn >>> imap/host.example.com at EXAMPLE.COM): Local error, SASL(-1): generic >>> failure: GSSAPI Error: Unspecified GSS failure. 
Minor code may >>> provide more information (No Kerberos credentials available (default >>> cache: FILE:/tmp/dovecot.krb5.ccache)) >>> >>> $ sudo -u dovenull klist /tmp/dovecot.krb5.ccache >>> Ticket cache: FILE:/tmp/dovecot.krb5.ccache >>> Default principal: imap/host.example.com at EXAMPLE.COM >>> >>> Valid starting Expires Service principal >>> 10/11/2016 11:00:50 10/11/2016 23:00:50 >>> krbtgt/EXAMPLE.COM at EXAMPLE.COM >>> renew until 10/12/2016 11:00:50 >>> 10/11/2016 11:19:09 10/11/2016 23:00:50 imap/host.example.com@ >>> renew until 10/12/2016 11:00:50 >>> 10/11/2016 11:19:09 10/11/2016 23:00:50 >>> imap/host.example.com at EXAMPLE.COM >>> renew until 10/12/2016 11:00:50 >>> >>> Juha >> >> Just to let anyone interested know the configuration was correct but >> this turned out to be some sort of library incompatibility or >> whatever. >> >> I cloned the configuration to a new virtual server and compiled a >> fresh copy of Dovecot from source (tried git master and >> release-2.2.25) and it worked without problems. >> >> I also noticed that with the freshly compiled version the error >> message changed to >> >> dovecot: auth: Error: LDAP: binding failed (dn (none)): Local error, >> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. >> Minor code may provide more information (No Kerberos credentials >> available: Credentials cache permissions incorrect (filename: >> /tmp/dovecot.krb5.ccache)) >> >> if the permissions of the cache file were incorrect instead of this >> general error message above. So seems like the issue - whatever it was >> - caused that Dovecot (or the underlying libraries) were unable to >> locate or open the cache file in the first place. >> >> Juha > > I think it requires that the file is readable only by the user, so > setting if 0644 was probably a mistake. > > Aki No, this isn't the case. In the new test environment where things work it doesn't care about the cache file permissions as long as it can read the file. 
I just noticed the error when I mistakenly had created the cache file as root and therefore it wasn't readable by Dovecot. Juha From matthew.broadhead at nbmlaw.co.uk Wed Oct 12 11:57:56 2016 From: matthew.broadhead at nbmlaw.co.uk (Matthew Broadhead) Date: Wed, 12 Oct 2016 13:57:56 +0200 Subject: sieve sending vacation message from vmail@ns1.domain.tld Message-ID: <71b362e8-3a69-076d-6376-2f3bbd39d0eb@nbmlaw.co.uk> I have a server running centos-release-7-2.1511.el7.centos.2.10.x86_64 with Dovecot version 2.2.10. I am also using Roundcube for webmail. When a vacation filter (reply with message) is created in Roundcube, it adds a rule to managesieve.sieve in the user's mailbox. Everything works fine except the reply comes from vmail at ns1.domain.tld instead of user at domain.tld. ns1.domain.tld is the fully qualified name of the server. It used to work fine on my old CentOS 6 server, so I am not sure what has changed. Can anyone point me in the direction of where I can configure this behaviour? From michael at felt.demon.nl Wed Oct 12 14:41:58 2016 From: michael at felt.demon.nl (Michael Felt) Date: Wed, 12 Oct 2016 16:41:58 +0200 Subject: Compound Literal - xlc and gcc differences can be patched In-Reply-To: References: Message-ID: <179abd19-b5bf-2b00-7ae1-038310ddba91@felt.demon.nl> On 12/10/2016 07:51, Aki Tuomi wrote: > Please make your patch, if possible, via https://github.com/dovecot/core > as a pull request. I am not really git "schooled", but I shall look into that. Many Thanks for being open to a "non-gcc" compiler! Michael From bryan at shout.net Wed Oct 12 15:59:33 2016 From: bryan at shout.net (Bryan Holloway) Date: Wed, 12 Oct 2016 10:59:33 -0500 Subject: Outlook 2010 woes Message-ID: <2959c34e-fab7-cf17-3f43-82e93b8e525c@shout.net> Hello, everyone. We have recently begun migrating folks from an older server to a newer one, and things have been going quite well except for -- you guessed it -- Outlook 2010 users.
Specifically it appears to be one customer in particular, and this particular customer has many nested mailboxes and lots of e-mail in general. Not sure if this is a factor. Old server: * Ubuntu 10.04.4 LTS * Dovecot 2.1.13 * Maildir++ * Local auth via passwd/shadow files New server: * Debian GNU/Linux 8.6 * Dovecot 2.2.13 * Maildir++ * Quotas enabled * LDAP Basically what's happening is that users are seeing large delays when navigating between different IMAP folders. So, for example, user "X" is sitting idle in their INBOX. If they then click on another folder there's a good 6-7 second delay before you can view its contents. If you immediately then navigate to other folders, you get a rapid response. But if the client then goes idle again for 10+ seconds, you will get this delay again. Some are reporting the OS saying "Outlook is not responding." (Everyone is running Windows 7.) Disclaimer: Yes, I know Outlook 2010 is a giant steaming pile of ____. However, everything worked dandy with Dovecot 2.1 and of course my customer is harping on this fact. Things I have tried/read about in the Dovecot list: * Check headers only -- doesn't seem to help. * Had a user completely remove their IMAP profile and re-add: no change. * Had a user set up an entirely new account on a new computer: same symptoms. * Someone mentioned to enable the "delay-newmail" workaround in 20-imap.conf, but this wasn't enabled in our 2.1 install, so that seems like it would not help our case. I was looking through the release notes of later Dovecots, and I noticed that in 2.2.14 there's mention of some issues with Outlook that were fixed, but it wasn't particularly specific about which versions. Should I be looking into that? And if so, is there a separate repository for newer dovecot? Currently the standard Debian 8.6 only has 2.2.13. Any help, suggestions, or pointers would be greatly appreciated. Thank you! 
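[Editor's note] The "delay-newmail" workaround Bryan mentions is one of the `imap_client_workarounds` values. For reference, a sketch of enabling it in 20-imap.conf (these are the stock 2.2.x setting names; whether it helps with this particular folder-switch delay is exactly what is in question in this thread):

```
# conf.d/20-imap.conf -- client workaround sketch (Dovecot 2.2.x)
protocol imap {
  # delay-newmail: send EXISTS/RECENT new-mail notifications only in
  # replies to NOOP and CHECK; works around Outlook Express / OSX Mail
  # bugs where unsolicited updates confuse the client
  imap_client_workarounds = delay-newmail
}
```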
- bryan From flatworm at users.sourceforge.net Wed Oct 12 16:51:39 2016 From: flatworm at users.sourceforge.net (Konstantin Khomoutov) Date: Wed, 12 Oct 2016 19:51:39 +0300 Subject: Outlook 2010 woes In-Reply-To: <2959c34e-fab7-cf17-3f43-82e93b8e525c@shout.net> References: <2959c34e-fab7-cf17-3f43-82e93b8e525c@shout.net> Message-ID: <20161012195139.3775264cedd8eae56f86c924@domain007.com> On Wed, 12 Oct 2016 10:59:33 -0500 Bryan Holloway wrote: [...] > New server: > * Debian GNU/Linux 8.6 > * Dovecot 2.2.13 > * Maildir++ > * Quotas enabled > * LDAP > > Basically what's happening is that users are seeing large delays when > navigating between different IMAP folders. So, for example, user "X" > is sitting idle in their INBOX. If they then click on another folder > there's a good 6-7 second delay before you can view its contents. If > you immediately then navigate to other folders, you get a rapid > response. But if the client then goes idle again for 10+ seconds, you > will get this delay again. Some are reporting the OS saying "Outlook > is not responding." (Everyone is running Windows 7.) [...] > Any help, suggestions, or pointers would be greatly appreciated. Do you see imap(username) Disconnected for inactivity in=X out=Y in the logs? From your description, it appears as if Outlook gets disconnected at some point, and that's why fast changing of the folders works OK (the connection is live) and doing this after a pause forces a reconnect with the following relogin. Just a shot in the dark but still...
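[Editor's note] One way to quantify Konstantin's hypothesis is to count the inactivity disconnects per IMAP login in the mail log. A hedged sketch using stand-in sample data (the real log path varies by distribution and is reported by `doveadm log find`; the exact syslog prefix may differ from the sample):

```shell
# Stand-in sample; point the grep at your real mail log instead
cat > /tmp/sample-mail.log <<'EOF'
Oct 12 11:02:03 mail dovecot: imap(alice): Disconnected for inactivity in=52 out=812
Oct 12 11:05:10 mail dovecot: imap(bob): Disconnected for inactivity in=10 out=120
Oct 12 11:07:55 mail dovecot: imap(alice): Disconnected for inactivity in=33 out=401
EOF

# Count "Disconnected for inactivity" events per user, most affected first
grep -o 'imap([^)]*): Disconnected for inactivity' /tmp/sample-mail.log \
  | sort | uniq -c | sort -rn
```

A user whose count spikes compared to others on the same server is a good candidate for packet capture or `mail_debug` logging.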
From bryan at shout.net Wed Oct 12 17:06:19 2016 From: bryan at shout.net (Bryan Holloway) Date: Wed, 12 Oct 2016 12:06:19 -0500 Subject: Outlook 2010 woes In-Reply-To: <20161012195139.3775264cedd8eae56f86c924@domain007.com> References: <2959c34e-fab7-cf17-3f43-82e93b8e525c@shout.net> <20161012195139.3775264cedd8eae56f86c924@domain007.com> Message-ID: <4fb193d6-40fa-a35c-2f61-fb9276aa8b29@shout.net> On 10/12/16 11:51 AM, Konstantin Khomoutov wrote: > On Wed, 12 Oct 2016 10:59:33 -0500 > Bryan Holloway wrote: > > [...] >> New server: >> * Debian GNU/Linux 8.6 >> * Dovecot 2.2.13 >> * Maildir++ >> * Quotas enabled >> * LDAP >> >> Basically what's happening is that users are seeing large delays when >> navigating between different IMAP folders. So, for example, user "X" >> is sitting idle in their INBOX. If they then click on another folder >> there's a good 6-7 second delay before you can view its contents. If >> you immediately then navigate to other folders, you get a rapid >> response. But if the client then goes idle again for 10+ seconds, you >> will get this delay again. Some are reporting the OS saying "Outlook >> is not responding." (Everyone is running Windows 7.) > [...] >> Any help, suggestions, or pointers would be greatly appreciated. > > Do you see > > imap(username) Disconnected for inactivity in=X out=Y > > in the logs? > > From your description, it appears as if outlook gets disconnected at > some point, and that's why fast changing of the folders works OK (the > connection is live) and doing this after a pause forces a reconnect > with the following relogin. > > Just a shot in the dark but still... I do see quite a few of those, but I tend to see those for many clients across the board. That does make a lot of sense though. I'm looking at the logs right now, and I see what amounts to a login, followed by a disconnect 10-15 seconds later. I thought that a standard IMAP connection stays open for at least 30 minutes based on the RFC. (?) 
From flatworm at users.sourceforge.net Wed Oct 12 17:16:21 2016 From: flatworm at users.sourceforge.net (Konstantin Khomoutov) Date: Wed, 12 Oct 2016 20:16:21 +0300 Subject: Outlook 2010 woes In-Reply-To: <4fb193d6-40fa-a35c-2f61-fb9276aa8b29@shout.net> References: <2959c34e-fab7-cf17-3f43-82e93b8e525c@shout.net> <20161012195139.3775264cedd8eae56f86c924@domain007.com> <4fb193d6-40fa-a35c-2f61-fb9276aa8b29@shout.net> Message-ID: <20161012201621.76047898049c295ecc9aa0b2@domain007.com> On Wed, 12 Oct 2016 12:06:19 -0500 Bryan Holloway wrote: [...] > >> Basically what's happening is that users are seeing large delays > >> when navigating between different IMAP folders. So, for example, > >> user "X" is sitting idle in their INBOX. If they then click on > >> another folder there's a good 6-7 second delay before you can view > >> its contents. If you immediately then navigate to other folders, > >> you get a rapid response. But if the client then goes idle again > >> for 10+ seconds, you will get this delay again. Some are reporting > >> the OS saying "Outlook is not responding." (Everyone is running > >> Windows 7.) > > [...] > >> Any help, suggestions, or pointers would be greatly appreciated. > > > > Do you see > > > > imap(username) Disconnected for inactivity in=X out=Y > > > > in the logs? > > > > From your description, it appears as if outlook gets disconnected at > > some point, and that's why fast changing of the folders works OK > > (the connection is live) and doing this after a pause forces a > > reconnect with the following relogin. > > > > Just a shot in the dark but still... > > I do see quite a few of those, but I tend to see those for many > clients across the board. > > That does make a lot of sense though. I'm looking at the logs right > now, and I see what amounts to a login, followed by a disconnect > 10-15 seconds later. > > I thought that a standard IMAP connection stays open for at least 30 > minutes based on the RFC. (?) 
Unfortunately, I don't have much familiarity with this topic but please try googling for the exact phrases dovecot + "Disconnected for inactivity" (literally double quoted) -- and you'll discover a hefty amount of past discussions touching this topic. They may give you further clues as to what things to try next. From kevin at my.walr.us Wed Oct 12 17:51:08 2016 From: kevin at my.walr.us (KT Walrus) Date: Wed, 12 Oct 2016 13:51:08 -0400 Subject: Detect IMAP server domain name in Dovecot IMAP proxy Message-ID: <794E3F41-DE6D-4B3A-8BA6-FDBFC9A3C2FD@my.walr.us> I'm in the process of setting up a Dovecot IMAP proxy to handle a number of IMAP server domains. At the current time, I have my users divided into 70 different groups of users (call them G1 to G70). I want each group to configure their email client to access their mailboxes at a domain name based on the group they belong to (e.g., g1.example.com, g2.example.com, ..., g70.example.com). I will only support TLS encrypted IMAP connections to the Dovecot IMAP proxy ("ssl=yes" in the inet_listener). My SSL cert has alternate names for all 70 group domain names. I want the group domain to only support users that have been assigned to the group the domain name represents. That is, a user assigned to G23 would only be allowed to configure their email client for the IMAP server named g23.example.com. My solution during testing has been to have the Dovecot IMAP proxy listen on different ports: 9930-9999. I plan to purchase 70 IPs, one for each group, and redirect traffic on port 993 to the appropriate Dovecot IMAP proxy port based on the IP I assign to the group domain name in the site's DNS. The SQL for handling the IMAP login uses the port number of the inet_listener. I think this could work in production, but it will cost me extra to rent the 70 IPs and might be a pain to manage. Eventually, I would like to have over 5,000 groups so requiring an IP per group is less than ideal.
I also think having Dovecot IMAP proxy have 5,000 inet_listeners might not work so well or might create too many threads/processes/ports to fit on a small proxy server. I would rather have 1 public IP for each Dovecot IMAP proxy and somehow communicate to the userdb which group domain name was configured in the email client so only the users assigned to this group can log in with that username. Anyone have any ideas? For HTTP traffic, it is easy to query the host in the HTTP request, but I don't think IMAP traffic has such host info in it. Does the Dovecot IMAP proxy receive the hostname from the email client when exchanging SSL certs (like SNI for HTTPS)? Or, maybe I should have the group domain in the username used to log in with (e.g., username+g23 at example.com or username at g23.example.com). I don't like to make the user configure their email client to log in with a name that is different than their mailbox address. It is simpler to just have them configure their email client with username at example.com for both authorization and for the from/sender headers in the messages. Anyway, any ideas of how to set this up in production? From admin at vfemail.net Wed Oct 12 18:07:19 2016 From: admin at vfemail.net (Rick Romero) Date: Wed, 12 Oct 2016 13:07:19 -0500 Subject: Detect IMAP server domain name in Dovecot IMAP proxy In-Reply-To: <794E3F41-DE6D-4B3A-8BA6-FDBFC9A3C2FD@my.walr.us> Message-ID: <20161012130719.Horde.7LlxHlsHgT5jhRsUeuaWQg1@www.vfemail.net> Quoting KT Walrus : > I'm in the process of setting up a Dovecot IMAP proxy to handle a number > of IMAP server domains. At the current time, I have my users divided > into 70 different groups of users (call them G1 to G70). I want each > group to configure their email client to access their mailboxes at a > domain name based on the group they belong to (e.g., g1.example.com > , g2.example.com , ..., > g70.example.com ). I will only support TLS > encrypted IMAP connections to the Dovecot IMAP proxy ("ssl=yes"
in the > inet_listener). My SSL cert has alternate names for all 70 group domain > names. > > I want the group domain to only support users that have been assigned to > the group the domain name represents. That is, a user assigned to G23 > would only be allowed to configure their email client for the IMAP > server named g23.example.com . > > My solution during testing has been to have the Dovecot IMAP proxy to > listen on different ports: 9930-9999. I plan to purchase 70 IPs, one for > each group, and redirect traffic on port 993 to the appropriate Dovecot > IMAP proxy port based on the IP I assign to the group domain name in the > site?s DNS. The SQL for handling the IMAP login uses the port number of > the inet_listener > > I think this could work in production, but it will cost me extra to rent > the 70 IPs and might be a pain to manage. Eventually, I would like to > have over 5,000 groups so requiring an IP per group is less than ideal. > I also think having Dovecot IMAP proxy have 5,000 inet_listeners might > not work so well or might create too many threads/processes/ports to fit > on a small proxy server. > > I would rather have 1 public IP for each Dovecot IMAP proxy and somehow > communicate to the userdb which group domain name was configured in the > email client so only the users assigned to this group can login with > that username. > > Anyone have any ideas? > ? Do you have a SQL userdb? Create a table or a 'host' field for the user. user_query = SELECT CONCAT(pw_name, '@', pw_domain) AS user, "89" as uid, "89" as gid, host, 'Y' AS proxy_maybe, pw_dir as home, pw_dir as mail_home, CONCAT('maildir:', pw_dir , '/Maildir/' ) as mail_location FROM vpopmail WHERE pw_name = '%n' AND pw_domain = '%d' (mine is based on qmail/vpopmail) Then populate 'host' for each user if you don't have any other way of programatically determining the host.. 
Rick

From kevin at my.walr.us Wed Oct 12 18:33:39 2016
From: kevin at my.walr.us (KT Walrus)
Date: Wed, 12 Oct 2016 14:33:39 -0400
Subject: Detect IMAP server domain name in Dovecot IMAP proxy
In-Reply-To: <20161012130719.Horde.7LlxHlsHgT5jhRsUeuaWQg1@www.vfemail.net>
References: <20161012130719.Horde.7LlxHlsHgT5jhRsUeuaWQg1@www.vfemail.net>
Message-ID:

> On Oct 12, 2016, at 2:07 PM, Rick Romero wrote:
>
> Quoting KT Walrus :
>
>> I'm in the process of setting up a Dovecot IMAP proxy to handle a number
>> of IMAP server domains. At the current time, I have my users divided
>> into 70 different groups of users (call them G1 to G70). I want each
>> group to configure their email client to access their mailboxes at a
>> domain name based on the group they belong to (e.g., g1.example.com,
>> g2.example.com, ..., g70.example.com). I will only support TLS
>> encrypted IMAP connections to the Dovecot IMAP proxy ("ssl=yes" in the
>> inet_listener). My SSL cert has alternate names for all 70 group domain
>> names.
>>
>> I want the group domain to only support users that have been assigned to
>> the group the domain name represents. That is, a user assigned to G23
>> would only be allowed to configure their email client for the IMAP
>> server named g23.example.com.
>>
>> My solution during testing has been to have the Dovecot IMAP proxy
>> listen on different ports: 9930-9999. I plan to purchase 70 IPs, one for
>> each group, and redirect traffic on port 993 to the appropriate Dovecot
>> IMAP proxy port based on the IP I assign to the group domain name in the
>> site's DNS. The SQL for handling the IMAP login uses the port number of
>> the inet_listener.
>>
>> I think this could work in production, but it will cost me extra to rent
>> the 70 IPs and might be a pain to manage. Eventually, I would like to
>> have over 5,000 groups so requiring an IP per group is less than ideal.
>> I also think having Dovecot IMAP proxy have 5,000 inet_listeners might
>> not work so well or might create too many threads/processes/ports to fit
>> on a small proxy server.
>>
>> I would rather have 1 public IP for each Dovecot IMAP proxy and somehow
>> communicate to the userdb which group domain name was configured in the
>> email client so only the users assigned to this group can log in with
>> that username.
>>
>> Anyone have any ideas?
>
> Do you have a SQL userdb?
> Create a table or a 'host' field for the user.
>
> user_query = SELECT CONCAT(pw_name, '@', pw_domain) AS user, "89" as uid,
> "89" as gid, host, 'Y' AS proxy_maybe, pw_dir as home, pw_dir as mail_home,
> CONCAT('maildir:', pw_dir, '/Maildir/') as mail_location FROM vpopmail
> WHERE pw_name = '%n' AND pw_domain = '%d'
>
> (mine is based on qmail/vpopmail)
>
> Then populate 'host' for each user if you don't have any other way of
> programmatically determining the host.

This doesn't solve my problem. Indeed, I am doing this already:

password_query = SELECT password, 'Y' as proxy,
CONCAT_WS('@',username,domain) AS destuser, pms AS host, 'secretmaster'
AS master, 'secretpass' AS pass FROM users WHERE username='%n' and
domain='%d' and (group_id=%{lport}-9930 or %{lport}=143 or '%s'='lmtp')
and mailbox_status='active';

This is the password_query I am using on the Dovecot IMAP proxy. This
proxy doesn't use a user_query (only the real backend Dovecot servers
do). I allow authorizations on port 143 only for Postfix. Port 143 isn't
exposed to the email clients (only 993 is used by email clients).

Anyway, checking the %{lport} allows only IMAP logins using the proper
domain name (IP or port) to allow the log in of the user.

I'm looking to find out the IMAP server name that the user configured
their email client with and make sure I only allow users to access their
mailboxes using their assigned IMAP server name.
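For reference, the port check in the quoted password_query reduces to simple arithmetic: the local port of the inet_listener, minus a fixed offset, yields the group number. A sketch (the 9930 offset comes from the query itself; the sample ports are hypothetical):

```shell
# group_id implied by a proxy listener port, per group_id=%{lport}-9930
group_for_port() {
  echo $(( $1 - 9930 ))
}

group_for_port 9953   # the listener for g23.example.com, so group 23
```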
Note that the problem I am trying to solve is if the user configures
their email client with the wrong IMAP server name (e.g. using
g2.example.com instead of g23.example.com) and later I move G23 to
another datacenter and leave G2 in the current datacenter, they will not
be able to access their emails since the G2 datacenter doesn't have
their mailboxes any more and the mailboxes for G23 are only in the G23
datacenter. My users aren't email experts and I don't want them to have
to discover that they made a typo in the original setup long after they
have forgotten how they set up the client in the first place.

To start with, the mailboxes will all be in the same datacenter, but I
want to be able to move some of the mailboxes to be geographically
closer to the users of those mailboxes (like Western users using Western
servers while Eastern users use a datacenter closer to the East coast).

Kevin

From rick at havokmon.com Wed Oct 12 18:51:13 2016
From: rick at havokmon.com (Rick Romero)
Date: Wed, 12 Oct 2016 13:51:13 -0500
Subject: Detect IMAP server domain name in Dovecot IMAP proxy
In-Reply-To:
References: <20161012130719.Horde.7LlxHlsHgT5jhRsUeuaWQg1@www.vfemail.net>
Message-ID: <20161012135113.Horde.AuHIyN42ZkJZqnv19cmN5g6@www.vfemail.net>

Quoting KT Walrus :

>> On Oct 12, 2016, at 2:07 PM, Rick Romero wrote:
>>
>> Quoting KT Walrus :
>>
>>> I'm in the process of setting up a Dovecot IMAP proxy to handle a
>>> number of IMAP server domains. At the current time, I have my users divided
>>> into 70 different groups of users (call them G1 to G70). I want each
>>> group to configure their email client to access their mailboxes at a
>>> domain name based on the group they belong to (e.g., g1.example.com,
>>> g2.example.com, ..., g70.example.com). I will only support TLS
>>> encrypted IMAP connections to the Dovecot IMAP proxy ("ssl=yes" in
>>> the inet_listener). My SSL cert has alternate names for all 70 group domain
>>> names.
>>>
>>> I want the group domain to only support users that have been assigned to
>>> the group the domain name represents. That is, a user assigned to G23
>>> would only be allowed to configure their email client for the IMAP
>>> server named g23.example.com.
>>>
>>> My solution during testing has been to have the Dovecot IMAP proxy
>>> listen on different ports: 9930-9999. I plan to purchase 70 IPs, one for
>>> each group, and redirect traffic on port 993 to the appropriate Dovecot
>>> IMAP proxy port based on the IP I assign to the group domain name in the
>>> site's DNS. The SQL for handling the IMAP login uses the port number of
>>> the inet_listener.
>>>
>>> I think this could work in production, but it will cost me extra to rent
>>> the 70 IPs and might be a pain to manage. Eventually, I would like to
>>> have over 5,000 groups so requiring an IP per group is less than ideal.
>>> I also think having Dovecot IMAP proxy have 5,000 inet_listeners might
>>> not work so well or might create too many threads/processes/ports to fit
>>> on a small proxy server.
>>>
>>> I would rather have 1 public IP for each Dovecot IMAP proxy and somehow
>>> communicate to the userdb which group domain name was configured in the
>>> email client so only the users assigned to this group can log in with
>>> that username.
>>>
>>> Anyone have any ideas?
>>
>> Do you have a SQL userdb?
>> Create a table or a 'host' field for the user.
>>
>> user_query = SELECT CONCAT(pw_name, '@', pw_domain) AS user, "89" as uid,
>> "89" as gid, host, 'Y' AS proxy_maybe, pw_dir as home, pw_dir as
>> mail_home,
>> CONCAT('maildir:', pw_dir, '/Maildir/') as mail_location FROM vpopmail
>> WHERE pw_name = '%n' AND pw_domain = '%d'
>>
>> (mine is based on qmail/vpopmail)
>>
>> Then populate 'host' for each user if you don't have any other way of
>> programmatically determining the host.
>
> This doesn't solve my problem.
> Indeed, I am doing this already:
>
> password_query = SELECT password, 'Y' as proxy,
> CONCAT_WS('@',username,domain) AS destuser, pms AS host, 'secretmaster'
> AS master, 'secretpass' AS pass FROM users WHERE username='%n' and
> domain='%d' and (group_id=%{lport}-9930 or %{lport}=143 or '%s'='lmtp')
> and mailbox_status='active';
>
> This is the password_query I am using on the Dovecot IMAP proxy. This
> proxy doesn't use a user_query (only the real backend Dovecot servers
> do). I allow authorizations on port 143 only for Postfix. Port 143 isn't
> exposed to the email clients (only 993 is used by email clients).
>
> Anyway, checking the %{lport} allows only IMAP logins using the proper
> domain name (IP or port) to allow the log in of the user.
>
> I'm looking to find out the IMAP server name that the user configured
> their email client with and make sure I only allow users to access their
> mailboxes using their assigned IMAP server name.
>
> Note that the problem I am trying to solve is if the user configures
> their email client with the wrong IMAP server name (e.g. using
> g2.example.com instead of g23.example.com) and later I move G23 to
> another datacenter and leave G2 in the current datacenter, they will not
> be able to access their emails since the G2 datacenter doesn't have
> their mailboxes any more and the mailboxes for G23 are only in the G23
> datacenter. My users aren't email experts and I don't want them to have
> to discover that they made a typo in the original setup long after they
> have forgotten how they set up the client in the first place.
>
> To start with, the mailboxes will all be in the same datacenter, but I
> want to be able to move some of the mailboxes to be geographically
> closer to the users of those mailboxes (like Western users using Western
> servers while Eastern users use a datacenter closer to the East coast).
>
> Kevin

Gotcha. I used g1.example.com and g2.example.com.
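On the SNI question raised earlier in the thread: Dovecot can match the TLS server name sent by the client using local_name configuration blocks, which at minimum lets you serve a per-group certificate. A minimal sketch, assuming a Dovecot version with SNI support (the certificate paths are hypothetical):

```
# Select a certificate based on the server name the client sent via SNI
local_name g23.example.com {
  ssl_cert = </etc/ssl/certs/g23.example.com.crt
  ssl_key = </etc/ssl/private/g23.example.com.key
}
```

Whether the matched name can also be passed into the passdb/userdb lookup (e.g. as a %{local_name} variable usable in the SQL query, analogous to %{lport}) depends on the Dovecot version, so treat that part as something to verify against your version's variable list.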
There are some DNS services that will provide unique records based on
the region of the caller - but I have no experience with those. That's
what I'd prefer to do in the long run though.

In my setup, the 'host' field still has the internal IP of the servers
physically hosting mail at g1 and g2 in order to allow the user to
connect to g1 and still be redirected to g2 ('internally' via VPN) until
they manually change the mail server name in their client. It also
allows seamless migrations. All I need to be concerned with is database
replication.

Rick

From matthew.broadhead at nbmlaw.co.uk Wed Oct 12 19:42:12 2016
From: matthew.broadhead at nbmlaw.co.uk (Matthew Broadhead)
Date: Wed, 12 Oct 2016 21:42:12 +0200
Subject: sieve sending vacation message from vmail@ns1.domain.tld
In-Reply-To: <71b362e8-3a69-076d-6376-2f3bbd39d0eb@nbmlaw.co.uk>
References: <71b362e8-3a69-076d-6376-2f3bbd39d0eb@nbmlaw.co.uk>
Message-ID: <41607083-f0da-c0fd-1fac-07c0c7cd83e9@nbmlaw.co.uk>

I read somewhere it might have something to do with a line in master.cf

dovecot unix - n n - - pipe
flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d ${recipient}

I changed it to

flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/dovecot-lda -f ${sender} -d ${user}@${nexthop} -a ${original_recipient}

but it made no difference

On 12/10/2016 13:57, Matthew Broadhead wrote:
> I have a server running centos-release-7-2.1511.el7.centos.2.10.x86_64
> with dovecot version 2.2.10. I am also using roundcube for webmail.
> When a vacation filter (reply with message) is created in roundcube it
> adds a rule to managesieve.sieve in the user's mailbox. Everything
> works fine except the reply comes from vmail at ns1.domain.tld instead of
> user at domain.tld. ns1.domain.tld is the fully qualified name of the
> server.
>
> It used to work fine on my old CentOS 6 server so I am not sure what
> has changed. Can anyone point me in the direction of where I can
> configure this behaviour?
From gkontos.mail at gmail.com Wed Oct 12 19:56:31 2016 From: gkontos.mail at gmail.com (George Kontostanos) Date: Wed, 12 Oct 2016 22:56:31 +0300 Subject: dsync replication quota2 issue In-Reply-To: References: Message-ID: On Tue, Oct 11, 2016 at 2:31 PM, George Kontostanos wrote: > Hello list, > > We are testing a configuration with 2 mail servers using dsync replication > (dovecot 2.2.25 ). Everything works fine except the quota2 which is > calculated wrong only on one server. Quota2 resides on different databases > since each server needs to update it. > > The problem: The local server always updates quota2 twice on each message > it receives. This happens only on one server. Updates run fine on the > second. > > SQL Debug: > > Query UPDATE quota2 SET bytes=bytes+2108,messages=messages+1 WHERE > username = 'user at domain.org' > Query UPDATE quota2 SET bytes=bytes+2108,messages=messages+1 WHERE > username = 'user at domain.org' > > The result on the server that runs fine > > mysql> select * from quota2; > +----------------------------+---------+----------+ > | username | bytes | messages | > +----------------------------+---------+----------+ > | > | user at domain.org | 2917126 | 17 | > > The result on the server that has the problem: > > mysql> select * from quota2; > +----------------------------+---------+----------+ > | username | bytes | messages | > +----------------------------+---------+----------+ > | > | user at domain.org | 2920317 | 19 | > > dovecot -n is the same on both: > > root at mx2:/var/log # dovecot -n > # 2.2.25 (7be1766): /usr/local/etc/dovecot/dovecot.conf > # Pigeonhole version 0.4.15 (97b3da0) > # OS: FreeBSD 10.3-RELEASE amd64 ufs > auth_mechanisms = plain login > auth_verbose = yes > default_client_limit = 2560 > default_process_limit = 512 > dict { > acl = mysql:/usr/local/etc/dovecot/dovecot-dict-shares-sql.conf.ext > quota = mysql:/usr/local/etc/dovecot/dovecot-dict-quota-sql.conf.ext > } > doveadm_password = # hidden, use -P to 
show it > doveadm_port = 12345 > log_path = /var/log/dovecot.log > mail_debug = yes > mail_home = /usr/local/vhosts/mail/%d/%n > mail_location = maildir:/usr/local/vhosts/mail/%d/%n:LAYOUT=fs > mail_max_userip_connections = 70 > mail_plugins = quota acl notify replication > mail_privileged_group = vmail > mail_shared_explicit_inbox = yes > managesieve_notify_capability = mailto > managesieve_sieve_capability = fileinto reject envelope encoded-character > vacation subaddress comparator-i;ascii-numeric relational regex imap4flags > copy include variables body enotify environment mailbox date index ihave > duplicate mime foreverypart extracttext > mbox_write_locks = fcntl > namespace { > inbox = no > list = children > location = maildir:/usr/local/vhosts/mail/%%d/%%n:LAYOUT=fs:INDEX=/ > usr/local/vhosts/indexes/%d/%n/shared/%%u:INDEXPVT=/usr/ > local/vhosts/indexes/%d/%n/shared/%%u > prefix = shared/%%d/%%n/ > separator = / > subscriptions = no > type = shared > } > namespace inbox { > inbox = yes > list = yes > location = > mailbox Drafts { > auto = subscribe > special_use = \Drafts > } > mailbox Junk { > auto = subscribe > special_use = \Junk > } > mailbox Sent { > auto = subscribe > special_use = \Sent > } > mailbox Trash { > auto = subscribe > special_use = \Trash > } > prefix = > separator = / > type = private > } > passdb { > args = /usr/local/etc/dovecot/dovecot-sql.conf.ext > driver = sql > } > plugin { > acl = vfile > acl_shared_dict = proxy::acl > mail_replica = tcp:beta.sophimail.com:12345 > quota = dict:User quota::proxy::quota > quota_rule2 = Trash:storage=+100M > sieve = /usr/local/vhosts/mail/%d/%n/.dovecot.sieve > sieve_before = /usr/local/vhosts/sieve/before.d/ > sieve_dir = /usr/local/vhosts/mail/%d/%n > sieve_global_dir = /usr/local/vhosts/sieve/%d > sieve_global_path = /usr/local/vhosts/sieve/%d/default.sieve > } > protocols = imap lmtp sieve sieve > service aggregator { > fifo_listener replication-notify-fifo { > mode = 0666 > user = vmail > } > 
unix_listener replication-notify { > mode = 0666 > user = vmail > } > } > service auth-worker { > user = vmail > } > service auth { > unix_listener /var/spool/postfix/private/auth { > group = postfix > mode = 0666 > user = postfix > } > unix_listener auth-userdb { > mode = 0600 > user = vmail > } > user = dovecot > } > service config { > unix_listener config { > user = vmail > } > } > service dict { > unix_listener dict { > mode = 0600 > user = vmail > } > } > service doveadm { > inet_listener { > port = 12345 > } > user = vmail > } > service imap-login { > inet_listener imap { > port = 143 > } > } > service lmtp { > unix_listener /var/spool/postfix/private/dovecot-lmtp { > group = postfix > mode = 0600 > user = postfix > } > } > service managesieve-login { > inet_listener sieve { > port = 4190 > } > process_min_avail = 0 > service_count = 1 > vsz_limit = 64 M > } > service replicator { > unix_listener replicator-doveadm { > mode = 0666 > } > } > ssl_cert = ssl_key = userdb { > args = /usr/local/etc/dovecot/dovecot-sql.conf.ext > driver = sql > } > protocol lmtp { > mail_plugins = quota acl notify replication sieve notify replication > } > protocol imap { > imap_client_workarounds = tb-extra-mailbox-sep > mail_plugins = quota acl notify replication imap_quota imap_acl notify > replication > } > protocol lda { > mail_plugins = quota acl notify replication sieve acl > postmaster_address = root > } > local 192.168.3.6 { > protocol imap { > ssl_cert = ssl_key = } > } > > dovecot-dict-quota-sql.conf.ext: > > connect = host=127.0.0.1 dbname=quota user=mailadmin password=********** > map { > pattern = priv/quota/storage > table = quota2 > username_field = username > value_field = bytes > } > map { > pattern = priv/quota/messages > table = quota2 > username_field = username > value_field = messages > } > > Sorry for the lengthy email, any help is very much appreciated. > > > -- > George Kontostanos > --- > Hi, is there anything else I should need to post from my config? 
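For reference, the quota2 dict maps quoted in the config above (priv/quota/storage -> bytes, priv/quota/messages -> messages, keyed by username) imply a table shaped roughly like this; a sketch assuming MySQL, not taken from the poster's actual schema:

```sql
-- Hypothetical table matching the dovecot-dict-quota-sql.conf.ext maps
CREATE TABLE quota2 (
  username VARCHAR(255) NOT NULL PRIMARY KEY,
  bytes    BIGINT NOT NULL DEFAULT 0,
  messages INT NOT NULL DEFAULT 0
);
```

With this layout, each delivery should produce exactly one UPDATE per map, which is what makes the doubled UPDATEs in the debug log above stand out.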
Apologies for insisting here but I have not found any solution yet.
Thanks

From jtam.home at gmail.com Wed Oct 12 21:11:44 2016
From: jtam.home at gmail.com (Joseph Tam)
Date: Wed, 12 Oct 2016 14:11:44 -0700 (PDT)
Subject: Outlook 2010 woes
In-Reply-To:
References:
Message-ID:

> Old server:
> * Ubuntu 10.04.4 LTS
> * Dovecot 2.1.13
> * Maildir++
> * Local auth via passwd/shadow files
>
> New server:
> * Debian GNU/Linux 8.6
> * Dovecot 2.2.13
> * Maildir++
> * Quotas enabled
> * LDAP
>
> Basically what's happening is that users are seeing large delays when
> navigating between different IMAP folders. So, for example, user "X" is
> sitting idle in their INBOX.

Rebuilding caches? Do you get the same delay when going back to the
folder after the initial delay?

Joseph Tam

From greminn at gmail.com Wed Oct 12 21:29:40 2016
From: greminn at gmail.com (Simon)
Date: Wed, 12 Oct 2016 21:29:40 +0000
Subject: Disappearing message when move to Inbox subfolder
Message-ID:

We are using dovecot 2.0.9 on Centos 6.5 and having the following
issues for one single user/single mailbox. When an Outlook (2016) user
creates a folder under their Inbox, it is not created in their mailbox
(checking with webmail the folder is not there). If they drag a message
to that folder in Outlook, then it disappears. I've searched their
mailbox using doveadm and it's simply gone. I've tested with webmail
doing essentially the same thing and it works with no issues.
Any ideas here would be most well received :)

Thanks in advance, Simon

From kremels at kreme.com Thu Oct 13 00:03:30 2016
From: kremels at kreme.com (@lbutlr)
Date: Wed, 12 Oct 2016 18:03:30 -0600
Subject: Auto-archiving
In-Reply-To: <19414568.577.1475853490931@appsuite-dev.open-xchange.com>
References: <444220129.2635.1475733049859@appsuite-dev.open-xchange.com>
<7F43A200-346F-4068-B6B6-E9A9197CCA67@kreme.com>
<19414568.577.1475853490931@appsuite-dev.open-xchange.com>
Message-ID: <8202583A-C07E-40CE-BB34-95AC93EA02B2@kreme.com>

On 07 Oct 2016, at 09:18, Aki Tuomi wrote:
> doveadm move -u jane Archive ALL BEFORE 30d

That failed, but

doveadm move -u jane Archive ALL mailbox '*' BEFORE 30d

appears to be doing something.

From arnaud.gaboury at gmail.com Thu Oct 13 06:41:06 2016
From: arnaud.gaboury at gmail.com (arnaud gaboury)
Date: Thu, 13 Oct 2016 06:41:06 +0000
Subject: SSL error
Message-ID:

I run dovecot + postfix as my email server. Everything is working as
expected, but I see an error in the dovecot logs:

lmtp(7331): Error: SSL context initialization failed, disabling SSL:
ENGINE_init(dynamic) failed

Dovecot is running and emails are OK. I wonder why this error and how I
can fix it? I see it is a SSL issue but no idea in which direction to
look. Thank you for help

From ml+dovecot at valo.at Thu Oct 13 06:47:52 2016
From: ml+dovecot at valo.at (Christian Kivalo)
Date: Thu, 13 Oct 2016 08:47:52 +0200
Subject: SSL error
In-Reply-To:
References:
Message-ID: <6DAA2C7D-2FDD-4524-A255-1E6108C3F08F@valo.at>

Am 13. Oktober 2016 08:41:06 MESZ, schrieb arnaud gaboury :
>I run dovecot + postfix as my email server. Everything is working as
>expected, but I see an error in the dovecot logs:
>
>lmtp(7331): Error: SSL context initialization failed, disabling SSL:
>ENGINE_init(dynamic) failed
>
>Dovecot is running and emails are OK. I wonder why this error and how I
>can fix it? I see it is a SSL issue but no idea in which direction to look.
>Thank you for help Please post the complete log lines and the output of dovecot -n -- Christian Kivalo From Rik.Theys at esat.kuleuven.be Thu Oct 13 07:47:36 2016 From: Rik.Theys at esat.kuleuven.be (Rik Theys) Date: Thu, 13 Oct 2016 09:47:36 +0200 Subject: Strange subscriptions added by dsync backup Message-ID: <0ab70f01-3d2d-df26-dc1f-c2e4ae6b7d5c@esat.kuleuven.be> Hi, We're in the process of migrating our dovecot 1.x mail server to a Dovecot 2.2.25 server. During the migration I'm moving from mbox storage to mdbox. I use the following method to do a one-way sync from our current mail server to our new mail server (command executed on the new server): doveadm -v -o imapc_host=oldserver \ -o imapc_user=$u \ -o imapc_master_user=$masteruser \ -o imapc_password="$masterpass" \ -o imapc_port=993 -o imapc_ssl=imaps \ -o ssl_client_ca_file=/etc/pki/tls/certs/ca-bundle.crt \ -o imapc_ssl_verify=yes \ -o mail_fsync=never \ -o mail_prefetch_count=20 \ backup -R -u $u imapc: $u is replaced by the username I'm migrating. The dsync command runs and exits with code 0 (all OK). However, when I look at the subscriptions file on the new server, there seem to be additional subscriptions that are not in the subscriptions file on the old server: 7b0d681945d0fc5711560000ffff90ca 7c0d681945d0fc5711560000ffff90ca 7d0d681945d0fc5711560000ffff90ca 7e0d681945d0fc5711560000ffff90ca 7f0d681945d0fc5711560000ffff90ca 800d681945d0fc5711560000ffff90ca Where do these come from? Is something wrong with the migration? 
It also seems that subscriptions for IMAP folders that only contain subfolders[1] are no longer present in the new subscriptions file: # diff -u sub-old.sorted sub-new.sorted --- sub-old.sorted 2016-10-11 13:42:44.175070610 +0200 +++ sub-new.sorted 2016-10-11 13:47:53.973888462 +0200 @@ -1,26 +1,26 @@ -Archive/Administration/ +7b0d681945d0fc5711560000ffff90ca +7c0d681945d0fc5711560000ffff90ca +7d0d681945d0fc5711560000ffff90ca +7e0d681945d0fc5711560000ffff90ca +7f0d681945d0fc5711560000ffff90ca +800d681945d0fc5711560000ffff90ca Archive/Administration/Conferences Archive/Announcements -Archive/Education/ Archive/Education/E02N3A Archive/Education/I0D51A Archive/Politics -Archive/Research/ -Archive/Research/FET/ Archive/Research/FET/VPH - Virtual Physiological Human Archive/Research/Grants/ICON IBBT Call 2011 Archive/Research/Grants/Marie Curie ITN 2011 Archive/Research/Grants/Odysseus Archive/Research/Grants/SymBioSysII/Funding Archive/Research/Grants/SymBioSysII/JobApplications -Archive/Research/Manuscripts/ Archive/Research/Manuscripts/ruby-ensembl-api Archive/Research/Projects/GUNZ Archive/Research/Projects/MIQAS Archive/Research/Projects/MODY Archive/Research/Projects/ruby-ensembl-api Archives -Archive/Service/ Archive/Service/EditorORC Archive/Service/Reviewing Deleted Messages Is this expected behaviour? Regards, Rik [1] On the old server a folder can only contain either messages or subfolders, not both at the same time. -- Rik Theys System Engineer KU Leuven - Dept. 
Elektrotechniek (ESAT) Kasteelpark Arenberg 10 bus 2440 - B-3001 Leuven-Heverlee +32(0)16/32.11.07 ---------------------------------------------------------------- <> From arnaud.gaboury at gmail.com Thu Oct 13 08:12:28 2016 From: arnaud.gaboury at gmail.com (arnaud gaboury) Date: Thu, 13 Oct 2016 08:12:28 +0000 Subject: SSL error In-Reply-To: <6DAA2C7D-2FDD-4524-A255-1E6108C3F08F@valo.at> References: <6DAA2C7D-2FDD-4524-A255-1E6108C3F08F@valo.at> Message-ID: On Thu, Oct 13, 2016 at 8:48 AM Christian Kivalo wrote: > > > > > Am 13. Oktober 2016 08:41:06 MESZ, schrieb arnaud gaboury < > arnaud.gaboury at gmail.com>: > > >I run dovecot + postfix as my email server. Everything is working as > > >expected, but I see an error in the dovecot logs: > > > > > >lmtp(7331): Error: SSL context initialization failed, disabling SSL: > > >ENGINE_init(dynamic) failed > > > > > >Dovecot is running and emails are OK. I wonder why this error and how I > > >can > > >fix it? I see it is a SSL issue but no idea in which direction to look. > > >Thank you for help > > Please post the complete log lines and the output of dovecot -n > > $ journalctl --unit=dovecot Oct 13 08:20:20 thetradinghall.com systemd[1]: Started Dovecot IMAP/POP3 email server. Oct 13 08:20:20 thetradinghall.com dovecot[7333]: lmtp(7331): Error: SSL context initialization failed, disabling SSL: ENGINE_init(dynamic) failed Oct 13 08:20:20 thetradinghall.com dovecot[7333]: lmtp(7337): Error: SSL context initialization failed, disabling SSL: ENGINE_init(dynamic) failed Oct 13 08:20:20 thetradinghall.com dovecot[7333]: lmtp(7338): Error: SSL context initialization failed, disabling SSL: ENGINE_init(dynamic) failed ........................ 
$ dovecot -n # 2.2.25 (7be1766): /etc/dovecot/dovecot.conf # OS: Linux 4.7.4-1-hortensia_docker x86_64 Fedora release 24 (Twenty Four) auth_cache_size = 10 M auth_debug = yes auth_debug_passwords = yes auth_mechanisms = plain login auth_verbose = yes auth_verbose_passwords = plain doveadm_socket_path = /run/dovecot/doveadm-server imap_id_log = * info_log_path = /storage/log/dovecot/dovecot-info.log mail_debug = yes mail_gid = 5000 mail_location = maildir:~:LAYOUT=fs mail_server_admin = mailto:admin at thetradinghall.com mail_uid = 5000 mailbox_list_index = yes maildir_very_dirty_syncs = yes namespace inbox { inbox = yes location = mailbox Archive { auto = create special_use = \Archive } mailbox Drafts { auto = create special_use = \Drafts } mailbox Junk { auto = create special_use = \Junk } mailbox Sent { auto = create special_use = \Sent } prefix = separator = / type = private } passdb { args = /etc/dovecot/dovecot-sql.conf.ext driver = sql } protocols = imap lmtp service auth-worker { user = vmail } service auth { unix_listener /var/spool/postfix/private/auth { group = postfix mode = 0666 user = postfix } unix_listener auth-userdb { group = postfix mode = 0600 user = postfix } user = root } service imap-login { inet_listener imaps { port = 993 ssl = yes } } service lmtp { process_min_avail = 10 unix_listener /var/spool/postfix/private/dovecot-lmtp { group = postfix mode = 0600 user = postfix } } ssl = required ssl_cert = > > -- > > Christian Kivalo > > From ml+dovecot at valo.at Thu Oct 13 08:30:26 2016 From: ml+dovecot at valo.at (Christian Kivalo) Date: Thu, 13 Oct 2016 10:30:26 +0200 Subject: SSL error In-Reply-To: References: <6DAA2C7D-2FDD-4524-A255-1E6108C3F08F@valo.at> Message-ID: On 2016-10-13 10:12, arnaud gaboury wrote: > On Thu, Oct 13, 2016 at 8:48 AM Christian Kivalo > wrote: > >> >> >> >> >> Am 13. Oktober 2016 08:41:06 MESZ, schrieb arnaud gaboury < >> arnaud.gaboury at gmail.com>: >> >> >I run dovecot + postfix as my email server. 
Everything is working as >> >> >expected, but I see an error in the dovecot logs: >> >> > >> >> >lmtp(7331): Error: SSL context initialization failed, disabling SSL: >> >> >ENGINE_init(dynamic) failed >> >> > >> >> >Dovecot is running and emails are OK. I wonder why this error and how I >> >> >can >> >> >fix it? I see it is a SSL issue but no idea in which direction to look. >> >> >Thank you for help >> >> Please post the complete log lines and the output of dovecot -n >> >> > $ journalctl --unit=dovecot > Oct 13 08:20:20 thetradinghall.com systemd[1]: Started Dovecot > IMAP/POP3 > email server. > Oct 13 08:20:20 thetradinghall.com dovecot[7333]: lmtp(7331): Error: > SSL > context initialization failed, disabling SSL: ENGINE_init(dynamic) > failed > Oct 13 08:20:20 thetradinghall.com dovecot[7333]: lmtp(7337): Error: > SSL > context initialization failed, disabling SSL: ENGINE_init(dynamic) > failed > Oct 13 08:20:20 thetradinghall.com dovecot[7333]: lmtp(7338): Error: > SSL > context initialization failed, disabling SSL: ENGINE_init(dynamic) > failed > ........................ 
> > > $ dovecot -n > # 2.2.25 (7be1766): /etc/dovecot/dovecot.conf > # OS: Linux 4.7.4-1-hortensia_docker x86_64 Fedora release 24 (Twenty > Four) > auth_cache_size = 10 M > auth_debug = yes > auth_debug_passwords = yes > auth_mechanisms = plain login > auth_verbose = yes > auth_verbose_passwords = plain > doveadm_socket_path = /run/dovecot/doveadm-server > imap_id_log = * > info_log_path = /storage/log/dovecot/dovecot-info.log > mail_debug = yes > mail_gid = 5000 > mail_location = maildir:~:LAYOUT=fs > mail_server_admin = mailto:admin at thetradinghall.com > mail_uid = 5000 > mailbox_list_index = yes > maildir_very_dirty_syncs = yes > namespace inbox { > inbox = yes > location = > mailbox Archive { > auto = create > special_use = \Archive > } > mailbox Drafts { > auto = create > special_use = \Drafts > } > mailbox Junk { > auto = create > special_use = \Junk > } > mailbox Sent { > auto = create > special_use = \Sent > } > prefix = > separator = / > type = private > } > passdb { > args = /etc/dovecot/dovecot-sql.conf.ext > driver = sql > } > protocols = imap lmtp > service auth-worker { > user = vmail > } > service auth { > unix_listener /var/spool/postfix/private/auth { > group = postfix > mode = 0666 > user = postfix > } > unix_listener auth-userdb { > group = postfix > mode = 0600 > user = postfix > } > user = root > } > service imap-login { > inet_listener imaps { > port = 993 > ssl = yes > } > } > service lmtp { > process_min_avail = 10 > unix_listener /var/spool/postfix/private/dovecot-lmtp { > group = postfix > mode = 0600 > user = postfix > } > } > ssl = required > ssl_cert = ssl_crypto_device = dynamic ^^ does it work when you comment/remove this setting? from my 10-ssl.conf # SSL crypto device to use, for valid values run "openssl engine" #ssl_crypto_device = by default ssl_crypto_device is not set. 
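Christian's suggestion amounts to leaving the crypto-engine setting at its default in 10-ssl.conf; a sketch (the comment text follows the stock config he quotes):

```
# SSL crypto device to use, for valid values run "openssl engine".
# Leave unset unless a matching OpenSSL engine is actually installed;
# "ssl_crypto_device = dynamic" with no such engine produces the
# "ENGINE_init(dynamic) failed" error seen in the log above.
#ssl_crypto_device =
```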
> ssl_key = ssl_protocols = !SSLv2 !SSLv3 > userdb { > args = uid=5000 gid=5000 home=/storage/vmail/%d/%n > driver = static > } > verbose_ssl = yes > protocol lmtp { > hostname = thetradinghall.com > postmaster_address = postmaster at thetradinghall.com > } > > -- Christian Kivalo From arnaud.gaboury at gmail.com Thu Oct 13 08:48:19 2016 From: arnaud.gaboury at gmail.com (arnaud gaboury) Date: Thu, 13 Oct 2016 08:48:19 +0000 Subject: SSL error In-Reply-To: References: <6DAA2C7D-2FDD-4524-A255-1E6108C3F08F@valo.at> Message-ID: On Thu, Oct 13, 2016 at 10:30 AM Christian Kivalo wrote: > > > > > On 2016-10-13 10:12, arnaud gaboury wrote: > > > On Thu, Oct 13, 2016 at 8:48 AM Christian Kivalo > > > wrote: > > > > > >> > > >> > > >> > > >> > > >> Am 13. Oktober 2016 08:41:06 MESZ, schrieb arnaud gaboury < > > >> arnaud.gaboury at gmail.com>: > > >> > > >> >I run dovecot + postfix as my email server. Everything is working as > > >> > > >> >expected, but I see an error in the dovecot logs: > > >> > > >> > > > >> > > >> >lmtp(7331): Error: SSL context initialization failed, disabling SSL: > > >> > > >> >ENGINE_init(dynamic) failed > > >> > > >> > > > >> > > >> >Dovecot is running and emails are OK. I wonder why this error and how I > > >> > > >> >can > > >> > > >> >fix it? I see it is a SSL issue but no idea in which direction to look. > > >> > > >> >Thank you for help > > >> > > >> Please post the complete log lines and the output of dovecot -n > > >> > > >> > > > $ journalctl --unit=dovecot > > > Oct 13 08:20:20 thetradinghall.com systemd[1]: Started Dovecot > > > IMAP/POP3 > > > email server. 
> > > Oct 13 08:20:20 thetradinghall.com dovecot[7333]: lmtp(7331): Error: SSL context initialization failed, disabling SSL: ENGINE_init(dynamic) failed
> > > Oct 13 08:20:20 thetradinghall.com dovecot[7333]: lmtp(7337): Error: SSL context initialization failed, disabling SSL: ENGINE_init(dynamic) failed
> > > Oct 13 08:20:20 thetradinghall.com dovecot[7333]: lmtp(7338): Error: SSL context initialization failed, disabling SSL: ENGINE_init(dynamic) failed
> > > ........................
> > >
> > > $ dovecot -n
> > > # 2.2.25 (7be1766): /etc/dovecot/dovecot.conf
> > > # OS: Linux 4.7.4-1-hortensia_docker x86_64 Fedora release 24 (Twenty Four)
> > > auth_cache_size = 10 M
> > > auth_debug = yes
> > > auth_debug_passwords = yes
> > > auth_mechanisms = plain login
> > > auth_verbose = yes
> > > auth_verbose_passwords = plain
> > > doveadm_socket_path = /run/dovecot/doveadm-server
> > > imap_id_log = *
> > > info_log_path = /storage/log/dovecot/dovecot-info.log
> > > mail_debug = yes
> > > mail_gid = 5000
> > > mail_location = maildir:~:LAYOUT=fs
> > > mail_server_admin = mailto:admin at thetradinghall.com
> > > mail_uid = 5000
> > > mailbox_list_index = yes
> > > maildir_very_dirty_syncs = yes
> > > namespace inbox {
> > >   inbox = yes
> > >   location =
> > >   mailbox Archive {
> > >     auto = create
> > >     special_use = \Archive
> > >   }
> > >   mailbox Drafts {
> > >     auto = create
> > >     special_use = \Drafts
> > >   }
> > >   mailbox Junk {
> > >     auto = create
> > >     special_use = \Junk
> > >   }
> > >   mailbox Sent {
> > >     auto = create
> > >     special_use = \Sent
> > >   }
> > >   prefix =
> > >   separator = /
> > >   type = private
> > > }
> > > passdb {
> > >   args = /etc/dovecot/dovecot-sql.conf.ext
> > >   driver = sql
> > > }
> > > protocols = imap lmtp
> > > service auth-worker {
> > >   user = vmail
> > > }
> > > service auth {
> > >   unix_listener /var/spool/postfix/private/auth {
> > >     group = postfix
> > >     mode = 0666
> > >     user = postfix
> > >   }
> > >   unix_listener auth-userdb {
> > >     group = postfix
> > >     mode = 0600
> > >     user = postfix
> > >   }
> > >   user = root
> > > }
> > > service imap-login {
> > >   inet_listener imaps {
> > >     port = 993
> > >     ssl = yes
> > >   }
> > > }
> > > service lmtp {
> > >   process_min_avail = 10
> > >   unix_listener /var/spool/postfix/private/dovecot-lmtp {
> > >     group = postfix
> > >     mode = 0600
> > >     user = postfix
> > >   }
> > > }
> > > ssl = required
> > > ssl_cert =
> > > ssl_crypto_device = dynamic
> >
> > ^^ does it work when you comment/remove this setting?
> >
> > from my 10-ssl.conf
> > # SSL crypto device to use, for valid values run "openssl engine"
> > #ssl_crypto_device =
> >
> > by default ssl_crypto_device is not set.

the line was uncommented, so I commented it. Now :-)

--------------------------------------------

● dovecot.service - Dovecot IMAP/POP3 email server
   Loaded: loaded (/usr/lib/systemd/system/dovecot.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2016-10-13 10:46:27 CEST; 6s ago
     Docs: man:dovecot(1)
           http://wiki2.dovecot.org/
  Process: 9793 ExecStop=/usr/bin/doveadm stop (code=exited, status=0/SUCCESS)
  Process: 9806 ExecStart=/usr/sbin/dovecot (code=exited, status=0/SUCCESS)
  Process: 9804 ExecStartPre=/usr/libexec/dovecot/prestartscript (code=exited, status=0/SUCCESS)
 Main PID: 9807 (dovecot)
   CGroup: /machine.slice/systemd-nspawn at poppy.service/system.slice/dovecot.service
           ├─9807 /usr/sbin/dovecot
           ├─9808 dovecot/lmtp
           ├─9809 dovecot/anvil
           ├─9810 dovecot/log
           ├─9811 dovecot/ssl-params
           ├─9812 dovecot/lmtp
           ├─9813 dovecot/lmtp
           ├─9814 dovecot/lmtp
           ├─9815 dovecot/lmtp
           ├─9816 dovecot/lmtp
           ├─9817 dovecot/lmtp
           ├─9818 dovecot/lmtp
           ├─9819 dovecot/lmtp
           ├─9820 dovecot/lmtp
           └─9821 dovecot/config

Oct 13 10:46:27 thetradinghall.com systemd[1]: Starting Dovecot IMAP/POP3 email server...
Oct 13 10:46:27 thetradinghall.com systemd[1]: dovecot.service: PID file /var/run/dovecot/master.pid not r
Oct 13 10:46:27 thetradinghall.com systemd[1]: Started Dovecot IMAP/POP3 email server.
-------------------------------------------------------

Thank you so much for your precious help.

> >
> > ssl_key =
> > ssl_protocols = !SSLv2 !SSLv3
> > userdb {
> >   args = uid=5000 gid=5000 home=/storage/vmail/%d/%n
> >   driver = static
> > }
> > verbose_ssl = yes
> > protocol lmtp {
> >   hostname = thetradinghall.com
> >   postmaster_address = postmaster at thetradinghall.com
> > }
> >
> > --
> > Christian Kivalo

From anic297 at mac.com Thu Oct 13 09:18:51 2016
From: anic297 at mac.com (Marnaud)
Date: Thu, 13 Oct 2016 09:18:51 +0000 (GMT)
Subject: First steps in Dovecot; IMAP not working
Message-ID: <7905b4a0-8428-41d9-bf8b-3cbcedfe0874@me.com>

Hello,

I'm new to Dovecot and am having trouble making it work. I'm trying to use Outlook and Apple's Mail as the mail clients. Outlook says it can't establish a secured connection to the server (for the IMAP protocol). I'm guessing sending e-mails works, but I can't check.

This is my current configuration (using dovecot -n):

# 2.2.13: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.32-042stab116.1 x86_64 Debian 8.6
mail_location = mbox:~/mail:INBOX=/var/mail/%u
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
    special_use = \Drafts
  }
  mailbox Junk {
    special_use = \Junk
  }
  mailbox Sent {
    special_use = \Sent
  }
  mailbox "Sent Messages" {
    special_use = \Sent
  }
  mailbox Trash {
    special_use = \Trash
  }
  prefix =
}
passdb {
  driver = pam
}
passdb {
  driver = pam
}
protocols = " imap"
service auth {
  unix_listener /var/spool/postfix/private/auth {
    group = postfix
    mode = 0666
    user = postfix
  }
}
service imap-login {
  inet_listener imaps {
    port = 993
    ssl = yes
  }
}
ssl = no
ssl_cert =
References: <7905b4a0-8428-41d9-bf8b-3cbcedfe0874@me.com>
Message-ID: <8d5b22dd-a357-e2e0-69a1-d48ece0f7875@dovecot.fi>

doveconf -n shows what's there. if you have ssl=no somewhere else in the config after you set it to required, it gets overwritten.

Aki

On 13.10.2016 12:18, Marnaud wrote:
> Hello,
>
> I'm new to Dovecot and am having trouble making it work. I'm
> trying to use Outlook and Apple's Mail as the mail clients. Outlook
> says it can't establish a secured connection to the server (for the
> IMAP protocol). I'm guessing sending e-mails works, but I can't check.
>
> This is my current configuration (using dovecot -n):
>
> # 2.2.13: /etc/dovecot/dovecot.conf
> # OS: Linux 2.6.32-042stab116.1 x86_64 Debian 8.6
> mail_location = mbox:~/mail:INBOX=/var/mail/%u
> namespace inbox {
>   inbox = yes
>   location =
>   mailbox Drafts {
>     special_use = \Drafts
>   }
>   mailbox Junk {
>     special_use = \Junk
>   }
>   mailbox Sent {
>     special_use = \Sent
>   }
>   mailbox "Sent Messages" {
>     special_use = \Sent
>   }
>   mailbox Trash {
>     special_use = \Trash
>   }
>   prefix =
> }
> passdb {
>   driver = pam
> }
> passdb {
>   driver = pam
> }
> protocols = " imap"
> service auth {
>   unix_listener /var/spool/postfix/private/auth {
>     group = postfix
>     mode = 0666
>     user = postfix
>   }
> }
> service imap-login {
>   inet_listener imaps {
>     port = 993
>     ssl = yes
>   }
> }
> ssl = no
> ssl_cert =
> ssl_key =
> userdb {
>   driver = passwd
> }
> userdb {
>   driver = passwd
> }
>
> I find it abnormal that I'm seeing "ssl = no" in this configuration despite
> the fact that I have "ssl = required" in the
> /etc/dovecot/conf.d/10-ssl.conf file, but I'm new to this...
>
> I have looked around the web; finally, I'm asking here, hoping it's
> the correct place to ask.
>
> Arnaud

From aki.tuomi at dovecot.fi Thu Oct 13 09:57:41 2016
From: aki.tuomi at dovecot.fi (Aki Tuomi)
Date: Thu, 13 Oct 2016 12:57:41 +0300
Subject: First steps in Dovecot; IMAP not working
In-Reply-To:
References:
Message-ID:

On 13.10.2016 12:42, Marnaud wrote:
>
> "Aki Tuomi" wrote:
>
>> doveconf -n shows what's there. if you have ssl=no somewhere else in the
>> config after you set it to required, it gets overwritten.
>>
>> Aki
>
> Thanks, Aki.
> It means I have to open each conf file (e.g. using nano) and search
> for ssl=no; am I right, or is there a specific file to check?

I see you replied to me only, please keep your replies on-list.

Try

grep -r ssl.*no /etc/dovecot

Aki

From anic297 at mac.com Thu Oct 13 10:57:24 2016
From: anic297 at mac.com (Moi)
Date: Thu, 13 Oct 2016 12:57:24 +0200
Subject: First steps in Dovecot; IMAP not working
In-Reply-To:
References:
Message-ID: <003301d22540$95d40340$c17c09c0$@mac.com>

I think I found the culprit. I had backed files up using cp (e.g. 10-ssl.conf to 10-ssl.default.conf) so that if I made mistakes, I could revert easily. It looks like all files in the conf.d folder are included, therefore my backup files overwrote the standard ones.

Now, when I try to send mails, Outlook tells me it can't save the message in the "Sent" folder; the mail server denies saving there (I'm translating from French, sorry). The error code is 0x80040119. At least, I don't get the same set of errors.

Thanks, Aki, for your previous answer.

From webert.boss at gmail.com Thu Oct 13 11:28:21 2016
From: webert.boss at gmail.com (Webert de Souza Lima)
Date: Thu, 13 Oct 2016 11:28:21 +0000
Subject: fix SIS attachment errors
In-Reply-To:
References:
Message-ID:

To whom it may interest:

With the help of Aki Tuomi I've found a way to remove such errors and move forward, in a way that could be automated. As this might be a problem for others and there seems to be no discussion about it, I'll share it with you.
What I did, essentially, was to write a shell script that does the following, per user:

- read all the mailboxes with `doveadm fetch -u $username text all` and redirect errors to a file
- identify all missing attachments' paths from the file created previously and try to create a hardlink to each. Any file with the same hash (before `-`) is good.
- identify all mailboxes and uids from messages that are still broken (the same error file created before should have this information), fetch them, and save them elsewhere.
- after fetching and saving, expunge such messages.
- use doveadm save to put the messages back. They'll be without the attachments but also without any errors.

There are some gotchas to do the above, and to automate that, so I'll be happy to help if anyone needs.

Thank you.

On Wed, Oct 5, 2016 at 3:59 PM Webert de Souza Lima wrote:

Hi,

I've sent some e-mails about this before, but since there were no answers I'll write it differently, with different information.

I'm using SIS (Single Instance Attachment Storage). For any reason that is not relevant now, many attachments are missing and the messages can't be fetched:

Error: read(attachments-connector(zlib(/dovecot/mdbox/bar.example/foo/storage/m.1))) failed: read(/dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f00009331bd36[base64:19 b/l]) failed: open(/dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f00009331bd36) failed: No such file or directory

In this specific case, the /dovecot/attach/bar.example/23/ae/ directory doesn't exist. In other cases, just one file is missing, so I would assume the hardlink could be recreated and it would work.
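The path-extraction step described above can be sketched like this (a minimal illustration rather than the actual script from this thread; the log format follows the error lines quoted in this message):

```python
import re

# Minimal sketch (not the script from this thread): pull the missing SIS
# attachment paths out of a doveadm error log and group them by content hash
# (the part of the filename before "-"), since any other file with the same
# hash can serve as a hardlink source.
OPEN_ERR = re.compile(r"failed: open\(([^)]+)\) failed: No such file or directory")

def missing_attachments(log_text):
    """Map content-hash -> list of missing attachment paths found in the log."""
    by_hash = {}
    for path in OPEN_ERR.findall(log_text):
        fname = path.rsplit("/", 1)[-1]
        content_hash = fname.split("-", 1)[0]
        by_hash.setdefault(content_hash, []).append(path)
    return by_hash

if __name__ == "__main__":
    sample = ("Error: read(...) failed: open(/dovecot/attach/bar.example/23/ae/"
              "23aed008-fb0b6a21) failed: No such file or directory")
    print(missing_attachments(sample))
```

Grouping by the hash prefix is what makes the "any file with the same hash is good" hardlink step mechanical: every candidate source for one missing file lands under the same key.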
If I create the missing file (with touch or whatever), I get the following errors:

Error: read(/dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f00009331bd36[base64:19 b/l]) failed: Stream is smaller than expected (0 < 483065)
Error: read(attachments-connector(zlib(/dovecot/mdbox/bar.example/foo/storage/m.1))) failed: read(/dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f00009331bd36[base64:19 b/l]) failed: Stream is smaller than expected (0 < 483065)
Error: fetch(body) failed for box=INBOX uid=15: BUG: Unknown internal error

If I try to fill the file with the amount of bytes it complains about, with the following command:

$ dd if=/dev/zero of=/dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f00009331bd36 bs=1 count=483065

then I get the following error:

Error: read(/dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f00009331bd36[base64:19 b/l]) failed: Stream is larger than expected (483928 > 483065, eof=0)
Error: read(attachments-connector(zlib(/srv/dovecot/mdbox/bar.example/foo/storage/m.1))) failed: read(//dovecot/attach/bar.example/23/ae/23aed008c1f32f048afd38d9aae68c5aeae2d17a9170e28c60c75a02ec199ef4e7079cd92988ad857bd6e12cd24cdd7619bd29f26edeec842a6911bb14a86944-fb0b6a214dfa63573c1f00009331bd36[base64:19 b/l]) failed: Stream is larger than expected (483928 > 483065, eof=0)
Error: fetch(body) failed for box=INBOX uid=15: BUG: Unknown internal error

Based on this I have a few questions:

1. Is there a way, or a tool, to scan all mailboxes to get all the messages that have compromised attachments?
2. Is there a way to "fix" the missing files (even if it means creating fake files or removing the attachment information from the messages)?
3. What I need is to migrate these boxes using doveadm backup/sync, which fails when these errors occur. Is it possible to ignore them, or is there another tool that would do it?

Thank you.

Webert Lima
Belo Horizonte, Brasil

From arekm at maven.pl Thu Oct 13 13:09:10 2016
From: arekm at maven.pl (Arkadiusz =?utf-8?q?Mi=C5=9Bkiewicz?=)
Date: Thu, 13 Oct 2016 15:09:10 +0200
Subject: dovecot 2.2.25 BUG: local_name is not matching correctly
Message-ID: <201610131509.10597.arekm@maven.pl>

Bug report:

When using dovecot 2.2.25 SNI capability it doesn't always match the proper vhost config. For example, if we have such config:

local_name imap.example.com {
  ssl_cert =
References: <201610131509.10597.arekm@maven.pl>
Message-ID: <071d4b6c-e1e3-4f14-1296-4bcb5aa231f8@dovecot.fi>

On 13.10.2016 16:09, Arkadiusz Miśkiewicz wrote:
> Bug report:
>
> When using dovecot 2.2.25 SNI capability it doesn't always match the proper vhost
> config. For example, if we have such config:
>
> local_name imap.example.com {
>   ssl_cert =
>   ssl_key =
> }
>
> but the imap client sends a mixed-case SNI hostname like "IMAP.example.com", then
> dovecot won't match the above local_name imap.example.com config section.
>
> IMO dovecot should do a case-insensitive comparison. Case-sensitive matching for
> DNS names makes little sense.
>

Hi!

Thank you for reporting this, we'll look into it.
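The case-insensitive matching requested in the bug report amounts to normalizing both sides of the comparison before looking up the section, since DNS names compare case-insensitively. A hedged sketch of that lookup (illustrative only, not Dovecot's actual code; the section contents are made-up values):

```python
# Hypothetical illustration (not Dovecot source): an SNI hostname lookup that
# ignores case, as the bug report asks for. Plain ASCII lowercasing is enough
# for DNS labels, which are ASCII by definition.
def find_local_name(config_sections, sni_name):
    """Return the config section whose local_name matches sni_name, ignoring case."""
    wanted = sni_name.lower()
    for local_name, section in config_sections.items():
        if local_name.lower() == wanted:
            return section
    return None

sections = {"imap.example.com": {"ssl_cert": "/etc/ssl/imap.pem"}}
print(find_local_name(sections, "IMAP.example.com"))  # matches despite the case
```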
Aki Tuomi
Dovecot oy

From bryan at shout.net Thu Oct 13 13:36:23 2016
From: bryan at shout.net (Bryan Holloway)
Date: Thu, 13 Oct 2016 08:36:23 -0500
Subject: Outlook 2010 woes
In-Reply-To:
References:
Message-ID: <4aad0d05-bc43-4fda-c4e7-544fc59557f4@shout.net>

On 10/12/16 4:11 PM, Joseph Tam wrote:
>
>> Old server:
>> * Ubuntu 10.04.4 LTS
>> * Dovecot 2.1.13
>> * Maildir++
>> * Local auth via passwd/shadow files
>>
>> New server:
>> * Debian GNU/Linux 8.6
>> * Dovecot 2.2.13
>> * Maildir++
>> * Quotas enabled
>> * LDAP
>>
>> Basically what's happening is that users are seeing large delays when
>> navigating between different IMAP folders. So, for example, user "X" is
>> sitting idle in their INBOX.
>
> Rebuilding caches? Do you get the same delay when going back to the folder
> after the initial delay.
>
> Joseph Tam

No, but once sitting idle again for 10-15 seconds, the delay occurs again regardless of which folder you choose. Am I understanding your question correctly?

It really seems to me like Outlook is prematurely ending IMAP sessions.

I also extended the "Server Timeout" setting in OT2010 to 10 minutes, which doesn't seem to help either. (!)

I was considering enabling the auth_cache feature to see if that helps. I'll let the list know what happens -- planning on doing that today.

From forondarena at gmail.com Thu Oct 13 13:47:13 2016
From: forondarena at gmail.com (Luis Ugalde)
Date: Thu, 13 Oct 2016 15:47:13 +0200
Subject: Too many references: cannot splice
Message-ID:

Hi,

A while ago I sent an email regarding these "*ETOOMANYREFS* Too many references: cannot splice." errors that we've seen since Debian updated the Jessie kernel to 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt20-1+deb8u3 (2016-01-17) x86_64, while older kernels, like 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1+deb8u6 (2015-11-09) x86_64, showed no errors at all.
I was wondering if anyone else is getting these errors, or if you know any workarounds that might prove useful, apart from downgrading the kernel.

I would say that the infrastructure we're running is quite standard, with directors balancing users to NFS-backed dovecot servers.

Best regards,
Luis Ugalde.

From jerry at seibercom.net Thu Oct 13 13:55:31 2016
From: jerry at seibercom.net (Jerry)
Date: Thu, 13 Oct 2016 09:55:31 -0400
Subject: Outlook 2010 woes
In-Reply-To: <4aad0d05-bc43-4fda-c4e7-544fc59557f4@shout.net>
References: <4aad0d05-bc43-4fda-c4e7-544fc59557f4@shout.net>
Message-ID: <20161013095531.00007012@seibercom.net>

On Thu, 13 Oct 2016 08:36:23 -0500, Bryan Holloway stated:

>I also extended the "Server Timeout" setting in OT2010 to 10 minutes,
>which doesn't seem to help either. (!)

Outlook 2010 is a very old version. Why not update to the 2016 version? I am running it without any problems. If you do update, remember to remove the old version completely first.

--
Jerry

From bryan at shout.net Thu Oct 13 14:06:35 2016
From: bryan at shout.net (Bryan Holloway)
Date: Thu, 13 Oct 2016 09:06:35 -0500
Subject: Outlook 2010 woes
In-Reply-To: <20161013095531.00007012@seibercom.net>
References: <4aad0d05-bc43-4fda-c4e7-544fc59557f4@shout.net> <20161013095531.00007012@seibercom.net>
Message-ID:

On 10/13/16 8:55 AM, Jerry wrote:
> On Thu, 13 Oct 2016 08:36:23 -0500, Bryan Holloway stated:
>
>> I also extended the "Server Timeout" setting in OT2010 to 10 minutes,
>> which doesn't seem to help either. (!)
>
> Outlook 2010 is a very old version. Why not update to the 2016 version.
> I am running it without any problems. If you do update, remember to
> remove the old version completely first.
>

Yeah -- totally not disagreeing with that statement ... the problem is that the customer is putting their foot down since everything worked fine with Dovecot 2.1.

But yes, I have mentioned that to them ...
From aki.tuomi at dovecot.fi Thu Oct 13 14:07:33 2016
From: aki.tuomi at dovecot.fi (Aki Tuomi)
Date: Thu, 13 Oct 2016 17:07:33 +0300 (EEST)
Subject: Outlook 2010 woes
In-Reply-To: <20161013095531.00007012@seibercom.net>
References: <4aad0d05-bc43-4fda-c4e7-544fc59557f4@shout.net> <20161013095531.00007012@seibercom.net>
Message-ID: <1454388715.707.1476367654083@appsuite-dev.open-xchange.com>

> On October 13, 2016 at 4:55 PM Jerry wrote:
>
> On Thu, 13 Oct 2016 08:36:23 -0500, Bryan Holloway stated:
>
> >I also extended the "Server Timeout" setting in OT2010 to 10 minutes,
> >which doesn't seem to help either. (!)
>
> Outlook 2010 is a very old version. Why not update to the 2016 version.
> I am running it without any problems. If you do update, remember to
> remove the old version completely first.
>
> --
> Jerry

I do wonder if the real culprit is some firewall that times out the idle connection.

Aki

From bryan at shout.net Thu Oct 13 14:18:03 2016
From: bryan at shout.net (Bryan Holloway)
Date: Thu, 13 Oct 2016 09:18:03 -0500
Subject: Outlook 2010 woes
In-Reply-To:
References: <4aad0d05-bc43-4fda-c4e7-544fc59557f4@shout.net> <20161013095531.00007012@seibercom.net>
Message-ID: <6bf4267c-1f40-151e-77e8-3d71a217bd22@shout.net>

On 10/13/16 9:06 AM, Bryan Holloway wrote:
> On 10/13/16 8:55 AM, Jerry wrote:
>> On Thu, 13 Oct 2016 08:36:23 -0500, Bryan Holloway stated:
>>
>>> I also extended the "Server Timeout" setting in OT2010 to 10 minutes,
>>> which doesn't seem to help either. (!)
>>
>> Outlook 2010 is a very old version. Why not update to the 2016 version.
>> I am running it without any problems. If you do update, remember to
>> remove the old version completely first.
>>
>
> Yeah -- totally not disagreeing with that statement ... the problem is
> that the customer is putting their foot down since everything worked
> fine with Dovecot 2.1.
>
> But yes, I have mentioned that to them ...
I guess I should add that it would be one thing if there were a specific IMAP feature that a newer Dovecot version (2.2) supported and the client didn't, but I haven't been able to pinpoint it.

Obviously the behavior is different than what it was, but it would be a lot easier to convince the customer to upgrade if I could point a finger right at the "feature" in question.

In the meantime, I have to try and figure out what's changed ...

From bryan at shout.net Thu Oct 13 14:53:19 2016
From: bryan at shout.net (Bryan Holloway)
Date: Thu, 13 Oct 2016 09:53:19 -0500
Subject: Outlook 2010 woes
In-Reply-To: <1454388715.707.1476367654083@appsuite-dev.open-xchange.com>
References: <4aad0d05-bc43-4fda-c4e7-544fc59557f4@shout.net> <20161013095531.00007012@seibercom.net> <1454388715.707.1476367654083@appsuite-dev.open-xchange.com>
Message-ID:

On 10/13/16 9:07 AM, Aki Tuomi wrote:
>
>> On October 13, 2016 at 4:55 PM Jerry wrote:
>>
>> On Thu, 13 Oct 2016 08:36:23 -0500, Bryan Holloway stated:
>>
>>> I also extended the "Server Timeout" setting in OT2010 to 10 minutes,
>>> which doesn't seem to help either. (!)
>>
>> Outlook 2010 is a very old version. Why not update to the 2016 version.
>> I am running it without any problems. If you do update, remember to
>> remove the old version completely first.
>>
>> --
>> Jerry
>
> I do wonder if the real culprit is some firewall that timeouts the idle connection.
>
> Aki
>

I considered that, but again everything worked fine until we moved them from 2.1 to 2.2. Their same firewall is in use.

Is there a way to see the IMAP commands coming from the client? I've tried looking at PCAPs, but of course they're encrypted, so I can't see the actual dialog going on between the server and client. I didn't see an obvious way to do this in the docs.
From flatworm at users.sourceforge.net Thu Oct 13 15:23:34 2016
From: flatworm at users.sourceforge.net (Konstantin Khomoutov)
Date: Thu, 13 Oct 2016 18:23:34 +0300
Subject: Outlook 2010 woes
In-Reply-To:
References: <4aad0d05-bc43-4fda-c4e7-544fc59557f4@shout.net> <20161013095531.00007012@seibercom.net> <1454388715.707.1476367654083@appsuite-dev.open-xchange.com>
Message-ID: <20161013182334.f65847ce815588d05557bd94@domain007.com>

On Thu, 13 Oct 2016 09:53:19 -0500
Bryan Holloway wrote:

[...]
> Is there a way to see the IMAP commands coming from the client? I've
> tried looking at PCAPs, but of course they're encrypted so I can't
> see the actual dialog going on between the server and
> client. I didn't see an obvious way to do this in the docs.

If you have access to the SSL/TLS key (IOW, the private part of the cert) the server uses to secure IMAP connections, you can dump the IMAP traffic using the `ssldump` utility (which builds on `tcpdump`).

From bryan at shout.net Thu Oct 13 15:35:14 2016
From: bryan at shout.net (Bryan Holloway)
Date: Thu, 13 Oct 2016 10:35:14 -0500
Subject: Outlook 2010 woes
In-Reply-To: <20161013182334.f65847ce815588d05557bd94@domain007.com>
References: <4aad0d05-bc43-4fda-c4e7-544fc59557f4@shout.net> <20161013095531.00007012@seibercom.net> <1454388715.707.1476367654083@appsuite-dev.open-xchange.com> <20161013182334.f65847ce815588d05557bd94@domain007.com>
Message-ID:

On 10/13/16 10:23 AM, Konstantin Khomoutov wrote:
> On Thu, 13 Oct 2016 09:53:19 -0500
> Bryan Holloway wrote:
>
> [...]
>> Is there a way to see the IMAP commands coming from the client? I've
>> tried looking at PCAPs, but of course they're encrypted so I can't
>> see the actual dialog going on between the server and
>> client. I didn't see an obvious way to do this in the docs.
>
> If you have access to the SSL/TLS key (IOW, the private part of the
> cert) the server uses to secure IMAP connections you can dump the IMAP
> traffic using the `ssldump` utility (which builds on `tcpdump`).

I do, but the client is using a DH key exchange, so I only have the server-side private key.

Tried that using Wireshark's decoder features and ran into this problem. I'm assuming I'd run into the same using ssldump, but I'll give it a shot!

Stupid privacy. :)

From bind at enas.net Thu Oct 13 15:42:18 2016
From: bind at enas.net (Urban Loesch)
Date: Thu, 13 Oct 2016 17:42:18 +0200
Subject: Outlook 2010 woes
In-Reply-To:
References: <4aad0d05-bc43-4fda-c4e7-544fc59557f4@shout.net> <20161013095531.00007012@seibercom.net> <1454388715.707.1476367654083@appsuite-dev.open-xchange.com>
Message-ID: <74903dfc-074f-97c3-b3ff-80659a1d2fa4@enas.net>

Am 13.10.2016 um 16:53 schrieb Bryan Holloway:
> On 10/13/16 9:07 AM, Aki Tuomi wrote:
>>
>>> On October 13, 2016 at 4:55 PM Jerry wrote:
>>>
>>> On Thu, 13 Oct 2016 08:36:23 -0500, Bryan Holloway stated:
>>>
>>>> I also extended the "Server Timeout" setting in OT2010 to 10 minutes,
>>>> which doesn't seem to help either. (!)
>>>
>>> Outlook 2010 is a very old version. Why not update to the 2016 version.
>>> I am running it without any problems. If you do update, remember to
>>> remove the old version completely first.
>>>
>>> --
>>> Jerry
>>
>> I do wonder if the real culprit is some firewall that timeouts the idle connection.
>>
>> Aki
>>
>
> I considered that, but again everything worked fine until we moved them from 2.1 to 2.2. Their same firewall is in use.
>
> Is there a way to see the IMAP commands coming from the client? I've tried looking at PCAPs, but of course they're encrypted so I can't see the actual
> dialog going on between the server and client. I didn't see an obvious way to do this in the docs.
>

There is a "rawlog" feature, which writes down the whole decrypted imap session in files.
...
service imap {
  ...
  executable = imap postlogin
  ...
}

...

service postlogin {
  executable = script-login -d rawlog
  unix_listener postlogin {
  }
}
...

This should write *.in and *.out files to the "$mail_location/dovecot.rawlog/" directory for each imap session. The directory should be writeable by the dovecot user. I tested this some years ago, so I'm not sure if the configuration is still valid.

Regards
Urban

From flatworm at users.sourceforge.net Thu Oct 13 15:52:00 2016
From: flatworm at users.sourceforge.net (Konstantin Khomoutov)
Date: Thu, 13 Oct 2016 18:52:00 +0300
Subject: Outlook 2010 woes
In-Reply-To:
References: <4aad0d05-bc43-4fda-c4e7-544fc59557f4@shout.net> <20161013095531.00007012@seibercom.net> <1454388715.707.1476367654083@appsuite-dev.open-xchange.com> <20161013182334.f65847ce815588d05557bd94@domain007.com>
Message-ID: <20161013185200.5aa3b7a5d485f24b2a036c84@domain007.com>

On Thu, 13 Oct 2016 10:35:14 -0500
Bryan Holloway wrote:

> > [...]
> >> Is there a way to see the IMAP commands coming from the client?
> >> I've tried looking at PCAPs, but of course they're encrypted so I
> >> can't see the actual dialog going on between the server and
> >> client. I didn't see an obvious way to do this in the docs.
> >
> > If you have access to the SSL/TLS key (IOW, the private part of the
> > cert) the server uses to secure IMAP connections you can dump the
> > IMAP traffic using the `ssldump` utility (which builds on
> > `tcpdump`).
>
> I do, but the client is using a DH key exchange so I only have the
> server-side private key.
>
> Tried that using Wireshark's decoder features and ran into this
> problem. I'm assuming I'd run into the same using ssldump, but I'll
> give it a shot!

I think DH is not the culprit: just to be able to actually decode SSL traffic, you must have the server private key when you're decoding the SSL handshake phase -- to be able to recover the session keys, which you then use to decode the actual tunneled data.
From aki.tuomi at dovecot.fi Thu Oct 13 16:01:50 2016
From: aki.tuomi at dovecot.fi (Aki Tuomi)
Date: Thu, 13 Oct 2016 19:01:50 +0300 (EEST)
Subject: Outlook 2010 woes
In-Reply-To: <20161013185200.5aa3b7a5d485f24b2a036c84@domain007.com>
References: <4aad0d05-bc43-4fda-c4e7-544fc59557f4@shout.net> <20161013095531.00007012@seibercom.net> <1454388715.707.1476367654083@appsuite-dev.open-xchange.com> <20161013182334.f65847ce815588d05557bd94@domain007.com> <20161013185200.5aa3b7a5d485f24b2a036c84@domain007.com>
Message-ID: <1040717331.825.1476374511077@appsuite-dev.open-xchange.com>

> On October 13, 2016 at 6:52 PM Konstantin Khomoutov wrote:
>
> On Thu, 13 Oct 2016 10:35:14 -0500
> Bryan Holloway wrote:
>
> > > [...]
> > >> Is there a way to see the IMAP commands coming from the client?
> > >> I've tried looking at PCAPs, but of course they're encrypted so I
> > >> can't see the actual dialog going on between the server and
> > >> client. I didn't see an obvious way to do this in the docs.
> > >
> > > If you have access to the SSL/TLS key (IOW, the private part of the
> > > cert) the server uses to secure IMAP connections you can dump the
> > > IMAP traffic using the `ssldump` utility (which builds on
> > > `tcpdump`).
> >
> > I do, but the client is using a DH key exchange so I only have the
> > server-side private key.
> >
> > Tried that using Wireshark's decoder features and ran into this
> > problem. I'm assuming I'd run into the same using ssldump, but I'll
> > give it a shot!
>
> I think DH is not the culprit: just to be able to actually decode SSL
> traffic, you must have the server private key when you're decoding the
> SSL handshake phase -- to be able to recover the session keys, which
> you then use to decode the actual tunneled data.

You can also enable only non-DH algorithms in the ssl settings if rawlog isn't working for you.
Aki

From bryan at shout.net Thu Oct 13 16:04:46 2016
From: bryan at shout.net (Bryan Holloway)
Date: Thu, 13 Oct 2016 11:04:46 -0500
Subject: Outlook 2010 woes
In-Reply-To: <74903dfc-074f-97c3-b3ff-80659a1d2fa4@enas.net>
References: <4aad0d05-bc43-4fda-c4e7-544fc59557f4@shout.net> <20161013095531.00007012@seibercom.net> <1454388715.707.1476367654083@appsuite-dev.open-xchange.com> <74903dfc-074f-97c3-b3ff-80659a1d2fa4@enas.net>
Message-ID: <9298f895-1f1e-f09a-88da-19fba2fa620b@shout.net>

On 10/13/16 10:42 AM, Urban Loesch wrote:
>
> Am 13.10.2016 um 16:53 schrieb Bryan Holloway:
>> On 10/13/16 9:07 AM, Aki Tuomi wrote:
>>>
>>>> On October 13, 2016 at 4:55 PM Jerry wrote:
>>>>
>>>> On Thu, 13 Oct 2016 08:36:23 -0500, Bryan Holloway stated:
>>>>
>>>>> I also extended the "Server Timeout" setting in OT2010 to 10 minutes,
>>>>> which doesn't seem to help either. (!)
>>>>
>>>> Outlook 2010 is a very old version. Why not update to the 2016 version.
>>>> I am running it without any problems. If you do update, remember to
>>>> remove the old version completely first.
>>>>
>>>> --
>>>> Jerry
>>>
>>> I do wonder if the real culprit is some firewall that timeouts the
>>> idle connection.
>>>
>>> Aki
>>>
>>
>> I considered that, but again everything worked fine until we moved
>> them from 2.1 to 2.2. Their same firewall is in use.
>>
>> Is there a way to see the IMAP commands coming from the client? I've
>> tried looking at PCAPs, but of course they're encrypted so I can't see
>> the actual
>> dialog going on between the server and client. I didn't see an obvious
>> way to do this in the docs.
>>
>
> There is a "rawlog" feature, which writes down the whole decrypted imap
> session in files.
>
> ...
> service imap {
>   ...
>   executable = imap postlogin
>   ...
> }
>
> ...
>
> service postlogin {
>   executable = script-login -d rawlog
>   unix_listener postlogin {
>   }
> }
> ...
>
> This should write *.in and *.out files to the
> "$mail_location/dovecot.rawlog/" directory for each imap session.
> The directory should be writeable by the dovecot user. I tested this
> some years ago, so I'm not sure if the configuration
> is still valid.
>
> Regards
> Urban

Great! I will try this.

From bryan at shout.net Thu Oct 13 16:21:09 2016
From: bryan at shout.net (Bryan Holloway)
Date: Thu, 13 Oct 2016 11:21:09 -0500
Subject: Outlook 2010 woes
In-Reply-To: <1040717331.825.1476374511077@appsuite-dev.open-xchange.com>
References: <4aad0d05-bc43-4fda-c4e7-544fc59557f4@shout.net> <20161013095531.00007012@seibercom.net> <1454388715.707.1476367654083@appsuite-dev.open-xchange.com> <20161013182334.f65847ce815588d05557bd94@domain007.com> <20161013185200.5aa3b7a5d485f24b2a036c84@domain007.com> <1040717331.825.1476374511077@appsuite-dev.open-xchange.com>
Message-ID: <918e60ae-be12-6994-e397-eeb0ae11313a@shout.net>

On 10/13/16 11:01 AM, Aki Tuomi wrote:
>
>> On October 13, 2016 at 6:52 PM Konstantin Khomoutov wrote:
>>
>> On Thu, 13 Oct 2016 10:35:14 -0500
>> Bryan Holloway wrote:
>>
>>>> [...]
>>>>> Is there a way to see the IMAP commands coming from the client?
>>>>> I've tried looking at PCAPs, but of course they're encrypted so I
>>>>> can't see the actual dialog going on between the server and
>>>>> client. I didn't see an obvious way to do this in the docs.
>>>>
>>>> If you have access to the SSL/TLS key (IOW, the private part of the
>>>> cert) the server uses to secure IMAP connections you can dump the
>>>> IMAP traffic using the `ssldump` utility (which builds on
>>>> `tcpdump`).
>>>
>>> I do, but the client is using a DH key exchange so I only have the
>>> server-side private key.
>>>
>>> Tried that using Wireshark's decoder features and ran into this
>>> problem. I'm assuming I'd run into the same using ssldump, but I'll
>>> give it a shot!
>> >> I think DH is not the culprit: just to be able to actually decode SSL >> traffic, you must have the server private key when you're decoding the >> SSL handshake phase -- to be able to recover the session keys, which >> you then use to decode the actual tunneled data. > > You can also enable only non DH algorithms in ssl settings if rawlog isn't working for you. > > Aki > Ah -- interesting tip. I hadn't thought of that. Thank you! I'll report my findings to the list. From jtam.home at gmail.com Thu Oct 13 19:25:51 2016 From: jtam.home at gmail.com (Joseph Tam) Date: Thu, 13 Oct 2016 12:25:51 -0700 (PDT) Subject: Outlook 2010 woes In-Reply-To: <4aad0d05-bc43-4fda-c4e7-544fc59557f4@shout.net> References: <4aad0d05-bc43-4fda-c4e7-544fc59557f4@shout.net> Message-ID: On Thu, 13 Oct 2016, Bryan Holloway wrote: >> Rebuilding caches? Do you get the same delay when going back to the folder >> after the initial delay. > > No, but once sitting idle again for 10-15 seconds, the delay occurs again > regardless of which folder you choose. Another diagnostic is to strace the server process. Joseph Tam From anic297 at mac.com Fri Oct 14 07:19:02 2016 From: anic297 at mac.com (Moi) Date: Fri, 14 Oct 2016 09:19:02 +0200 Subject: First steps in Dovecot; IMAP not working In-Reply-To: <003301d22540$95d40340$c17c09c0$@mac.com> References: <003301d22540$95d40340$c17c09c0$@mac.com> Message-ID: <000501d225eb$3eb04f50$bc10edf0$@mac.com> Hello, I've made some more tests and I still can't receive mails; sending them still works. I don't receive any error message, just the mails that are supposed to be received won't come. In the mail logs, I find only this relevant line: dovecot: imap-login: Aborted login (no auth attempts in 2 secs): user=<> This line (which I shortened to remove IP addresses) seems to indicate there's no user referenced, although I've set the field in Outlook. Is this a problem that looks familiar? I'm sort of clueless without having an error message. 
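The "Aborted login (no auth attempts ...)" line quoted above is the key symptom: a client connects but never tries to authenticate (a wrong port or SSL setting on the client is a common cause). A quick, hedged way to see how often it happens is to grep the mail log; the snippet below inlines sample log lines modeled on the one in the post so it is self-contained, whereas on a real server you would point $log at the actual log file (e.g. /var/log/mail.log, path varies by distribution):

```shell
# Count "Aborted login (no auth attempts ...)" events in a mail log.
# The sample lines below are placeholders; substitute your real log file.
log=$(mktemp)
cat > "$log" <<'EOF'
Oct 14 09:01:02 host dovecot: imap-login: Aborted login (no auth attempts in 2 secs): user=<>, rip=192.0.2.10
Oct 14 09:05:44 host dovecot: imap-login: Login: user=<alice>, method=PLAIN, rip=192.0.2.20
Oct 14 09:07:13 host dovecot: imap-login: Aborted login (no auth attempts in 2 secs): user=<>, rip=192.0.2.10
EOF
aborted=$(grep -c 'Aborted login (no auth attempts' "$log")
echo "$aborted aborted logins"   # prints: 2 aborted logins
```

If the count keeps growing for one client IP, the connection is being opened but no LOGIN/AUTHENTICATE command ever arrives, which points at the client configuration rather than the server.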
From l.henrich at spirit-server.com Fri Oct 14 07:40:29 2016 From: l.henrich at spirit-server.com (Lukas Henrich) Date: Fri, 14 Oct 2016 09:40:29 +0200 Subject: update dovecot-acl in all subfolders in a public folder Message-ID: <58008BED.4090600@spirit-server.com> Hello everyone, right now I have a problem and can't find a proper solution. But first: dovecot-version: 2.2.13 Now to my problem: A client of mine uses a public folder called "groups". In this folder are several subfolders like "Archive", "projects", "sales" and so on. Unfortunately this client (and his employees) created thousands of subfolders within these folders. The folder "groups" is stored in /data/vmail/domain.com/. So, if I type "tree -a -L 1 /data/vmail/domain.com/groups" I get the following output:

/data/vmail/domain.com/groups/
├── .Archive
├── .Archive.subfolder1
├── .Archive.subfolder2
├── .Archive.subfolder2.subfolder3
├── .Archive.subfolder2.subfolder4
(....)
├── .projects
├── .projects.subfolder1
├── .projects.subfolder2
├── .projects.subfolder2.subfolder3
├── .projects.subfolder2.subfolder4
(...)

In this groups-folder there are at the moment more than 3400 folders! Now to the permissions: When the folders "Archive", "projects" and so on were created, every folder got a dovecot-acl file with the permissions for each user, e.g.: user=user1 kxeilprwts user=user2 kxeilprwts This worked fine, as the dovecot-acl got copied from the parent folder when the employees created new subfolders. Now to the problem I'm facing right now: This client got 2 new employees. So how can I edit all these dovecot-acl files in all the subfolders that these 2 new employees should get access to? Thank you in advance!
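Since each maildir folder keeps its rights in a plain dovecot-acl file, one possible approach (a sketch, not an officially documented procedure) is to append the new entries to every dovecot-acl file with find. The demo below builds a throwaway tree so it is safe to run as-is; "user3"/"user4" are placeholder names for the new employees, the rights string is copied from the post, and on the real server the root would be /data/vmail/domain.com/groups (test on a copy first):

```shell
# Demo: append ACL lines for two new users to every dovecot-acl file
# under a folder tree. Built on a throwaway directory; "user3"/"user4"
# and the path are placeholders for the real users and real maildir root.
root=$(mktemp -d)
mkdir -p "$root/.Archive" "$root/.Archive.subfolder1"
printf 'user=user1 kxeilprwts\nuser=user2 kxeilprwts\n' > "$root/.Archive/dovecot-acl"
printf 'user=user1 kxeilprwts\nuser=user2 kxeilprwts\n' > "$root/.Archive.subfolder1/dovecot-acl"

# The actual bulk edit: one append per dovecot-acl file found.
find "$root" -type f -name dovecot-acl | while read -r f; do
    printf 'user=user3 kxeilprwts\nuser=user4 kxeilprwts\n' >> "$f"
done

grep -c '^user=' "$root/.Archive/dovecot-acl"   # prints 4
```

Note that Dovecot may cache ACLs (the vfile backend's cache_secs), so edited files can take a while to be picked up; where available, per-mailbox doveadm acl commands are an alternative worth checking.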
Yours sincerely, Lukas Henrich Furthermore here is the output of dovecot -n: # 2.2.13: /etc/dovecot/dovecot.conf # OS: Linux 4.4.6-1-pve x86_64 Debian 8.4 auth_username_format = %Ln disable_plaintext_auth = no lda_mailbox_autocreate = yes mail_home = /data/vmail/domain.com/%Ln mail_location = maildir:~ mail_plugins = " acl" managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave namespace { hidden = no ignore_on_failure = no inbox = no list = children location = maildir:/data/vmail/domain.com/%%n:INDEXPVT=/data/vmail/domain.com/%n/shared/%%n prefix = shared/%%n/ separator = / subscriptions = yes type = shared } namespace { hidden = no ignore_on_failure = no inbox = no list = yes location = maildir:/data/vmail/domain.com/groups:INDEXPVT=/data/vmail/domain.com/%n/groups prefix = groups/ separator = / subscriptions = yes type = public } namespace inbox { inbox = yes location = mailbox Archiv { special_use = \Archive } mailbox Archive { auto = subscribe special_use = \Archive } mailbox Archives { special_use = \Archive } mailbox "Deleted Messages" { special_use = \Trash } mailbox Drafts { auto = subscribe special_use = \Drafts } mailbox Entwürfe { special_use = \Drafts } mailbox "Gelöschte Elemente" { special_use = \Trash } mailbox "Gelöschte Objekte" { special_use = \Trash } mailbox Gesendet { special_use = \Sent } mailbox "Gesendete Elemente" { special_use = \Sent } mailbox "Gesendete Objekte" { special_use = \Sent } mailbox Junk { auto = subscribe special_use = \Junk } mailbox Papierkorb { special_use = \Trash } mailbox Sent { auto = subscribe special_use = \Sent } mailbox "Sent Messages" { special_use = \Sent } mailbox Spam { special_use = \Junk } mailbox Trash { auto = subscribe special_use = \Trash } prefix = INBOX/ separator = / subscriptions = yes } passdb { args =
/etc/dovecot/dovecot-ldap.conf.ext driver = ldap } passdb { args = scheme=CRYPT username_format=%Ln /etc/dovecot/users driver = passwd-file } plugin { acl = vfile acl_shared_dict = file:/var/lib/dovecot/db/shared-mailboxes.db sieve = ~/dovecot.sieve sieve_dir = ~/sieve } postmaster_address = admin at domain.com protocols = " imap lmtp sieve sieve" service auth { unix_listener /var/spool/postfix/private/auth { mode = 0666 } unix_listener auth-userdb { group = vmail user = vmail } } service imap-login { inet_listener imap { port = 143 } } service lmtp { inet_listener lmtp { address = 127.0.0.1 port = 24 } unix_listener /var/spool/postfix/private/lmtp-dovecot { group = postfix user = postfix } } service managesieve-login { inet_listener sieve { port = 4190 } } ssl_cert = Hello, I am running into this error: Maximum number of connections from user+IP exceeded (mail_max_userip_connections=10) The suggested solution in hundreds of support requests on this mailing list and throughout the internet is to increase the number of maximum userip connections. But this is not curing the problem, it is just postponing it to the moment when the new limit is reached. When I type: doveadm who I can see that some accounts have several pids running: someaccount 10 imap (25396 25391 25386 25381 25374 7822 7817 5559 5543 5531) (xxx.xxx.xxx.xxx) Now when I check these pids with ps aux I find out that the oldest pid (5531) has a lifetime of already over 12 hours. Anyway, I know that the clients that initiated these connections are no longer connected, so there is no valid reason why the connections should still be open. Also, I never had this problem before; it appeared some months ago. Does anybody know how to solve this? Thanks in advance, Benedikt. -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From skdovecot at smail.inf.fh-brs.de Fri Oct 14 12:08:55 2016 From: skdovecot at smail.inf.fh-brs.de (Steffen Kaiser) Date: Fri, 14 Oct 2016 14:08:55 +0200 (CEST) Subject: Dovecot does not close connections In-Reply-To: <72649af7-5007-8b11-d739-97de24d6adbe@two-wings.net> References: <72649af7-5007-8b11-d739-97de24d6adbe@two-wings.net> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Fri, 14 Oct 2016, Benedikt Carda wrote: > I am running into this error: > /Maximum number of connections from user+IP exceeded > (mail_max_userip_connections=10)/ > > The suggested solution in hundreds of support requests on this mailing > list and throughout the internet is to increase the number of maximum > userip connections. But this is not curing the problem, it is just > postponing it to the moment when the new limit is reached. > > When i type: > /doveadm who// > / > > I can see that some accounts have several pids running: > /someaccount 10 imap (25396 25391 25386 25381 25374 7822 7817 > 5559 5543 5531) (xxx.xxx.xxx.xxx)/ > > Now when I check these pids with > /ps aux/ > > I find out that the oldest pid (5531) has a lifetime of already over 12 > hours. Anyway I know that the clients that initiated the connections are > not connected anymore, so there is no way that there is a valid reason > why this connection should still be open. What's the state of the connection ? 
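Benedikt's doveadm who listing can also be screened mechanically for users at or near the limit. The sketch below inlines sample output shaped like the listing in his post (the pid list is copied from it, the second user is a made-up placeholder); on a real server you would pipe the actual `doveadm who` output into the same awk filter:

```shell
# Flag users whose session count is at or above the configured
# mail_max_userip_connections (10 in the post). Sample input mimics
# `doveadm who` output; pipe the real command into awk instead.
limit=10
hits=$(awk -v lim="$limit" '$2+0 >= lim+0 { print $1, $2 }' <<'EOF'
someaccount 10 imap (25396 25391 25386 25381 25374 7822 7817 5559 5543 5531) (192.0.2.1)
otheruser 2 imap (1234 5678) (192.0.2.2)
EOF
)
echo "$hits"   # prints: someaccount 10
```

Run periodically, this shows which accounts are accumulating sessions before they actually hit the limit, which helps correlate the stuck processes with specific clients.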
- -- Steffen Kaiser -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEVAwUBWADK13z1H7kL/d9rAQKw6gf/SbLMdf988i3u5arben3YseszjkOfMLqr bRzuBa3wopFC7h456qORiSUqs14YWK7IvLkC5Ke81pdz3beDPFaYrjxvIjldn0KJ YZzsAp7Nc04OzdcC1JZlZ96zjL85AfiokGVvjhCuqVNV0S1R9dy5wJLyouvdnNym gLO2twykuEajJugcnqSfMj0QWhMFO+quYAOEUNeRpf4fDvPPNo11Y89aDtwCrZUp OMEbDIMa92CnNRARkiqRINJmqt3v9ou3DEETnoyj8qGglO/zU+uAOE9BeoihPF4l GIKMJ4agva1p1Un53RBsnpsXxVCljMcvt++M5g/vs+svYqulRpZeXQ== =O6DY -----END PGP SIGNATURE----- From webert.boss at gmail.com Fri Oct 14 12:16:43 2016 From: webert.boss at gmail.com (Webert de Souza Lima) Date: Fri, 14 Oct 2016 12:16:43 +0000 Subject: Dovecot does not close connections In-Reply-To: References: <72649af7-5007-8b11-d739-97de24d6adbe@two-wings.net> Message-ID: This happens to me too. In my case, the connections are ESTABLISHED. On Fri, Oct 14, 2016 at 9:09 AM Steffen Kaiser < skdovecot at smail.inf.fh-brs.de> wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On Fri, 14 Oct 2016, Benedikt Carda wrote: > > > I am running into this error: > > /Maximum number of connections from user+IP exceeded > > (mail_max_userip_connections=10)/ > > > > The suggested solution in hundreds of support requests on this mailing > > list and throughout the internet is to increase the number of maximum > > userip connections. But this is not curing the problem, it is just > > postponing it to the moment when the new limit is reached. > > > > When i type: > > /doveadm who// > > / > > > > I can see that some accounts have several pids running: > > /someaccount 10 imap (25396 25391 25386 25381 25374 7822 7817 > > 5559 5543 5531) (xxx.xxx.xxx.xxx)/ > > > > Now when I check these pids with > > /ps aux/ > > > > I find out that the oldest pid (5531) has a lifetime of already over 12 > > hours. Anyway I know that the clients that initiated the connections are > > not connected anymore, so there is no way that there is a valid reason > > why this connection should still be open.
> > What's the state of the connection ? > > > - -- > Steffen Kaiser > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1 > > iQEVAwUBWADK13z1H7kL/d9rAQKw6gf/SbLMdf988i3u5arben3YseszjkOfMLqr > bRzuBa3wopFC7h456qORiSUqs14YWK7IvLkC5Ke81pdz3beDPFaYrjxvIjldn0KJ > YZzsAp7Nc04OzdcC1JZlZ96zjL85AfiokGVvjhCuqVNV0S1R9dy5wJLyouvdnNym > gLO2twykuEajJugcnqSfMj0QWhMFO+quYAOEUNeRpf4fDvPPNo11Y89aDtwCrZUp > OMEbDIMa92CnNRARkiqRINJmqt3v9ou3DEETnoyj8qGglO/zU+uAOE9BeoihPF4l > GIKMJ4agva1p1Un53RBsnpsXxVCljMcvt++M5g/vs+svYqulRpZeXQ== > =O6DY > -----END PGP SIGNATURE----- > From carda at two-wings.net Fri Oct 14 12:26:06 2016 From: carda at two-wings.net (Benedikt Carda) Date: Fri, 14 Oct 2016 14:26:06 +0200 Subject: Dovecot does not close connections In-Reply-To: References: <72649af7-5007-8b11-d739-97de24d6adbe@two-wings.net> Message-ID: The state of the processes according to ps is "S", which means "interruptible sleep" as far as I know. What is also interesting is that the processes that seem to have this problem are shown not with the owner name but with the numeric user ID. Normal imap process in ps aux: username 10841 0.1 0.1 9148 3472 ? S 13:18 0:04 dovecot/imap IMAP processes that seem to be quite old already: 1405 11099 0.0 0.1 8072 2644 ? S 13:23 0:00 dovecot/imap But I am not sure if this is really linked to the problem. Benedikt. On 14.10.2016 at 14:08, Steffen Kaiser wrote: > On Fri, 14 Oct 2016, Benedikt Carda wrote: > > I am running into this error: > > /Maximum number of connections from user+IP exceeded > > (mail_max_userip_connections=10)/ > > The suggested solution in hundreds of support requests on this mailing > > list and throughout the internet is to increase the number of maximum > > userip connections. But this is not curing the problem, it is just > > postponing it to the moment when the new limit is reached.
> > > When i type: > > /doveadm who// > > / > > > I can see that some accounts have several pids running: > > /someaccount 10 imap (25396 25391 25386 25381 25374 7822 7817 > > 5559 5543 5531) (xxx.xxx.xxx.xxx)/ > > > Now when I check these pids with > > /ps aux/ > > > I find out that the oldest pid (5531) has a lifetime of already over 12 > > hours. Anyway I know that the clients that initiated the connections are > > not connected anymore, so there is no way that there is a valid reason > > why this connection should still be open. > > What's the state of the connection ? > > > -- Steffen Kaiser -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From anic297 at mac.com Fri Oct 14 12:43:58 2016 From: anic297 at mac.com (Moi) Date: Fri, 14 Oct 2016 14:43:58 +0200 Subject: First steps in Dovecot; IMAP not working In-Reply-To: <003301d22540$95d40340$c17c09c0$@mac.com> References: <003301d22540$95d40340$c17c09c0$@mac.com> Message-ID: <000201d22618$a4022400$ec066c00$@mac.com> Hello, First of all, I'm sorry if you received this mail twice; I haven't received it the first time so I think it was lost. Second attempt. I've made some more tests and I still can't receive mails. Outlook doesn't complain about anything, no error message; the mails that are supposed to be received just won't appear. In the mail logs, I find this line (other lines are irrelevant): dovecot: imap-login: Aborted login (no auth attempts in 2 secs): user=<> [...] Is this a problem that looks familiar? It's a bit clueless without an error message. Any idea welcome. 
From aki.tuomi at dovecot.fi Fri Oct 14 12:58:02 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Fri, 14 Oct 2016 15:58:02 +0300 Subject: First steps in Dovecot; IMAP not working In-Reply-To: <000201d22618$a4022400$ec066c00$@mac.com> References: <003301d22540$95d40340$c17c09c0$@mac.com> <000201d22618$a4022400$ec066c00$@mac.com> Message-ID: On 14.10.2016 15:43, Moi wrote: > Hello, > > First of all, I'm sorry if you received this mail twice; I haven't received > it the first time so I think it was lost. Second attempt. > > I've made some more tests and I still can't receive mails. > Outlook doesn't complain about anything, no error message; the mails that > are supposed to be received just won't appear. > In the mail logs, I find this line (other lines are irrelevant): > dovecot: imap-login: Aborted login (no auth attempts in 2 secs): user=<> > [...] > > Is this a problem that looks familiar? > > It's a bit clueless without an error message. > Any idea welcome. Please post doveconf -n doveadm log errors Aki Tuomi From anic297 at mac.com Fri Oct 14 13:16:03 2016 From: anic297 at mac.com (Moi) Date: Fri, 14 Oct 2016 15:16:03 +0200 Subject: First steps in Dovecot; IMAP not working In-Reply-To: References: <003301d22540$95d40340$c17c09c0$@mac.com> <000201d22618$a4022400$ec066c00$@mac.com> Message-ID: <000501d2261d$1f535df0$5dfa19d0$@mac.com> doveconf -n: # 2.2.13: /etc/dovecot/dovecot.conf # OS: Linux 2.6.32-042stab116.1 x86_64 Debian 8.6 mail_location = mbox:~/mail:INBOX=/var/mail/%u namespace inbox { inbox = yes location = mailbox Drafts { special_use = \Drafts } mailbox Junk { special_use = \Junk } mailbox Sent { special_use = \Sent } mailbox "Sent Messages" { special_use = \Sent } mailbox Trash { special_use = \Trash } prefix = } passdb { driver = pam } protocols = " imap" service auth { unix_listener /var/spool/postfix/private/auth { group = postfix mode = 0666 user = postfix } } service imap-login { inet_listener imaps { port = 993 ssl = yes } } ssl = required 
ssl_cert = References: <003301d22540$95d40340$c17c09c0$@mac.com> <000201d22618$a4022400$ec066c00$@mac.com> <000501d2261d$1f535df0$5dfa19d0$@mac.com> Message-ID: <1938197773.347.1476453284551@appsuite-dev.open-xchange.com> > On October 14, 2016 at 4:16 PM Moi wrote: > In your configuration, dovecot uses whatever user/group returned by PAM. Since the webuser has never logged in, it has no directory under /var/mail. If you want, you can a) override mail_uid and mail_gid in userdb/passdb b) pre-create /var/mail/webuser and chown it to webuser:ftpusers c) you can let ftpusers write to /var/mail. Aki From mick.crane at gmail.com Fri Oct 14 14:06:31 2016 From: mick.crane at gmail.com (mick crane) Date: Fri, 14 Oct 2016 15:06:31 +0100 Subject: First steps in Dovecot; IMAP not working In-Reply-To: <000201d22618$a4022400$ec066c00$@mac.com> References: <003301d22540$95d40340$c17c09c0$@mac.com> <000201d22618$a4022400$ec066c00$@mac.com> Message-ID: <24757359d05f17b2a06b3593a2f7e01a@rapunzel.local> On 2016-10-14 13:43, Moi wrote: > Hello, > > First of all, I'm sorry if you received this mail twice; I haven't > received > it the first time so I think it was lost. Second attempt. > > I've made some more tests and I still can't receive mails. > Outlook doesn't complain about anything, no error message; the mails > that > are supposed to be received just won't appear. > In the mail logs, I find this line (other lines are irrelevant): > dovecot: imap-login: Aborted login (no auth attempts in 2 secs): > user=<> > [...] > > Is this a problem that looks familiar? > > It's a bit clueless without an error message. > Any idea welcome. 
http://wiki.dovecot.org/TestInstallation -- key ID: 0x4BFEBB31 From aki.tuomi at dovecot.fi Fri Oct 14 14:13:06 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Fri, 14 Oct 2016 17:13:06 +0300 (EEST) Subject: Maildir Expunged GUID mismatch for UID In-Reply-To: <0c7b01d21b20$0f5c3410$2e149c30$@lba.ca> References: <0c7b01d21b20$0f5c3410$2e149c30$@lba.ca> Message-ID: <722795050.379.1476454387044@appsuite-dev.open-xchange.com> > On September 30, 2016 at 4:39 PM Steven Xu wrote: > > > > > Dovecot version:2.2.25 > > Since we used to keep our email files on widows server, I made the following > changes in maildir-storage.h > > #define MAILDIR_EXTRA_SEP ',' > > #define MAILDIR_INFO_SEP_S ":" to "+". > > > > Everything seems working except EXPUNG, The dovecot log is flooded by > messages like following: > > imap(xxxxx): Error: Mailbox INBOX: Expunged GUID mismatch for UID 7039 > > > > > > Then I read the source code, and found the following lines in > maildir-sync-index.c > > > > T_BEGIN { > > guid = maildir_uidlist_lookup_ext(ctx->mbox->uidlist, uid, > > MAILDIR_UIDLIST_REC_EXT_GUID); > > if (guid == NULL) > > guid = t_strcut(filename, ':'); > > mail_generate_guid_128_hash(guid, guid_128); > > } T_END; > > > > I have to change the code to guid = t_strcut(filename, '+'); > > > > > > So, should MAILDIR_EXTRA_SEP be used here instead of ':'? > > > > Thanks, > > > > Steven > > Hi! Can you try out the attached patch? Aki -------------- next part -------------- A non-text attachment was scrubbed... Name: maildir-info-sep.patch Type: text/x-diff Size: 1162 bytes Desc: not available URL: From jtam.home at gmail.com Fri Oct 14 19:22:56 2016 From: jtam.home at gmail.com (Joseph Tam) Date: Fri, 14 Oct 2016 12:22:56 -0700 (PDT) Subject: First steps in Dovecot; IMAP not working In-Reply-To: References: Message-ID: Moi wrote: > I've made some more tests and I still can't receive mails; sending them > still works. 
I don't receive any error message, just the mails that are > supposed to be received won't come. > In the mail logs, I find only this relevant line: > dovecot: imap-login: Aborted login (no auth attempts in 2 secs): user=<> Did you post doveconf -n (I didn't catch the head of this thread)? That would be step 0. A good first step is to test whether you have basic authentication working (to separate out if you have a server or client issue). I assume you allow plaintext communication, but if not, substitute telnet with "openssl s_client -connect your-server:993": C: # telnet your-server 143 S: * OK [CAPABILITY ... C: x1 login testuser theirpassword If you get an OK response to this, it may be a client issue (check settings on client). If you get an error or failure, look inward: check logs and config. Joseph Tam From anic297 at mac.com Fri Oct 14 20:26:01 2016 From: anic297 at mac.com (Marnaud) Date: Fri, 14 Oct 2016 22:26:01 +0200 Subject: First steps in Dovecot; IMAP not working In-Reply-To: References: Message-ID: On 14 Oct 2016 at 21:22, Joseph Tam wrote: > Did you post doveconf -n (I didn't catch the head of this thread)? That > would be step 0. Yes (actually, twice). If you want to see it again, no problem, just ask. > I assume you allow plaintext communication, but if not, substitute telnet with > "openssl s_client -connect your-server:993": > > C: # telnet your-server 143 > S: * OK [CAPABILITY ... > C: x1 login testuser theirpassword > > If you get an OK response to this, it may be a client issue (check settings on > client). If you get an error or failure, look inward: check logs and config. For the sake of "security", I chose to not allow plaintext communication (being new to this, I think being strict is a good choice). I've tried with the openssl option and it successfully logged in. Thank you.
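For reference, the "no plaintext" choice Marnaud describes maps onto two Dovecot settings. This is a sketch of how it typically looks in a 2.2 configuration, not a copy of his actual files:

```
disable_plaintext_auth = yes   # refuse LOGIN/PLAIN before TLS is up
ssl = required                 # clients must negotiate TLS first
```

With these in effect, the plaintext telnet test on port 143 will be refused at the authentication step, which is why the openssl s_client test against port 993 is the right way to probe such a setup.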
From anic297 at mac.com Fri Oct 14 20:32:50 2016 From: anic297 at mac.com (Marnaud) Date: Fri, 14 Oct 2016 22:32:50 +0200 Subject: First steps in Dovecot; IMAP not working In-Reply-To: <24757359d05f17b2a06b3593a2f7e01a@rapunzel.local> References: <003301d22540$95d40340$c17c09c0$@mac.com> <000201d22618$a4022400$ec066c00$@mac.com> <24757359d05f17b2a06b3593a2f7e01a@rapunzel.local> Message-ID: <4E8367DF-0418-445B-B654-2F7DD1FE70B8@mac.com> On 14 Oct 2016 at 16:06, mick crane wrote: > On 2016-10-14 13:43, Moi wrote: >> Hello, >> First of all, I'm sorry if you received this mail twice; I haven't received >> it the first time so I think it was lost. Second attempt. >> I've made some more tests and I still can't receive mails. >> Outlook doesn't complain about anything, no error message; the mails that >> are supposed to be received just won't appear. >> In the mail logs, I find this line (other lines are irrelevant): >> dovecot: imap-login: Aborted login (no auth attempts in 2 secs): user=<> >> [...] >> Is this a problem that looks familiar? >> It's a bit clueless without an error message. >> Any idea welcome. > > http://wiki.dovecot.org/TestInstallation Thank you. I'm at the "Check that it finds INBOX" section and am getting: * 0 EXISTS * 0 RECENT (the remaining text being the same as the example). So it looks like the mailbox doesn't exist? From anic297 at mac.com Fri Oct 14 20:46:03 2016 From: anic297 at mac.com (Marnaud) Date: Fri, 14 Oct 2016 22:46:03 +0200 Subject: First steps in Dovecot; IMAP not working In-Reply-To: <1938197773.347.1476453284551@appsuite-dev.open-xchange.com> References: <003301d22540$95d40340$c17c09c0$@mac.com> <000201d22618$a4022400$ec066c00$@mac.com> <000501d2261d$1f535df0$5dfa19d0$@mac.com> <1938197773.347.1476453284551@appsuite-dev.open-xchange.com> Message-ID: <0D83EE2D-1A92-4992-ADF7-E28ABA68C19E@mac.com> On 14 Oct 2016 at 15:54, Aki Tuomi wrote: > In your configuration, dovecot uses whatever user/group returned by PAM.
Excuse my ignorance, but what is PAM? > Since the web user has never logged in, it has no directory under /var/mail. Hmm? So it can't log in because it has no directory, and it has no directory as long as it does not log in, correct? > If you want, you can > > a) override mail_uid and mail_gid in userdb/passdb > b) pre-create /var/mail/webuser and chown it to webuser:ftpusers > c) you can let ftpusers write to /var/mail. Steps b and c are OK for me, I believe. What should I override mail_uid and mail_gid to? From jtam.home at gmail.com Fri Oct 14 21:27:58 2016 From: jtam.home at gmail.com (Joseph Tam) Date: Fri, 14 Oct 2016 14:27:58 -0700 (PDT) Subject: First steps in Dovecot; IMAP not working In-Reply-To: References: Message-ID: (Sorry I read this list in digest form so frequently I'm half a step behind.) > For the sake of "security", I chose to not allow plaintext communication > (being new to this, I think being strict is a good choice). I've tried > with the openssl option and it successfully logged in. Yes, you've included some more log entries, which makes the problem clearer, as it usually does. > Oct 13 05:56:28 imap(webuser): Error: open(/var/mail/webuser) failed: > Permission denied (euid=1001(webuser) egid=1000(ftpusers) missing +w perm: > /var/mail, we're not in group 8(mail), dir owned by 0:8 mode=0775) > ... > I checked, using ls -l /var, and I get this: > drwxrwsr-x 2 root mail 4096 Apr 27 11:27 mail > so the group looks to be correctly set to 'mail', despite what the log says, > right? No, it's quite explicit. User "webuser" has uid/gid = 1001(webuser)/1000(ftpusers). Your mail spool has permission uid/gid = root(0)/mail(8), neither of which allows webuser to write to this mail spool to create its own mail folder. Aki Tuomi replies with several solutions: > In your configuration, dovecot uses whatever user/group returned by PAM. Since the webuser has never logged in, it has no directory under /var/mail.
If you want, you can > > a) override mail_uid and mail_gid in userdb/passdb > b) pre-create /var/mail/webuser and chown it to webuser:ftpusers > c) you can let ftpusers write to /var/mail. Or if you dynamically/frequently onboard mail accounts, and users cannot arbitrarily write into this directory, you can "chmod 1777 /var/mail/" and let dovecot auto-create it (might also want to set "lda_mailbox_autocreate = yes". Joseph Tam From tlx at leuxner.net Sat Oct 15 07:55:19 2016 From: tlx at leuxner.net (Thomas Leuxner) Date: Sat, 15 Oct 2016 09:55:19 +0200 Subject: Latest HG Changes (fac92b5) affect Sieve-Plugin/LMTP Message-ID: <594B7F78-0A76-40CE-AF17-39DA074FB07B@leuxner.net> # 2.2.devel (c73322f): /etc/dovecot/dovecot.conf # Pigeonhole version 0.4.devel (fac92b5) # OS: Linux 3.16.0-4-amd64 x86_64 Debian 8.6 ==> /var/log/dovecot/dovecot.log <== Oct 15 09:50:15 nihlus dovecot: lmtp(11447): Connect from local Oct 15 09:50:15 nihlus dovecot: lmtp(tlx at leuxner.net): Panic: file lda-sieve-plugin.c: line 447 (lda_sieve_execute_scripts): assertion failed: (script != NULL) Oct 15 09:50:15 nihlus dovecot: lmtp(tlx at leuxner.net): Error: Raw backtrace: /usr/lib/dovecot/libdovecot.so.0(+0x938ae) [0x7fd161fc18ae] -> /usr/lib/dovecot/libdovecot.so.0(+0x9399c) [0x7fd161fc199c] -> /usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7fd161f5b6de] -> /usr/lib/dovecot/modules/lib90_sieve_plugin.so(+0x3af8) [0x7fd15fdf8af8] -> /usr/lib/dovecot/libdovecot-lda.so.0(mail_deliver+0x49) [0x7fd16258bb39] -> dovecot/lmtp [DATA tlx at leuxner.net](+0x724e) [0x7fd1629bc24e] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x4c) [0x7fd161fd5e4c] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x10a) [0x7fd161fd730a] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x25) [0x7fd161fd5ed5] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7fd161fd6078] -> /usr/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7fd161f61be3] -> dovecot/lmtp [DATA tlx at 
leuxner.net](main+0x1a2) [0x7fd1629ba382] -> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7fd161ba4b45] -> dovecot/lmtp [DATA tlx at leuxner.net](+0x5430) [0x7fd1629ba430] Oct 15 09:50:15 nihlus dovecot: lmtp(tlx at leuxner.net): Fatal: master: service(lmtp): child 11447 killed with signal 6 (core not dumped) From mick.crane at gmail.com Sat Oct 15 07:56:03 2016 From: mick.crane at gmail.com (mick crane) Date: Sat, 15 Oct 2016 08:56:03 +0100 Subject: First steps in Dovecot; IMAP not working In-Reply-To: <4E8367DF-0418-445B-B654-2F7DD1FE70B8@mac.com> References: <003301d22540$95d40340$c17c09c0$@mac.com> <000201d22618$a4022400$ec066c00$@mac.com> <24757359d05f17b2a06b3593a2f7e01a@rapunzel.local> <4E8367DF-0418-445B-B654-2F7DD1FE70B8@mac.com> Message-ID: On 2016-10-14 21:32, Marnaud wrote: > On 14 Oct 2016 at 16:06, mick crane wrote: > >> On 2016-10-14 13:43, Moi wrote: >>> Hello, >>> First of all, I'm sorry if you received this mail twice; I haven't >>> received >>> it the first time so I think it was lost. Second attempt. >>> I've made some more tests and I still can't receive mails. >>> Outlook doesn't complain about anything, no error message; the mails >>> that >>> are supposed to be received just won't appear. >>> In the mail logs, I find this line (other lines are irrelevant): >>> dovecot: imap-login: Aborted login (no auth attempts in 2 secs): >>> user=<> >>> [...] >>> Is this a problem that looks familiar? >>> It's a bit clueless without an error message. >>> Any idea welcome. >> >> http://wiki.dovecot.org/TestInstallation > > Thank you. I'm at the "Check that it finds INBOX" section and am > getting: > * 0 EXISTS > * 0 RECENT > > (the remaining text being the same as the example). So it looks like > the mailbox doesn't exist? I assume this means there are no emails in the INBOX. Send yourself a mail.
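The "0 EXISTS" / "send yourself a mail" exchange can also be checked at the mbox level: with a configuration like Marnaud's (mail_location = mbox:~/mail:INBOX=/var/mail/%u), a delivered message is just a "From " separator line plus headers appended to the spool file. This sketch reproduces that on a throwaway file rather than the real spool (the addresses are placeholders):

```shell
# Append one minimal message to an mbox file and count messages the way
# mbox readers do: one "From " separator line per message. Uses a temp
# file; the real spool would be /var/mail/<user>.
inbox=$(mktemp)
cat >> "$inbox" <<'EOF'
From test@example.invalid Fri Oct 14 22:00:00 2016
From: test@example.invalid
To: webuser@example.invalid
Subject: test

hello
EOF
msgs=$(grep -c '^From ' "$inbox")
echo "$msgs message(s) in INBOX"   # prints: 1 message(s) in INBOX
```

If the real /var/mail/<user> file stays empty (or absent) after a delivery attempt, the problem is on the delivery side (MTA/permissions), not in Dovecot's IMAP view of it, which matches the permission-denied diagnosis earlier in the thread.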
http://www.binarytides.com/linux-mail-command-examples/ -- key ID: 0x4BFEBB31 From tlx at leuxner.net Sat Oct 15 08:25:21 2016 From: tlx at leuxner.net (Thomas Leuxner) Date: Sat, 15 Oct 2016 10:25:21 +0200 Subject: Latest HG Changes (fac92b5) affect Sieve-Plugin/LMTP In-Reply-To: <594B7F78-0A76-40CE-AF17-39DA074FB07B@leuxner.net> References: <594B7F78-0A76-40CE-AF17-39DA074FB07B@leuxner.net> Message-ID: <349A0000-0CA4-4B07-9BB4-1ECF3E7E7995@leuxner.net> > Oct 15 09:50:15 nihlus dovecot: lmtp(tlx at leuxner.net): Fatal: master: service(lmtp): child 11447 killed with signal 6 (core not dumped) #0 0x00007fdc0b5d7067 in __GI_raise (sig=sig at entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56 resultvar = 0 pid = 22091 selftid = 22091 #1 0x00007fdc0b5d8448 in __GI_abort () at abort.c:89 save_stage = 2 act = {__sigaction_handler = {sa_handler = 0x3ed, sa_sigaction = 0x3ed}, sa_mask = {__val = {520, 140736754611104, 140583104653368, 513, 140583064431555, 140583029016568, 140583104653368, 513, 140583064421862, 140736754611344, 140583064603266, 140583104653368, 140736754611232, 0, 140583064603369, 140583104653368}}, sa_flags = 194885514, sa_restorer = 0x7fffd443f401} sigs = {__val = {32, 0 }} #2 0x00007fdc0b9e08a6 in default_fatal_finish (type=, status=status at entry=0) at failures.c:201 backtrace = 0x7fdc0e03a870 "/usr/lib/dovecot/libdovecot.so.0(+0x938ae) [0x7fdc0b9e08ae] -> /usr/lib/dovecot/libdovecot.so.0(+0x9399c) [0x7fdc0b9e099c] -> /usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7fdc0b97a6de] -> /usr/lib/d"... 
#3 0x00007fdc0b9e099c in i_internal_fatal_handler (ctx=0x7fffd443f470, format=, args=) at failures.c:670 status = 0 #4 0x00007fdc0b97a6de in i_panic (format=format at entry=0x7fdc098187f8 "file %s: line %d (%s): assertion failed: (%s)") at failures.c:275 ctx = {type = LOG_TYPE_PANIC, exit_status = 0, timestamp = 0x0, timestamp_usecs = 0} args = {{gp_offset = 40, fp_offset = 48, overflow_arg_area = 0x7fffd443f570, reg_save_area = 0x7fffd443f4b0}} #5 0x00007fdc09817af8 in lda_sieve_execute_scripts (srctx=0x7fffd443f690) at lda-sieve-plugin.c:447 sbin = 0x0 script = 0x0 cpflags = (unknown: 0) exflags = (unknown: 0) discard_script = i = ret = svinst = 0x7fdc0e133240 action_ehandler = 0x0 more = true exec_ehandler = debug = false user_script = mdctx = 0x7fffd443f8b0 mscript = 0x7fdc0e135d18 last_script = 0x0 compile_error = false error = SIEVE_ERROR_NONE #6 lda_sieve_execute (storage_r=0x7fffd443f888, srctx=0x7fffd443f690) at lda-sieve-plugin.c:821 msgdata = {mail = 0x7fdc0e0c64d0, return_path = 0x7fdc0e06e038 "tlx at leuxner.net", orig_envelope_to = 0x7fdc0e06e2c8 "tlx at leuxner.net", final_envelope_to = 0x7fdc0e06e2c8 "tlx at leuxner.net", auth_user = 0x7fdc0e0cb200 "tlx at leuxner.net", id = 0x7fdc0e0c6e60 ""} estatus = {last_storage = 0x0, message_saved = 0, message_forwarded = 0, tried_default_save = 0, keep_original = 0, store_failed = 0} trace_config = {level = SIEVE_TRLVL_NONE, flags = 0} debug = ret = mdctx = 0x7fffd443f8b0 svinst = scriptenv = {user = 0x7fdc0e0cb100, default_mailbox = 0x7fdc0c3de258 "INBOX", postmaster_address = 0x7fdc0e06cf38 "postmaster at leuxner.net", mailbox_autocreate = false, mailbox_autosubscribe = false, script_context = 0x7fffd443f8b0, smtp_start = 0x7fdc098168c0 , smtp_add_rcpt = 0x7fdc098168b0 , smtp_send = 0x7fdc098168a0 , smtp_finish = 0x7fdc09816880 , duplicate_check = 0x7fdc09816840 , duplicate_mark = 0x7fdc09816860 , duplicate_flush = 0x7fdc09816830 , reject_mail = 0x7fdc09816820 , exec_status = 0x7fffd443f610, trace_log = 
0x0, trace_config = {level = SIEVE_TRLVL_NONE, flags = 0}} trace_log = 0x0 #7 lda_sieve_deliver_mail (mdctx=, storage_r=0x7fffd443f888) at lda-sieve-plugin.c:883 _data_stack_cur_id = 3 srctx = {svinst = 0x7fdc0e133240, mdctx = 0x7fffd443f8b0, home_dir = 0x7fdc0e0cceb8 "/var/vmail/domains/leuxner.net/tlx", scripts = 0x7fdc0e03a750, script_count = 1, user_script = 0x7fdc0e128630, main_script = 0x7fdc0e128630, discard_script = 0x0, msgdata = 0x7fffd443f620, scriptenv = 0x7fffd443f700, user_ehandler = 0x7fdc0e132d40, master_ehandler = 0x7fdc0e11b590, action_ehandler = 0x0, userlog = 0x7fdc0e03a7d8 "/var/vmail/domains/leuxner.net/tlx/.dovecot.sieve.log"} debug = svenv = {hostname = 0x7fdc0e0aa700 "spectre.leuxner.net", domainname = 0x0, base_dir = 0x7fdc0e0cc2b8 "/var/run/dovecot", username = 0x7fdc0e0cb200 "tlx at leuxner.net", home_dir = 0x7fdc0e0cceb8 "/var/vmail/domains/leuxner.net/tlx", temp_dir = 0x7fdc0e0cd170 "/tmp", flags = SIEVE_FLAG_HOME_RELATIVE, location = SIEVE_ENV_LOCATION_MDA, delivery_phase = SIEVE_DELIVERY_PHASE_DURING} i = ret = 0 #8 0x00007fdc0bfaab39 in mail_deliver (ctx=ctx at entry=0x7fffd443f8b0, storage_r=storage_r at entry=0x7fffd443f888) at mail-deliver.c:478 ret = #9 0x00007fdc0c3db24e in client_deliver (session=0x7fdc0e0cacf0, src_mail=0x7fdc0e0c64d0, rcpt=0x7fdc0e06e288, client=0x7fdc0e06caf0) at commands.c:890 set_parser = line = str = mail_error = 235329232 ret = input = ns = delivery_time_started = {tv_sec = 1476519591, tv_usec = 878671} sets = storage = 0x0 mail_set = username = dctx = {pool = 0x7fdc0e0cacd0, set = 0x7fdc0e0aa6a0, session = 0x7fdc0e0cacf0, timeout_secs = 30, session_time_msecs = 15, delivery_time_started = {tv_sec = 1476519591, tv_usec = 878671}, dup_ctx = 0x7fdc0e119bb0, session_id = 0x7fdc0e06e020 "QqxoM6fmAVhLVgAAgUOSbA", src_mail = 0x7fdc0e0c64d0, src_envelope_sender = 0x7fdc0e06e038 "tlx at leuxner.net", dest_user = 0x7fdc0e0cb100, dest_addr = 0x7fdc0e06e2c8 "tlx at leuxner.net", final_dest_addr = 0x7fdc0e06e2c8 
"tlx at leuxner.net", dest_mailbox_name = 0x7fdc0c3de258 "INBOX", dest_mail = 0x0, var_expand_table = 0x0, tempfail_error = 0x0, tried_default_save = false, saved_mail = false, save_dest_mail = false, mailbox_full = false, dsn = false} lda_set = error = #10 client_deliver_next (session=0x7fdc0e0cacf0, src_mail=0x7fdc0e0c64d0, client=0x7fdc0e06caf0) at commands.c:930 count = #11 client_input_data_write_local (input=, client=0x7fdc0e06caf0) at commands.c:1026 src_mail = 0x7fdc0e0c64d0 first_uid = 4294967295 session = 0x7fdc0e0cacf0 old_uid = 0 #12 client_input_data_write (client=0x7fdc0e06caf0) at commands.c:1161 input = 0x7fdc0e0892d0 #13 client_input_data_handle (client=0x7fdc0e06caf0) at commands.c:1256 data = size = 1453 ret = #14 0x00007fdc0b9f4e4c in io_loop_call_io (io=0x7fdc0e06d6d0) at ioloop.c:584 ioloop = 0x7fdc0e040740 t_id = 2 __FUNCTION__ = "io_loop_call_io" #15 0x00007fdc0b9f630a in io_loop_handler_run_internal (ioloop=ioloop at entry=0x7fdc0e040740) at ioloop-epoll.c:222 ctx = 0x7fdc0e046360 io = tv = {tv_sec = 299, tv_usec = 983781} events_count = msecs = ret = 1 i = 0 j = call = __FUNCTION__ = "io_loop_handler_run_internal" #16 0x00007fdc0b9f4ed5 in io_loop_handler_run (ioloop=ioloop at entry=0x7fdc0e040740) at ioloop.c:632 No locals. #17 0x00007fdc0b9f5078 in io_loop_run (ioloop=0x7fdc0e040740) at ioloop.c:608 __FUNCTION__ = "io_loop_run" #18 0x00007fdc0b980be3 in master_service_run (service=0x7fdc0e0405e0, callback=) at master-service.c:641 No locals. 
#19 0x00007fdc0c3d9382 in main (argc=1, argv=0x7fdc0e040390) at main.c:125 set_roots = {0x7fdc0c1af400 , 0x7fdc0c5e0580 , 0x0} service_flags = storage_service_flags = c = From anic297 at mac.com Sat Oct 15 08:48:35 2016 From: anic297 at mac.com (Marnaud) Date: Sat, 15 Oct 2016 10:48:35 +0200 Subject: First steps in Dovecot; IMAP not working In-Reply-To: References: <003301d22540$95d40340$c17c09c0$@mac.com> <000201d22618$a4022400$ec066c00$@mac.com> <24757359d05f17b2a06b3593a2f7e01a@rapunzel.local> <4E8367DF-0418-445B-B654-2F7DD1FE70B8@mac.com> Message-ID: <949CA272-B24C-40E3-96C6-305BDF8F146C@mac.com> Le 15 oct. 2016 à 9:56, mick crane a écrit: > I assume this means there are no emails in the INBOX. > send yourself a mail. This is actually one of the first things I tried when I saw that outgoing mails worked. I also tried sending mails from another address, hoping to receive an error message back, but nothing was received. I now think the INBOX itself doesn't exist, as the logs seem to indicate and as someone else already pointed out. Then, of course, I'd get 0 existing mails since the INBOX doesn't exist. Thanks. From aki.tuomi at dovecot.fi Sat Oct 15 08:51:21 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Sat, 15 Oct 2016 11:51:21 +0300 (EEST) Subject: Latest HG Changes (fac92b5) affect Sieve-Plugin/LMTP In-Reply-To: <594B7F78-0A76-40CE-AF17-39DA074FB07B@leuxner.net> References: <594B7F78-0A76-40CE-AF17-39DA074FB07B@leuxner.net> Message-ID: <1501106469.1179.1476521482539@appsuite-dev.open-xchange.com> > On October 15, 2016 at 10:55 AM Thomas Leuxner wrote: > > > # 2.2.devel (c73322f): /etc/dovecot/dovecot.conf > # Pigeonhole version 0.4.devel (fac92b5) > # OS: Linux 3.16.0-4-amd64 x86_64 Debian 8.6 I hope you mean git, since hg is no longer maintained. 
Aki From tlx at leuxner.net Sat Oct 15 08:54:45 2016 From: tlx at leuxner.net (Thomas Leuxner) Date: Sat, 15 Oct 2016 10:54:45 +0200 Subject: Latest HG Changes (fac92b5) affect Sieve-Plugin/LMTP In-Reply-To: <1501106469.1179.1476521482539@appsuite-dev.open-xchange.com> References: <594B7F78-0A76-40CE-AF17-39DA074FB07B@leuxner.net> <1501106469.1179.1476521482539@appsuite-dev.open-xchange.com> Message-ID: > I hope you mean git since hg is no longer maintained. > > Aki Apologies. Latest and greatest in git. From tlx at leuxner.net Sat Oct 15 09:08:50 2016 From: tlx at leuxner.net (Thomas Leuxner) Date: Sat, 15 Oct 2016 11:08:50 +0200 Subject: Latest git Changes (fac92b5) affect Sieve-Plugin/LMTP In-Reply-To: References: <594B7F78-0A76-40CE-AF17-39DA074FB07B@leuxner.net> <1501106469.1179.1476521482539@appsuite-dev.open-xchange.com> Message-ID: # doveconf -d | grep discard # doveconf -a | grep discard sieve_discard = ~/.dovecot.sieve When set, the crash disappears. From stephan at rename-it.nl Sat Oct 15 09:23:49 2016 From: stephan at rename-it.nl (Stephan Bosch) Date: Sat, 15 Oct 2016 11:23:49 +0200 Subject: Latest HG Changes (fac92b5) affect Sieve-Plugin/LMTP In-Reply-To: <594B7F78-0A76-40CE-AF17-39DA074FB07B@leuxner.net> References: <594B7F78-0A76-40CE-AF17-39DA074FB07B@leuxner.net> Message-ID: <0d2f7ce9-fcc8-6aa2-2c7e-acaec6e86a1d@rename-it.nl> Op 10/15/2016 om 9:55 AM schreef Thomas Leuxner: > # 2.2.devel (c73322f): /etc/dovecot/dovecot.conf > # Pigeonhole version 0.4.devel (fac92b5) > # OS: Linux 3.16.0-4-amd64 x86_64 Debian 8.6 > > ==> /var/log/dovecot/dovecot.log <== > Oct 15 09:50:15 nihlus dovecot: lmtp(11447): Connect from local > Oct 15 09:50:15 nihlus dovecot: lmtp(tlx at leuxner.net): Panic: file lda-sieve-plugin.c: line 447 (lda_sieve_execute_scripts): assertion failed: (script != NULL) > Oct 15 09:50:15 nihlus dovecot: lmtp(tlx at leuxner.net): Error: Raw backtrace: /usr/lib/dovecot/libdovecot.so.0(+0x938ae) [0x7fd161fc18ae] -> 
/usr/lib/dovecot/libdovecot.so.0(+0x9399c) [0x7fd161fc199c] -> /usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7fd161f5b6de] -> /usr/lib/dovecot/modules/lib90_sieve_plugin.so(+0x3af8) [0x7fd15fdf8af8] -> /usr/lib/dovecot/libdovecot-lda.so.0(mail_deliver+0x49) [0x7fd16258bb39] -> dovecot/lmtp [DATA tlx at leuxner.net](+0x724e) [0x7fd1629bc24e] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x4c) [0x7fd161fd5e4c] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x10a) [0x7fd161fd730a] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x25) [0x7fd161fd5ed5] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7fd161fd6078] -> /usr/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7fd161f61be3] -> dovecot/lmtp [DATA tlx at leuxner.net](main+0x1a2) [0x7fd1629ba382] -> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7fd161ba4b45] -> dovecot/lmtp [DATA tlx at leuxner.net](+0x5430) [0x7fd1629ba430] > Oct 15 09:50:15 nihlus dovecot: lmtp(tlx at leuxner.net): Fatal: master: service(lmtp): child 11447 killed with signal 6 (core not dumped) Can you show us your configuration (`dovecot -n`)? Regards, Stephan. 
From stephan at rename-it.nl Sat Oct 15 09:42:00 2016 From: stephan at rename-it.nl (Stephan Bosch) Date: Sat, 15 Oct 2016 11:42:00 +0200 Subject: Latest HG Changes (fac92b5) affect Sieve-Plugin/LMTP In-Reply-To: <0d2f7ce9-fcc8-6aa2-2c7e-acaec6e86a1d@rename-it.nl> References: <594B7F78-0A76-40CE-AF17-39DA074FB07B@leuxner.net> <0d2f7ce9-fcc8-6aa2-2c7e-acaec6e86a1d@rename-it.nl> Message-ID: Op 10/15/2016 om 11:23 AM schreef Stephan Bosch: > Op 10/15/2016 om 9:55 AM schreef Thomas Leuxner: >> # 2.2.devel (c73322f): /etc/dovecot/dovecot.conf >> # Pigeonhole version 0.4.devel (fac92b5) >> # OS: Linux 3.16.0-4-amd64 x86_64 Debian 8.6 >> >> ==> /var/log/dovecot/dovecot.log <== >> Oct 15 09:50:15 nihlus dovecot: lmtp(11447): Connect from local >> Oct 15 09:50:15 nihlus dovecot: lmtp(tlx at leuxner.net): Panic: file lda-sieve-plugin.c: line 447 (lda_sieve_execute_scripts): assertion failed: (script != NULL) >> Oct 15 09:50:15 nihlus dovecot: lmtp(tlx at leuxner.net): Error: Raw backtrace: /usr/lib/dovecot/libdovecot.so.0(+0x938ae) [0x7fd161fc18ae] -> /usr/lib/dovecot/libdovecot.so.0(+0x9399c) [0x7fd161fc199c] -> /usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7fd161f5b6de] -> /usr/lib/dovecot/modules/lib90_sieve_plugin.so(+0x3af8) [0x7fd15fdf8af8] -> /usr/lib/dovecot/libdovecot-lda.so.0(mail_deliver+0x49) [0x7fd16258bb39] -> dovecot/lmtp [DATA tlx at leuxner.net](+0x724e) [0x7fd1629bc24e] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x4c) [0x7fd161fd5e4c] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x10a) [0x7fd161fd730a] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x25) [0x7fd161fd5ed5] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7fd161fd6078] -> /usr/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7fd161f61be3] -> dovecot/lmtp [DATA tlx at leuxner.net](main+0x1a2) [0x7fd1629ba382] -> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7fd161ba4b45] -> dovecot/lmtp [DATA tlx at leuxner.net](+0x5430) 
[0x7fd1629ba430] >> Oct 15 09:50:15 nihlus dovecot: lmtp(tlx at leuxner.net): Fatal: master: service(lmtp): child 11447 killed with signal 6 (core not dumped) > > Can you show us your configuration (`dovecot -n`)? Ah, never mind. Found it already. Fixing... Regards, Stephan. From stephan at rename-it.nl Sat Oct 15 10:37:27 2016 From: stephan at rename-it.nl (Stephan Bosch) Date: Sat, 15 Oct 2016 12:37:27 +0200 Subject: Latest HG Changes (fac92b5) affect Sieve-Plugin/LMTP In-Reply-To: <594B7F78-0A76-40CE-AF17-39DA074FB07B@leuxner.net> References: <594B7F78-0A76-40CE-AF17-39DA074FB07B@leuxner.net> Message-ID: Op 10/15/2016 om 9:55 AM schreef Thomas Leuxner: > # 2.2.devel (c73322f): /etc/dovecot/dovecot.conf > # Pigeonhole version 0.4.devel (fac92b5) > # OS: Linux 3.16.0-4-amd64 x86_64 Debian 8.6 > > ==> /var/log/dovecot/dovecot.log <== > Oct 15 09:50:15 nihlus dovecot: lmtp(11447): Connect from local > Oct 15 09:50:15 nihlus dovecot: lmtp(tlx at leuxner.net): Panic: file lda-sieve-plugin.c: line 447 (lda_sieve_execute_scripts): assertion failed: (script != NULL) > Oct 15 09:50:15 nihlus dovecot: lmtp(tlx at leuxner.net): Error: Raw backtrace: /usr/lib/dovecot/libdovecot.so.0(+0x938ae) [0x7fd161fc18ae] -> /usr/lib/dovecot/libdovecot.so.0(+0x9399c) [0x7fd161fc199c] -> /usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7fd161f5b6de] -> /usr/lib/dovecot/modules/lib90_sieve_plugin.so(+0x3af8) [0x7fd15fdf8af8] -> /usr/lib/dovecot/libdovecot-lda.so.0(mail_deliver+0x49) [0x7fd16258bb39] -> dovecot/lmtp [DATA tlx at leuxner.net](+0x724e) [0x7fd1629bc24e] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x4c) [0x7fd161fd5e4c] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x10a) [0x7fd161fd730a] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x25) [0x7fd161fd5ed5] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7fd161fd6078] -> /usr/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7fd161f61be3] -> dovecot/lmtp [DATA tlx at 
leuxner.net](main+0x1a2) [0x7fd1629ba382] -> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7fd161ba4b45] -> dovecot/lmtp [DATA tlx at leuxner.net](+0x5430) [0x7fd1629ba430] > Oct 15 09:50:15 nihlus dovecot: lmtp(tlx at leuxner.net): Fatal: master: service(lmtp): child 11447 killed with signal 6 (core not dumped) This should fix it: https://github.com/dovecot/pigeonhole/commit/63f9b42f20cf0bd26b981be8a08f01b48e23517f Regards, Stephan. From tlx at leuxner.net Sat Oct 15 11:32:30 2016 From: tlx at leuxner.net (Thomas Leuxner) Date: Sat, 15 Oct 2016 13:32:30 +0200 Subject: Latest HG Changes (fac92b5) affect Sieve-Plugin/LMTP In-Reply-To: References: <594B7F78-0A76-40CE-AF17-39DA074FB07B@leuxner.net> Message-ID: <368423A3-7300-42E8-90BD-FBA96AD82875@leuxner.net> > This should fix it: > > https://github.com/dovecot/pigeonhole/commit/63f9b42f20cf0bd26b981be8a08f01b48e23517f Confirmed fixed. Can you please push to 2.2 so builds pick up there? Thanks Thomas From stephan at rename-it.nl Sat Oct 15 13:07:49 2016 From: stephan at rename-it.nl (Stephan Bosch) Date: Sat, 15 Oct 2016 15:07:49 +0200 Subject: Latest HG Changes (fac92b5) affect Sieve-Plugin/LMTP In-Reply-To: <368423A3-7300-42E8-90BD-FBA96AD82875@leuxner.net> References: <594B7F78-0A76-40CE-AF17-39DA074FB07B@leuxner.net> <368423A3-7300-42E8-90BD-FBA96AD82875@leuxner.net> Message-ID: <88deda0d-86f6-6115-af10-60ac06bb2d22@rename-it.nl> Op 10/15/2016 om 1:32 PM schreef Thomas Leuxner: >> This should fix it: >> >> https://github.com/dovecot/pigeonhole/commit/63f9b42f20cf0bd26b981be8a08f01b48e23517f > Confirmed fixed. Can you please push to 2.2 so builds pick up there? It already is. In fact, that link points to the master-0.4 branch. Regards, Stephan. 
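For anyone hitting this assert before the fix reaches a packaged build, the doveconf output quoted earlier in the thread suggests a workaround: explicitly setting the discard script so it is never NULL. The sketch below is pieced together from that output (setting name and path as shown there), not an official recommendation:

```
plugin {
  # Workaround until the Pigeonhole fix is deployed: explicitly configure
  # the discard script; with this set, the lda-sieve-plugin.c:447 assert
  # reportedly no longer triggers.
  sieve_discard = ~/.dovecot.sieve
}
```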
From mick.crane at gmail.com Sat Oct 15 14:30:00 2016 From: mick.crane at gmail.com (mick crane) Date: Sat, 15 Oct 2016 15:30:00 +0100 Subject: First steps in Dovecot; IMAP not working In-Reply-To: <949CA272-B24C-40E3-96C6-305BDF8F146C@mac.com> References: <003301d22540$95d40340$c17c09c0$@mac.com> <000201d22618$a4022400$ec066c00$@mac.com> <24757359d05f17b2a06b3593a2f7e01a@rapunzel.local> <4E8367DF-0418-445B-B654-2F7DD1FE70B8@mac.com> <949CA272-B24C-40E3-96C6-305BDF8F146C@mac.com> Message-ID: <2a8be41a293fc4b4cf4b21870b15d453@rapunzel.local> On 2016-10-15 09:48, Marnaud wrote: > Le 15 oct. 2016 à 9:56, mick crane a écrit: > >> I assume this means there are no emails in the INBOX. >> send yourself a mail. > > This is actually one of the first things I tried when I saw that outgoing > mails worked. I also tried sending mails from another address, hoping > to receive an error message back, but nothing was received. > I now think the INBOX itself doesn't exist, as the logs seem to indicate > and as someone else already pointed out. Then, of course, I'd get 0 existing > mails since the INBOX doesn't exist. > > Thanks. http://wiki.dovecot.org/FindMailLocation -- key ID: 0x4BFEBB31 From laska at kam.mff.cuni.cz Sat Oct 15 18:59:24 2016 From: laska at kam.mff.cuni.cz (Ladislav Laska) Date: Sat, 15 Oct 2016 20:59:24 +0200 Subject: Pigeonhole/sieve possibly corrupting mails Message-ID: <20161015185924.gt7i5jykuqu55pfc@wallaby> Hi! I'm here again with a problem. I'm using dovecot as an IMAP server and LDA, filtering mail via sieve. However, a few times a day I get the following error on the server and my client (mutt) gets disconnected. 
Oct 15 20:20:29 ibex dovecot: imap(krakonos): Error: Corrupted index cache file /home/krakonos/.mbox/.imap/INBOX/dovecot.index.cache: Broken physical size for mail UID 149418 in mailbox INBOX: read(/home/krakonos/.mbox/inbox) failed: Cached message size smaller than expected (3793 < 8065, box=INBOX, UID=149418, cached Message-Id=<88deda0d-86f6-6115-af10-60ac06bb2d22 at rename-it.nl>) Oct 15 20:20:29 ibex dovecot: imap(krakonos): Error: read(/home/krakonos/.mbox/inbox) failed: Cached message size smaller than expected (3793 < 8065, box=INBOX, UID=149418, cached Message-Id=<88deda0d-86f6-6115-af10-60ac06bb2d22 at rename-it.nl>) (FETCH BODY[] for mailbox INBOX UID 149418) Oct 15 20:20:29 ibex dovecot: imap(krakonos): FETCH read() failed in=110326 out=5115197 This is on a new message (attached), and this error happens on some messages when they are first opened. Once I reconnect, the message always opens fine, and no old message ever causes a problem. I also noticed this error, which is possibly connected: Oct 15 20:15:12 ibex dovecot: lda(krakonos): Error: Next message unexpectedly corrupted in mbox file /home/krakonos/.mbox/inbox at 546862809 The filesystem is ext4, and there are no errors in syslog or problems with any other services. I also don't access the mbox locally, and only dovecot manipulates the mbox (via imap and mailbox_command = /usr/libexec/dovecot/deliver). The postfix version is 2.2.25. I'm attaching dovecot -n and the offending message (after it's been corrected). I'd rather not publish my sieve file, but will send it privately. The offending message also contains another message I received at approximately the same time. Any hints on what could be wrong? -- S pozdravem Ladislav "Krakonoš" Láska http://www.krakonos.org/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: msg-error.mbox Type: application/mbox Size: 10550 bytes Desc: not available URL: -------------- next part -------------- # 2.2.25 (7be1766): /etc/dovecot/dovecot.conf # Pigeonhole version 0.4.15 (97b3da0) # OS: Linux 4.0.4-gentoo x86_64 Gentoo Base System release 2.2 auth_username_format = %n hostname = ibex.krakonos.org login_greeting = Dovecot at krakonos.org ready. mail_debug = yes mail_location = mbox:~/.mbox namespace inbox { inbox = yes location = mailbox Drafts { special_use = \Drafts } mailbox Junk { special_use = \Junk } mailbox Sent { special_use = \Sent } mailbox "Sent Messages" { special_use = \Sent } mailbox Trash { special_use = \Trash } prefix = } passdb { args = * driver = pam } passdb { args = scheme=CRYPT username_format=%u /etc/dovecot/users driver = passwd-file } plugin { sieve = file:~/sieve;active=~/.dovecot.sieve sieve_execute_bin_dir = /usr/lib/dovecot/sieve-execute sieve_execute_socket_dir = sieve-execute sieve_extensions = +vnd.dovecot.filter +editheader sieve_filter_bin_dir = /usr/lib/dovecot/sieve-filter sieve_filter_socket_dir = sieve-filter sieve_pipe_bin_dir = /usr/lib/dovecot/sieve-pipe sieve_pipe_socket_dir = sieve-pipe sieve_plugins = sieve_extprograms } postmaster_address = postmaster at krakonos.org protocols = imap service auth { unix_listener /var/spool/postfix/private/auth { mode = 0666 } } ssl_cert = References: <612182c9-326d-2983-329b-248cfa6d7804@jaury.eu> Message-ID: <8e81cd26-5d0a-7260-864d-507bc28c2d95@jaury.eu> I dived a little bit further into the rabbit hole, up to the point where debugging has become impractical but I still haven't found the root cause for sure. I read most of the code for "p_strdup" based on datastack memory pools (which are used for dictionary lookups both with doveadm and by extdata) and it seems ok. Still, after "t_malloc_real" is called in "t_malloc0", the allocated buffer has the same address as the source string. 
The only sensible explanation I can come up with is that during unescaping, strings are not allocated properly, leading to the memory pool reusing the string address and zeroing it in the process before the string copy operation. I will follow on this path tomorrow, any lead is more than welcome. kaiyou. On 10/16/2016 11:16 PM, Pierre Jaury wrote: > Hello, > > I am using a dict proxy for my sieve extdata plugin to access some > fields from an SQLite database (autoreply text and other > database-configured items). > > All tests are performed against version 2.2.25. > > $ dovecot --version > 2.2.25 (7be1766) > > My configuration looks like: > > dict { > sieve = sqlite:/etc/dovecot/pigeonhole-sieve.dict > } > > [...] > sieve_extdata_dict_uri = proxy::sieve > > I am able to read pretty much any attribute without any issue, except > when the value contains a special character like "\r" or "\n". By using > the doveadm dict client, I narrowed it down to the dictionary management > part (either server, protocol or client). > > I was suspecting escaping functions from "lib/strescape.c" (mostly > str_tabescape and its counterpart, used by "lib-dict/client.c"), so I > monitored socket communications. It seems that escaping is done properly > on the server and the socket is not an issue either. > > The following strace dump results from running "doveadm dict get" > against the dict socket: > > connect(8, {sa_family=AF_UNIX, sun_path="..."}, 110) = 0 > fstat(8, {st_mode=S_IFSOCK|0777, st_size=0, ...}) = 0 > [...] > write(8, "H2\t0\t0\tadmin at domain.tld\tsieve\n", 30) = 30 > [...] > read(8, "Otest\1r\1ntest\n", 8192) = 14 > > Indeed "\1r" and "\1n" are the escape sequences used by > "lib/strescape.c". I went deeped and debugged the call to "dict_lookup" > performed by doveadm. Indeed the client gets the proper string from the > socket and to my surprise, it is properly unescaped. 
> > Then, in "client_dict_lookup" ("lib-dict/dict-client.c"), the call to > "p_strdup" returns an empty string (null byte set at the target address). > > Before the call to the dict "->lookup" attribute (client_dict_lookup): > > RAX: 0x7ffff73a37c0 (push r14) > RBX: 0x6831b8 ("priv/reply_body") > RCX: 0x7fffffffe240 --> 0x682a60 --> 0x6831b8 ("priv/reply_body") > RDX: 0x6831b8 ("priv/reply_body") > RSI: 0x683288 --> 0x7ffff7653120 --> 0x7ffff73ea620 ([...]) > RDI: 0x690ad0 --> 0x7ffff7400713 --> 0x75250079786f7270 ('proxy') > > 0x7ffff73a1f10 : mov rcx,r11 (value_r) > 0x7ffff73a1f13 : mov rdx,r8 (key) > 0x7ffff73a1f16 : mov rsi,r10 (pool) > 0x7ffff73a1f19 : mov rdi,r9 (dict) > 0x7ffff73a1f1c : add rsp,0x8 > => 0x7ffff73a1f20 : jmp rax > > Before the call to p_strdup in "client_dict_lookup": > > RSI: 0x6832d8 ("test\r\ntest") (lookup.result.value) > RDI: 0x683288 --> 0x7ffff7653120 --> [...] (pool) > RAX: 0x0 (result) > > 0x7ffff73a384f: nop > 0x7ffff73a3850: mov rsi,QWORD PTR [rsp+0x8] > 0x7ffff73a3855: mov rdi,r14 > => 0x7ffff73a3858: call 0x7ffff736d3c0 > 0x7ffff73a385d: mov QWORD PTR [r13+0x0],rax > 0x7ffff73a3861: mov rsi,QWORD PTR [rsp+0x18] > 0x7ffff73a3866: xor rsi,QWORD PTR fs:0x28 > 0x7ffff73a386f: mov eax,ebx > > After the call: > > 0x7ffff73a3850: mov rsi,QWORD PTR [rsp+0x8] > 0x7ffff73a3855: mov rdi,r14 > 0x7ffff73a3858: call 0x7ffff736d3c0 > => 0x7ffff73a385d: mov QWORD PTR [r13+0x0],rax > 0x7ffff73a3861: mov rsi,QWORD PTR [rsp+0x18] > 0x7ffff73a3866: xor rsi,QWORD PTR fs:0x28 > 0x7ffff73a386f: mov eax,ebx > 0x7ffff73a3871: jne 0x7ffff73a38da > > RSI: 0x0 > RDI: 0x6832d8 --> 0x0 > RAX: 0x6832d8 --> 0x0 (result) > > It is worth noting that I can reproduce the exact same execution flow > with a non-multiline result string (lookup.result.value) that is > properly copied by "p_strdup" and returned in RAX, then displayed by > doveadm. 
> > I am not familiar with the pooling mechanism hidden behind the call to > p_strdup and not quite sure why this behaviour is emerging. Maybe I am > even miles away from an understanding of the issue here, but it sounds > to me like something is wrong in the way "p_strdup" performs the copy. > > Hope this helps, > kaiyou. > > > From pierre at jaury.eu Sun Oct 16 21:16:25 2016 From: pierre at jaury.eu (Pierre Jaury) Date: Sun, 16 Oct 2016 23:16:25 +0200 Subject: Dict proxy client returning empty string instead of multiline string Message-ID: <612182c9-326d-2983-329b-248cfa6d7804@jaury.eu> Hello, I am using a dict proxy for my sieve extdata plugin to access some fields from an SQLite database (autoreply text and other database-configured items). All tests are performed against version 2.2.25. $ dovecot --version 2.2.25 (7be1766) My configuration looks like: dict { sieve = sqlite:/etc/dovecot/pigeonhole-sieve.dict } [...] sieve_extdata_dict_uri = proxy::sieve I am able to read pretty much any attribute without any issue, except when the value contains a special character like "\r" or "\n". By using the doveadm dict client, I narrowed it down to the dictionary management part (either server, protocol or client). I was suspecting escaping functions from "lib/strescape.c" (mostly str_tabescape and its counterpart, used by "lib-dict/client.c"), so I monitored socket communications. It seems that escaping is done properly on the server and the socket is not an issue either. The following strace dump results from running "doveadm dict get" against the dict socket: connect(8, {sa_family=AF_UNIX, sun_path="..."}, 110) = 0 fstat(8, {st_mode=S_IFSOCK|0777, st_size=0, ...}) = 0 [...] write(8, "H2\t0\t0\tadmin at domain.tld\tsieve\n", 30) = 30 [...] read(8, "Otest\1r\1ntest\n", 8192) = 14 Indeed "\1r" and "\1n" are the escape sequences used by "lib/strescape.c". I went deeper and debugged the call to "dict_lookup" performed by doveadm. 
Indeed the client gets the proper string from the socket and to my surprise, it is properly unescaped. Then, in "client_dict_lookup" ("lib-dict/dict-client.c"), the call to "p_strdup" returns an empty string (null byte set at the target address). Before the call to the dict "->lookup" attribute (client_dict_lookup): RAX: 0x7ffff73a37c0 (push r14) RBX: 0x6831b8 ("priv/reply_body") RCX: 0x7fffffffe240 --> 0x682a60 --> 0x6831b8 ("priv/reply_body") RDX: 0x6831b8 ("priv/reply_body") RSI: 0x683288 --> 0x7ffff7653120 --> 0x7ffff73ea620 ([...]) RDI: 0x690ad0 --> 0x7ffff7400713 --> 0x75250079786f7270 ('proxy') 0x7ffff73a1f10 : mov rcx,r11 (value_r) 0x7ffff73a1f13 : mov rdx,r8 (key) 0x7ffff73a1f16 : mov rsi,r10 (pool) 0x7ffff73a1f19 : mov rdi,r9 (dict) 0x7ffff73a1f1c : add rsp,0x8 => 0x7ffff73a1f20 : jmp rax Before the call to p_strdup in "client_dict_lookup": RSI: 0x6832d8 ("test\r\ntest") (lookup.result.value) RDI: 0x683288 --> 0x7ffff7653120 --> [...] (pool) RAX: 0x0 (result) 0x7ffff73a384f: nop 0x7ffff73a3850: mov rsi,QWORD PTR [rsp+0x8] 0x7ffff73a3855: mov rdi,r14 => 0x7ffff73a3858: call 0x7ffff736d3c0 0x7ffff73a385d: mov QWORD PTR [r13+0x0],rax 0x7ffff73a3861: mov rsi,QWORD PTR [rsp+0x18] 0x7ffff73a3866: xor rsi,QWORD PTR fs:0x28 0x7ffff73a386f: mov eax,ebx After the call: 0x7ffff73a3850: mov rsi,QWORD PTR [rsp+0x8] 0x7ffff73a3855: mov rdi,r14 0x7ffff73a3858: call 0x7ffff736d3c0 => 0x7ffff73a385d: mov QWORD PTR [r13+0x0],rax 0x7ffff73a3861: mov rsi,QWORD PTR [rsp+0x18] 0x7ffff73a3866: xor rsi,QWORD PTR fs:0x28 0x7ffff73a386f: mov eax,ebx 0x7ffff73a3871: jne 0x7ffff73a38da RSI: 0x0 RDI: 0x6832d8 --> 0x0 RAX: 0x6832d8 --> 0x0 (result) It is worth noting that I can reproduce the exact same execution flow with a non-multiline result string (lookup.result.value) that is properly copied by "p_strdup" and returned in RAX, then displayed by doveadm. 
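The "\1r"/"\1n" sequences seen in the strace dump can be reproduced with a short, self-contained sketch of the escaping scheme the dict protocol appears to use. This is an illustrative Python reimplementation, not Dovecot's actual str_tabescape code; the handling of "\t" and of a literal "\1" byte is an assumption based on the protocol being tab- and newline-delimited:

```python
# Illustrative sketch of "\1"-prefixed escaping as seen on the dict socket:
# "\r" -> "\x01r", "\n" -> "\x01n", and (assumed) "\t" -> "\x01t",
# "\x01" -> "\x011", so that tabs and newlines can stay protocol delimiters.

_ESCAPES = {"\x01": "\x011", "\t": "\x01t", "\n": "\x01n", "\r": "\x01r"}
_UNESCAPES = {"1": "\x01", "t": "\t", "n": "\n", "r": "\r"}

def tabescape(s: str) -> str:
    # Replace each special character with its two-byte escape sequence.
    return "".join(_ESCAPES.get(c, c) for c in s)

def tabunescape(s: str) -> str:
    # Decode "\x01"-prefixed pairs back to the original characters.
    out = []
    i = 0
    while i < len(s):
        if s[i] == "\x01" and i + 1 < len(s):
            out.append(_UNESCAPES.get(s[i + 1], s[i + 1]))
            i += 2
        else:
            out.append(s[i])
            i += 1
    return "".join(out)
```

Running tabescape("test\r\ntest") yields the "test\1r\1ntest" payload seen on the socket, which is consistent with the observation above that the wire format and the unescaping are fine, leaving the later allocation/copy step as the suspect.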
I am not familiar with the pooling mechanism hidden behind the call to p_strdup and not quite sure why this behaviour is emerging. Maybe I am even miles away from an understanding of the issue here, but it sounds to me like something is wrong in the way "p_strdup" performs the copy. Hope this helps, kaiyou. From aki.tuomi at dovecot.fi Mon Oct 17 05:51:52 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Mon, 17 Oct 2016 08:51:52 +0300 Subject: Dict proxy client returning empty string instead of multiline string In-Reply-To: <8e81cd26-5d0a-7260-864d-507bc28c2d95@jaury.eu> References: <612182c9-326d-2983-329b-248cfa6d7804@jaury.eu> <8e81cd26-5d0a-7260-864d-507bc28c2d95@jaury.eu> Message-ID: Hi! This does sound like a bug; we'll have a look. Aki On 17.10.2016 01:26, Pierre Jaury wrote: > I dived a little bit further into the rabbit hole, up to the point where > debugging has become unpracticle but I still haven't found the root > cause for sure. > > I read most of the code for "p_strdup" based on datastack memory pools > (which are used for dictionary lookups both with doveadm and by extdata) > and it seems ok. Still, after "t_malloc_real" is called in "t_malloc0", > the allocated buffer has the same address as the source string. > > The only sensible explanation I can come up with is that during > unescaping, strings are not allocated properly, leading to the memory > pool reusing the string address and zeroing it in the process before the > string copy operation. > > I will follow on this path tomorrow, any lead is more than welcome. > > kaiyou. > > On 10/16/2016 11:16 PM, Pierre Jaury wrote: >> Hello, >> >> I am using a dict proxy for my sieve extdata plugin to access some >> fields from an SQLite database (autoreply text and other >> database-configured items). >> >> All tests are performed against version 2.2.25. 
>> >> $ dovecot --version >> 2.2.25 (7be1766) >> >> My configuration looks like: >> >> dict { >> sieve = sqlite:/etc/dovecot/pigeonhole-sieve.dict >> } >> >> [...] >> sieve_extdata_dict_uri = proxy::sieve >> >> I am able to read pretty much any attribute without any issue, except >> when the value contains a special character like "\r" or "\n". By using >> the doveadm dict client, I narrowed it down to the dictionary management >> part (either server, protocol or client). >> >> I was suspecting escaping functions from "lib/strescape.c" (mostly >> str_tabescape and its counterpart, used by "lib-dict/client.c"), so I >> monitored socket communications. It seems that escaping is done properly >> on the server and the socket is not an issue either. >> >> The following strace dump results from running "doveadm dict get" >> against the dict socket: >> >> connect(8, {sa_family=AF_UNIX, sun_path="..."}, 110) = 0 >> fstat(8, {st_mode=S_IFSOCK|0777, st_size=0, ...}) = 0 >> [...] >> write(8, "H2\t0\t0\tadmin at domain.tld\tsieve\n", 30) = 30 >> [...] >> read(8, "Otest\1r\1ntest\n", 8192) = 14 >> >> Indeed "\1r" and "\1n" are the escape sequences used by >> "lib/strescape.c". I went deeped and debugged the call to "dict_lookup" >> performed by doveadm. Indeed the client gets the proper string from the >> socket and to my surprise, it is properly unescaped. >> >> Then, in "client_dict_lookup" ("lib-dict/dict-client.c"), the call to >> "p_strdup" returns an empty string (null byte set at the target address). 
>> >> Before the call to the dict "->lookup" attribute (client_dict_lookup): >> >> RAX: 0x7ffff73a37c0 (push r14) >> RBX: 0x6831b8 ("priv/reply_body") >> RCX: 0x7fffffffe240 --> 0x682a60 --> 0x6831b8 ("priv/reply_body") >> RDX: 0x6831b8 ("priv/reply_body") >> RSI: 0x683288 --> 0x7ffff7653120 --> 0x7ffff73ea620 ([...]) >> RDI: 0x690ad0 --> 0x7ffff7400713 --> 0x75250079786f7270 ('proxy') >> >> 0x7ffff73a1f10 : mov rcx,r11 (value_r) >> 0x7ffff73a1f13 : mov rdx,r8 (key) >> 0x7ffff73a1f16 : mov rsi,r10 (pool) >> 0x7ffff73a1f19 : mov rdi,r9 (dict) >> 0x7ffff73a1f1c : add rsp,0x8 >> => 0x7ffff73a1f20 : jmp rax >> >> Before the call to p_strdup in "client_dict_lookup": >> >> RSI: 0x6832d8 ("test\r\ntest") (lookup.result.value) >> RDI: 0x683288 --> 0x7ffff7653120 --> [...] (pool) >> RAX: 0x0 (result) >> >> 0x7ffff73a384f: nop >> 0x7ffff73a3850: mov rsi,QWORD PTR [rsp+0x8] >> 0x7ffff73a3855: mov rdi,r14 >> => 0x7ffff73a3858: call 0x7ffff736d3c0 >> 0x7ffff73a385d: mov QWORD PTR [r13+0x0],rax >> 0x7ffff73a3861: mov rsi,QWORD PTR [rsp+0x18] >> 0x7ffff73a3866: xor rsi,QWORD PTR fs:0x28 >> 0x7ffff73a386f: mov eax,ebx >> >> After the call: >> >> 0x7ffff73a3850: mov rsi,QWORD PTR [rsp+0x8] >> 0x7ffff73a3855: mov rdi,r14 >> 0x7ffff73a3858: call 0x7ffff736d3c0 >> => 0x7ffff73a385d: mov QWORD PTR [r13+0x0],rax >> 0x7ffff73a3861: mov rsi,QWORD PTR [rsp+0x18] >> 0x7ffff73a3866: xor rsi,QWORD PTR fs:0x28 >> 0x7ffff73a386f: mov eax,ebx >> 0x7ffff73a3871: jne 0x7ffff73a38da >> >> RSI: 0x0 >> RDI: 0x6832d8 --> 0x0 >> RAX: 0x6832d8 --> 0x0 (result) >> >> It is worth noting that I can reproduce the exact same execution flow >> with a non-multiline result string (lookup.result.value) that is >> properly copied by "p_strdup" and returned in RAX, then displayed by >> doveadm. >> >> I am not familiar with the pooling mechanism hidden behind the call to >> p_strdump and not quite sure why this behaviour is emerging. 
Maybe I am >> even miles away from an understanding of the issue here, but it sounds >> to me like something is wrong in the way "p_strdup" performs the copy. >> >> Hope this helps, >> kaiyou. >> >> >> From arekm at maven.pl Mon Oct 17 06:41:38 2016 From: arekm at maven.pl (Arkadiusz =?utf-8?q?Mi=C5=9Bkiewicz?=) Date: Mon, 17 Oct 2016 08:41:38 +0200 Subject: logging TLS SNI hostname In-Reply-To: <201605300829.17351.arekm@maven.pl> References: <201605300829.17351.arekm@maven.pl> Message-ID: <201610170841.38721.arekm@maven.pl> On Monday 30 of May 2016, Arkadiusz Miśkiewicz wrote: > Is there a way to log the SNI hostname used in a TLS session? The info is there in > SSL_CTX_set_tlsext_servername_callback; dovecot copies it to > ssl_io->host. > > Unfortunately I don't see it expanded to any variables ( > http://wiki.dovecot.org/Variables ). Please consider this to be a feature > request. > > The goal is to be able to see which hostname the client used, like: > > May 30 08:21:19 xxx dovecot: pop3-login: Login: user=, method=PLAIN, > rip=1.1.1.1, lip=2.2.2.2, mpid=17135, TLS, SNI=pop3.somehost.org, > session= Dear Dovecot team, would it be possible to add such a variable ^^^^^ ? That would be a neat feature, because the server operator would know which hostname the client uses to connect to the server (which is really useful when many hostnames point to a single IP). Thanks, -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org ) From anic297 at mac.com Mon Oct 17 07:27:27 2016 From: anic297 at mac.com (Marnaud) Date: Mon, 17 Oct 2016 07:27:27 +0000 (GMT) Subject: First steps in Dovecot; IMAP not working In-Reply-To: <2a8be41a293fc4b4cf4b21870b15d453@rapunzel.local> Message-ID: <91b68633-6564-4cb6-b9c2-cb5bd9cd7ad5@me.com> On 15 October 2016 at 07:35, mick crane wrote: http://wiki.dovecot.org/FindMailLocation OK, there are some non-standard facts about my user. When I do: eval echo ~webuser I'm getting: /var/www/html/ This is because webuser is for an FTP account in that directory.
I'm now trying with a "regular" user. Thank you for the link. From anic297 at mac.com Mon Oct 17 11:42:53 2016 From: anic297 at mac.com (Marnaud) Date: Mon, 17 Oct 2016 11:42:53 +0000 (GMT) Subject: First steps in Dovecot; IMAP not working In-Reply-To: Message-ID: <44607cf7-35de-4c5a-b315-1eb17ae13c68@me.com> On 14 October 2016 at 14:28, Joseph Tam wrote: (Sorry, I read this list in digest form, so I'm frequently half a step behind.) No problem. No, it's quite explicit. User "webuser" has uid/gid = 1001(webuser)/1000(ftpusers). Your mail spool has permission uid/gid = root(0)/mail(8), neither of which allows webuser to write to this mail spool to create its own mail folder. You're right (I don't have enough Unix habits, it seems...). I couldn't change this user (it must be in the ftpusers group for other purposes), so I tried adding another user for testing mail. "mailtest", the new user, is in group mail(8). In addition, I've added write permission for "others" to /var/mail. Now, I'm trying to send a message to "mailtest" from another, working, e-mail account and nothing happens. This time, "doveadm log errors" is empty. In short, I don't get any error, but no mail either. Aki Tuomi replies with several solutions: In your configuration, dovecot uses whatever user/group is returned by PAM. Since the webuser has never logged in, it has no directory under /var/mail. If you want, you can a) override mail_uid and mail_gid in userdb/passdb b) pre-create /var/mail/webuser and chown it to webuser:ftpusers c) let ftpusers write to /var/mail. OK, I thought I had to do all of them (and didn't understand step a)). So I've done step c) by allowing everyone write access. Or if you dynamically/frequently onboard mail accounts, and users cannot arbitrarily write into this directory, you can "chmod 1777 /var/mail/" and let dovecot auto-create it (you might also want to set "lda_mailbox_autocreate = yes"). I've done it right now; same problem.
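For anyone finding this thread later: option (a) is a userdb setting, not a PAM one. A hedged sketch of what it could look like with a passwd-file userdb — the path, password, uid/gid and home values below are invented for illustration, not taken from this setup:

```
# /etc/dovecot/users -- passwd-file format: user:password:uid:gid:(gecos):home
mailtest:{PLAIN}secret:1002:8::/var/mail/home/mailtest

# dovecot.conf
userdb {
  driver = passwd-file
  args = username_format=%u /etc/dovecot/users
  # or force fixed values for every user, regardless of the file:
  # override_fields = uid=mailtest gid=mail
}
```

With something like this, Dovecot accesses the mail store with the uid/gid from the userdb instead of whatever the PAM/NSS lookup returns.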
Since "doveadm log errors" returns an empty result, where should I look for the problem? Thank you. From anic297 at mac.com Mon Oct 17 11:51:22 2016 From: anic297 at mac.com (Marnaud) Date: Mon, 17 Oct 2016 11:51:22 +0000 (GMT) Subject: First steps in Dovecot; IMAP not working Message-ID: <7caba934-2f53-4161-b562-ac4c47e8e90f@me.com> Sorry, my previous message got mangled. I'm re-writing it, quoting manually. I apologize for the traffic. >(Sorry, I read this list in digest form, so I'm frequently half a step behind.) No problem. >No, it's quite explicit. User "webuser" has uid/gid = 1001(webuser)/1000(ftpusers). Your mail spool has permission uid/gid = root(0)/mail(8), neither of which allows webuser to write to this mail spool to create its own mail folder. You're right (I don't have enough Unix habits, it seems...). I couldn't change this user (it must be in the ftpusers group for other purposes), so I tried adding another user for testing mail. "mailtest", the new user, is in group mail(8). In addition, I've added write permission for "others" to /var/mail. Now, I'm trying to send a message to "mailtest" from another, working, e-mail account and nothing happens. This time, "doveadm log errors" is empty. In short, I don't get any error, but no mail either. >Aki Tuomi replies with several solutions: >>In your configuration, dovecot uses whatever user/group is returned by PAM. Since the webuser has never logged in, it has no directory under /var/mail. If you want, you can >>a) override mail_uid and mail_gid in userdb/passdb >>b) pre-create /var/mail/webuser and chown it to webuser:ftpusers >>c) let ftpusers write to /var/mail. OK, I thought I had to do all of them (and didn't understand step a)). So I've done step c) by allowing everyone write access.
>Or if you dynamically/frequently onboard mail accounts, and users cannot arbitrarily write into this directory, you can "chmod 1777 /var/mail/" and let dovecot auto-create it (you might also want to set "lda_mailbox_autocreate = yes"). I've done it right now; same problem. Since "doveadm log errors" returns an empty result, where should I look for the problem? Thank you. From Ralf.Hildebrandt at charite.de Mon Oct 17 13:24:09 2016 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Mon, 17 Oct 2016 15:24:09 +0200 Subject: Massive LMTP Problems with dovecot Message-ID: <20161017132409.sxjgbmesb2o7s43y@charite.de> Currently I'm having massive problems with LMTP delivery into dovecot. dovecot/lmtp processes are piling up, each using considerable amounts of CPU: # ps auxwww|fgrep dove root 20537 0.0 0.0 18124 1196 ? Ss 15:18 0:00 /usr/sbin/dovecot -c /etc/dovecot/dovecot.conf dovecot 20541 0.0 0.0 9620 1084 ? S 15:18 0:00 dovecot/anvil root 20542 0.0 0.0 9752 1264 ? S 15:18 0:00 dovecot/log root 20544 0.0 0.0 21168 2276 ? S 15:18 0:00 dovecot/config copymail 20580 72.8 0.0 39556 7036 ? R 15:18 2:00 dovecot/lmtp dovecot 20582 0.0 0.0 18568 1756 ? S 15:18 0:00 dovecot/auth copymail 20597 77.2 0.0 35688 5136 ? R 15:18 2:06 dovecot/lmtp copymail 20598 39.3 0.0 38060 5596 ? R 15:18 1:04 dovecot/lmtp copymail 20613 62.3 0.0 38036 5600 ? R 15:18 1:41 dovecot/lmtp copymail 20619 56.4 0.0 37732 7448 ? R 15:18 1:31 dovecot/lmtp copymail 20620 75.9 0.0 35872 5336 ? R 15:18 2:03 dovecot/lmtp copymail 20627 37.8 0.0 36480 5892 ? R 15:18 1:01 dovecot/lmtp copymail 20838 60.5 0.0 35640 5036 ? R 15:19 0:59 dovecot/lmtp copymail 20840 66.3 0.0 35920 5296 ? R 15:19 1:04 dovecot/lmtp copymail 20841 66.0 0.0 37456 6852 ? R 15:19 1:04 dovecot/lmtp copymail 20842 64.5 0.0 36424 5808 ? R 15:19 1:02 dovecot/lmtp copymail 20843 67.6 0.0 39612 7064 ? R 15:19 1:05 dovecot/lmtp doveadm stop won't stop these, I have to use kill -9 on them.
I already tried disabling fts (entirely), still things won't speed up. I can't strace: # strace -p 20841 Process 20841 attached (and that's it) # dpkg -l|grep dovecot ii dovecot-core 2:2.2.25-1~auto+57 amd64 secure POP3/IMAP server - core files ii dovecot-imapd 2:2.2.25-1~auto+57 amd64 secure POP3/IMAP server - IMAP daemon ii dovecot-lmtpd 2:2.2.25-1~auto+57 amd64 secure POP3/IMAP server - LMTP server ii dovecot-lucene 2:2.2.25-1~auto+57 amd64 secure POP3/IMAP server - Lucene support ii dovecot-sieve 2:2.2.25-1~auto+57 amd64 secure POP3/IMAP server - Sieve filters support I also tried deleting the mdboxes, that also didn't change anything. Ideas? -- Ralf Hildebrandt Gesch?ftsbereich IT | Abteilung Netzwerk Charit? - Universit?tsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From stephan at rename-it.nl Mon Oct 17 13:45:54 2016 From: stephan at rename-it.nl (Stephan Bosch) Date: Mon, 17 Oct 2016 15:45:54 +0200 Subject: Massive LMTP Problems with dovecot In-Reply-To: <20161017132409.sxjgbmesb2o7s43y@charite.de> References: <20161017132409.sxjgbmesb2o7s43y@charite.de> Message-ID: <3c8fdd70-f345-55d6-1151-f82dc6dfb396@rename-it.nl> Op 10/17/2016 om 3:24 PM schreef Ralf Hildebrandt: > Currently I'm having massive problems with LMTP delivery into dovcot. > dovecot/lmtp processes are piling up, eas using considerable amounts > of CPU: > > # ps auxwww|fgrep dove > > root 20537 0.0 0.0 18124 1196 ? Ss 15:18 0:00 /usr/sbin/dovecot -c /etc/dovecot/dovecot.conf > dovecot 20541 0.0 0.0 9620 1084 ? S 15:18 0:00 dovecot/anvil > root 20542 0.0 0.0 9752 1264 ? S 15:18 0:00 dovecot/log > root 20544 0.0 0.0 21168 2276 ? S 15:18 0:00 dovecot/config > copymail 20580 72.8 0.0 39556 7036 ? R 15:18 2:00 dovecot/lmtp > dovecot 20582 0.0 0.0 18568 1756 ? S 15:18 0:00 dovecot/auth > copymail 20597 77.2 0.0 35688 5136 ? 
R 15:18 2:06 dovecot/lmtp > copymail 20598 39.3 0.0 38060 5596 ? R 15:18 1:04 dovecot/lmtp > copymail 20613 62.3 0.0 38036 5600 ? R 15:18 1:41 dovecot/lmtp > copymail 20619 56.4 0.0 37732 7448 ? R 15:18 1:31 dovecot/lmtp > copymail 20620 75.9 0.0 35872 5336 ? R 15:18 2:03 dovecot/lmtp > copymail 20627 37.8 0.0 36480 5892 ? R 15:18 1:01 dovecot/lmtp > copymail 20838 60.5 0.0 35640 5036 ? R 15:19 0:59 dovecot/lmtp > copymail 20840 66.3 0.0 35920 5296 ? R 15:19 1:04 dovecot/lmtp > copymail 20841 66.0 0.0 37456 6852 ? R 15:19 1:04 dovecot/lmtp > copymail 20842 64.5 0.0 36424 5808 ? R 15:19 1:02 dovecot/lmtp > copymail 20843 67.6 0.0 39612 7064 ? R 15:19 1:05 dovecot/lmtp > > doveadm stop won't stop these, I have to use kill -9 on them. > I already tried disabling fts (entirely), still things won't speed up. > > I can't strace: > # strace -p 20841 > Process 20841 attached > > (and that's it) > > # dpkg -l|grep dovecot > ii dovecot-core 2:2.2.25-1~auto+57 amd64 secure POP3/IMAP server - core files > ii dovecot-imapd 2:2.2.25-1~auto+57 amd64 secure POP3/IMAP server - IMAP daemon > ii dovecot-lmtpd 2:2.2.25-1~auto+57 amd64 secure POP3/IMAP server - LMTP server > ii dovecot-lucene 2:2.2.25-1~auto+57 amd64 secure POP3/IMAP server - Lucene support > ii dovecot-sieve 2:2.2.25-1~auto+57 amd64 secure POP3/IMAP server - Sieve filters support > > I also tried deleting the mdboxes, that also didn't change anything. > Ideas? We'll need the `dovecot -n` output first. Also, you could attach gdb to one of these processes and find out where it is stuck in what looks to be an infinite loop (produce a backtrace). Regards, Stephan. 
From Ralf.Hildebrandt at charite.de Mon Oct 17 13:48:30 2016 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Mon, 17 Oct 2016 15:48:30 +0200 Subject: Massive LMTP Problems with dovecot In-Reply-To: <3syKJD4Vj9z20sts@mail-cbf.charite.de> <3c8fdd70-f345-55d6-1151-f82dc6dfb396@rename-it.nl> Message-ID: <20161017134829.x4qp4opkorg32sd2@charite.de> > We'll need the `dovecot -n` output first. Here we go: > # 2.2.devel (933d16f): /etc/dovecot/dovecot.conf > # Pigeonhole version 0.4.devel (63f9b42) > # OS: Linux 3.13.0-98-generic x86_64 Ubuntu 14.04.5 LTS > default_vsz_limit = 2 G > lmtp_user_concurrency_limit = 10000 > mail_attachment_dir = /home/copymail/attachments > mail_location = mdbox:~/mdbox > mail_plugins = zlib fts fts_lucene > mdbox_rotate_size = 128 M > namespace inbox { > inbox = yes > location = > mailbox Drafts { > special_use = \Drafts > } > mailbox Junk { > special_use = \Junk > } > mailbox Sent { > special_use = \Sent > } > mailbox "Sent Messages" { > special_use = \Sent > } > mailbox Trash { > special_use = \Trash > } > prefix = > } > passdb { > args = username_format=%u /etc/dovecot/passwd > driver = passwd-file > } > plugin { > fts = lucene > fts_autoindex = yes > fts_languages = de,en > fts_lucene = whitespace_chars=@. 
> sieve = file:~/sieve;active=~/.dovecot.sieve > zlib_save = gz > zlib_save_level = 5 > } > protocols = " imap lmtp" > service imap-login { > inet_listener imap { > address = 127.0.0.1 > port = 143 > } > inet_listener imaps { > port = 993 > ssl = yes > } > } > service lmtp { > inet_listener lmtp { > address = 141.42.1.208 > port = 1025 > } > unix_listener /var/spool/postfix/private/dovecot-lmtp { > group = postfix > mode = 0660 > user = postfix > } > } > ssl_ca = /etc/ssl/certs/ca-certificates.crt > ssl_cert = ssl_cipher_list = EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4 > ssl_key = # hidden, use -P to show it > ssl_prefer_server_ciphers = yes > ssl_protocols = !SSLv2 !SSLv3 > userdb { > args = username_format=%u /etc/dovecot/passwd > driver = passwd-file > } > protocol lmtp { > mail_plugins = zlib fts fts_lucene > } Ralf Hildebrandt Gesch?ftsbereich IT | Abteilung Netzwerk Charit? - Universit?tsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. 
+49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From Ralf.Hildebrandt at charite.de Mon Oct 17 14:00:00 2016 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Mon, 17 Oct 2016 16:00:00 +0200 Subject: Massive LMTP Problems with dovecot In-Reply-To: <20161017134829.x4qp4opkorg32sd2@charite.de> References: <3syKJD4Vj9z20sts@mail-cbf.charite.de> <3c8fdd70-f345-55d6-1151-f82dc6dfb396@rename-it.nl> <20161017134829.x4qp4opkorg32sd2@charite.de> Message-ID: <20161017140000.mdfm3sp3eqzve35b@charite.de> I attached gdb top a long running LMTP process: #0 sha1_loop (ctxt=0x7f3b1a4d7fa0, input=0x7f3b1a524860, len=0) at sha1.c:216 input_c = 0x7f3b1a524860 "\211PNG\r\n\032\n" gaplen = gapstart = off = 0 copysiz = #1 0x00007f3b19195b29 in hash_format_loop (format=, data=0x7f3b1a524860, size=0) at hash-format.c:150 list = 0x7f3b1a4d7f80 #2 0x00007f3b1916f5b8 in astream_decode_base64(astream=0x7f3b1a4cb030) at istream-attachment-extractor.c:388 part = 0x7f3b1a4cb228 output = 0x7f3b1a5288c0 size = 0 buf = 0x7f3b1a528070 outfd = 24 extra_buf = 0x0 data = 0x7f3b1a52484e "iW" ret = input = 0x7f3b1a528530 base64_input = 0x7f3b1a5286f0 failed = false #3 astream_part_finish (error_r=0x7ffc00bc2518, astream=) at istream-attachment-extractor.c:485 info = {hash = 0x7f3b1a414c08 "ebd67eb141828144e22a6123b7c9e8ce3401a0db", start_offset = 41786, encoded_size = 331456, base64_blocks_per_line = 0, base64_have_crlf = false, part = 0x0} digest_str = 0x7f3b1a414bd0 data = 0x0 ret = 0 input = 0x7f3b1a5288c0 output = 0x7f3b1a52484e size = 139891821412464 #4 astream_end_of_part (astream=astream at entry=0x7f3b1a4cb030, error_r=error_r at entry=0x7ffc00bc2518) at istream-attachment-extractor.c:571 part = 0x7f3b1a4cb228 old_size = 0 ret = 0 #5 0x00007f3b1916fbdb in astream_read_next (retry_r=, astream=0x7f3b1a4cb030) at istream-attachment-extractor.c:633 stream = 0x7f3b1a4cb030 block = {part = 0x7f3b1a4d8770, hdr = 0x0, data = 0x7f3b19bf91e1 
"\n--_008_VI1PR02MB139090A81DDBB9A3973922378AD00VI1PR02MB1390eurp_\nContent-Type: image/png; name=\"image013.png\"\nContent-Description: image013.png\nContent-Disposition: inline; filename=\"image013.png\"; si"..., size = 64} new_size = ret = old_size = 0 error = 0xcf803a94af74800 #6 i_stream_attachment_extractor_read (stream=0x7f3b1a4cb030) at istream-attachment-extractor.c:668 astream = 0x7f3b1a4cb030 retry = false ret = #7 0x00007f3b1919a1c3 in i_stream_read (stream=0x7f3b1a4cb0a0) at istream.c:174 _stream = 0x7f3b1a4cb030 old_size = 0 ret = __FUNCTION__ = "i_stream_read" #8 0x00007f3b194c2c0b in index_attachment_save_continue (ctx=0x7f3b1a4c59a0) at index-attachment.c:218 storage = 0x7f3b1a4907a0 attach = 0x7f3b1a4d8360 data = size = 1 ret = #9 0x00007f3b1945dcd2 in mailbox_save_continue (ctx=ctx at entry=0x7f3b1a4c59a0) at mail-storage.c:2113 _data_stack_cur_id = 4 ret = #10 0x00007f3b194540ee in mail_storage_try_copy (mail=0x7ffc00bc2658, _ctx=0x7ffc00bc2658) at mail-copy.c:81 ctx = 0x7f3b1a4c59a0 pmail = 0x7ffc00bc2658 ret = input = 0x7f3b1a4c4140 #11 mail_storage_copy (ctx=ctx at entry=0x7f3b1a4c59a0, mail=mail at entry=0x7f3b1a48b770) at mail-copy.c:107 __FUNCTION__ = "mail_storage_copy" #12 0x00007f3b19474806 in mdbox_copy (_ctx=0x7f3b1a4c59a0, mail=0x7f3b1a48b770) at mdbox-save.c:468 ctx = 0x7f3b1a4c59a0 save_mail = 0x7f3b1a48b770 src_mbox = rec = {map_uid = 440968640, save_date = 32571} guid_data = 0x7f3b1a4c59a0 wanted_guid = "p\245H\032;\177\000\000\267\225u\031;\177\000" #13 0x00007f3b180bd2f1 in fts_copy (ctx=0x7f3b1a4c59a0, mail=0x7f3b1a48b770) at fts-storage.c:735 ft = 0x7f3b1a4c4c10 fbox = #14 0x00007f3b1945e16d in mailbox_copy_int (_ctx=_ctx at entry=0x7ffc00bc27a0, mail=0x7f3b1a48b770) at mail-storage.c:2244 _data_stack_cur_id = 3 ctx = 0x7f3b1a4c59a0 t = 0x7f3b1a4c4c40 keywords = 0x0 pvt_flags = 0 backend_mail = 0x7f3b1a48b770 ret = __FUNCTION__ = "mailbox_copy_int" #15 0x00007f3b1945e3e2 in mailbox_save_using_mail (_ctx=_ctx at 
entry=0x7ffc00bc27a0, mail=) at mail-storage.c:2295 ctx = __FUNCTION__ = "mailbox_save_using_mail" #16 0x00007f3b19759789 in mail_deliver_save (ctx=ctx at entry=0x7ffc00bc2930, mailbox=, flags=flags at entry=0, keywords=keywords at entry=0x0, storage_r=storage_r at entry=0x7ffc00bc2908) at mail-deliver.c:383 open_ctx = {user = 0x7f3b1a470e40, lda_mailbox_autocreate = false, lda_mailbox_autosubscribe = false} box = 0x7f3b1a4bb610 trans_flags = t = 0x7f3b1a4c4c40 save_ctx = 0x0 headers_ctx = 0x0 kw = 0x0 error = MAIL_ERROR_NONE mailbox_name = 0x7f3b19b8f258 "INBOX" errstr = 0x0 guid = 0x0 changes = {pool = 0x7f3b1a4502b0, uid_validity = 440869512, saved_uids = {arr = {buffer = 0x7f3b1995e400 , element_size = 1476711940}, v = 0x7f3b1995e400 , v_modifiable = 0x7f3b1995e400 }, ignored_modseq_changes = 440968560, changed = 59, no_read_perm = 127} default_save = ret = 0 __FUNCTION__ = "mail_deliver_save" #17 0x00007f3b19759be3 in mail_deliver (ctx=ctx at entry=0x7ffc00bc2930, storage_r=storage_r at entry=0x7ffc00bc2908) at mail-deliver.c:493 ret = #18 0x00007f3b19b8c24e in client_deliver (session=0x7f3b1a48a570, src_mail=0x7f3b1a48b770, rcpt=0x7f3b1a44b748, client=0x7f3b1a4502d0) at commands.c:890 set_parser = line = str = mail_error = 440730320 ret = input = ns = delivery_time_started = {tv_sec = 1476711940, tv_usec = 46227} sets = storage = 0x7f3b1a4907a0 mail_set = username = dctx = {pool = 0x7f3b1a48a550, set = 0x7f3b1a45df80, session = 0x7f3b1a48a570, timeout_secs = 30, session_time_msecs = 2, delivery_time_started = { tv_sec = 1476711940, tv_usec = 46227}, dup_ctx = 0x0, session_id = 0x7f3b1a44b4d0 "sl0sAgTWBFiZLwAAplP5LA", src_mail = 0x7f3b1a48b770, src_envelope_sender = 0x7f3b1a44b4e8 "hartmut.xxxxxr at getinge.com", dest_user = 0x7f3b1a470e40, dest_addr = 0x7f3b1a44b788 "backup+alexander.xxxxx=charite.de at backup.invalid", final_dest_addr = 0x7f3b1a44b788 "backup+alexander.xxxxx=charite.de at backup.invalid", dest_mailbox_name = 0x7f3b19b8f258 "INBOX", dest_mail 
= 0x7f3b1a4cb700, var_expand_table = 0x0, tempfail_error = 0x0, tried_default_save = true, saved_mail = false, save_dest_mail = false, mailbox_full = false, dsn = false} lda_set = error = #19 client_deliver_next (session=0x7f3b1a48a570, src_mail=0x7f3b1a48b770, client=0x7f3b1a4502d0) at commands.c:930 count = #20 client_input_data_write_local (input=, client=0x7f3b1a4502d0) at commands.c:1026 src_mail = 0x7f3b1a48b770 first_uid = 4294967295 session = 0x7f3b1a48a570 old_uid = 0 #21 client_input_data_write (client=0x7f3b1a4502d0) at commands.c:1161 input = 0x7f3b1a4682f0 #22 client_input_data_handle (client=0x7f3b1a4502d0) at commands.c:1256 data = size = 2366 ret = #23 0x00007f3b191a3e4c in io_loop_call_io (io=0x7f3b1a421610) at ioloop.c:584 ioloop = 0x7f3b1a419750 t_id = 2 __FUNCTION__ = "io_loop_call_io" #24 0x00007f3b191a530a in io_loop_handler_run_internal (ioloop=ioloop at entry=0x7f3b1a419750) at ioloop-epoll.c:222 ctx = 0x7f3b1a41f3b0 io = tv = {tv_sec = 299, tv_usec = 999727} events_count = msecs = ret = 1 i = 0 j = call = __FUNCTION__ = "io_loop_handler_run_internal" #25 0x00007f3b191a3ed5 in io_loop_handler_run (ioloop=ioloop at entry=0x7f3b1a419750) at ioloop.c:632 No locals. #26 0x00007f3b191a4078 in io_loop_run (ioloop=0x7f3b1a419750) at ioloop.c:608 __FUNCTION__ = "io_loop_run" #27 0x00007f3b1912fbe3 in master_service_run (service=0x7f3b1a4195f0, callback=) at master-service.c:641 No locals. #28 0x00007f3b19b8a382 in main (argc=1, argv=0x7f3b1a419390) at main.c:125 set_roots = {0x7f3b1995e400 , 0x7f3b19d91580 , 0x0} service_flags = storage_service_flags = c = -- Ralf Hildebrandt Gesch?ftsbereich IT | Abteilung Netzwerk Charit? - Universit?tsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. 
+49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From Ralf.Hildebrandt at charite.de Mon Oct 17 14:02:32 2016 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Mon, 17 Oct 2016 16:02:32 +0200 Subject: Massive LMTP Problems with dovecot In-Reply-To: <20161017140000.mdfm3sp3eqzve35b@charite.de> References: <3syKJD4Vj9z20sts@mail-cbf.charite.de> <3c8fdd70-f345-55d6-1151-f82dc6dfb396@rename-it.nl> <20161017134829.x4qp4opkorg32sd2@charite.de> <20161017140000.mdfm3sp3eqzve35b@charite.de> Message-ID: <20161017140232.o7z4qyu4kdlwwveb@charite.de> * Ralf Hildebrandt : > I attached gdb to a long-running LMTP process: > > #0 sha1_loop (ctxt=0x7f3b1a4d7fa0, input=0x7f3b1a524860, len=0) at sha1.c:216 > input_c = 0x7f3b1a524860 "\211PNG\r\n\032\n" > gaplen = > gapstart = > off = 0 > copysiz = > > #1 0x00007f3b19195b29 in hash_format_loop (format=, data=0x7f3b1a524860, size=0) at hash-format.c:150 > list = 0x7f3b1a4d7f80 It seems to be looping in sha1_loop & hash_format_loop -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel.
+49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From Ralf.Hildebrandt at charite.de Mon Oct 17 14:08:20 2016 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Mon, 17 Oct 2016 16:08:20 +0200 Subject: Massive LMTP Problems with dovecot In-Reply-To: <20161017140232.o7z4qyu4kdlwwveb@charite.de> References: <3syKJD4Vj9z20sts@mail-cbf.charite.de> <3c8fdd70-f345-55d6-1151-f82dc6dfb396@rename-it.nl> <20161017134829.x4qp4opkorg32sd2@charite.de> <20161017140000.mdfm3sp3eqzve35b@charite.de> <20161017140232.o7z4qyu4kdlwwveb@charite.de> Message-ID: <20161017140820.zspfu6herylymp55@charite.de> * Ralf Hildebrandt : > * Ralf Hildebrandt : > > I attached gdb to a long-running LMTP process: > > > > #0 sha1_loop (ctxt=0x7f3b1a4d7fa0, input=0x7f3b1a524860, len=0) at sha1.c:216 > > input_c = 0x7f3b1a524860 "\211PNG\r\n\032\n" > > gaplen = > > gapstart = > > off = 0 > > copysiz = > > > > #1 0x00007f3b19195b29 in hash_format_loop (format=, data=0x7f3b1a524860, size=0) at hash-format.c:150 > > list = 0x7f3b1a4d7f80 > > It seems to be looping in sha1_loop & hash_format_loop The problem occurs in both 2.3 and 2.2 (I just updated to 2.3 to check). -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel.
+49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From Ralf.Hildebrandt at charite.de Mon Oct 17 14:31:07 2016 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Mon, 17 Oct 2016 16:31:07 +0200 Subject: Massive LMTP Problems with dovecot In-Reply-To: <20161017140820.zspfu6herylymp55@charite.de> References: <3syKJD4Vj9z20sts@mail-cbf.charite.de> <3c8fdd70-f345-55d6-1151-f82dc6dfb396@rename-it.nl> <20161017134829.x4qp4opkorg32sd2@charite.de> <20161017140000.mdfm3sp3eqzve35b@charite.de> <20161017140232.o7z4qyu4kdlwwveb@charite.de> <20161017140820.zspfu6herylymp55@charite.de> Message-ID: <20161017143107.yzh5denau3kzj37w@charite.de> * Ralf Hildebrandt : > > It seems to be looping in sha1_loop & hash_format_loop > > The problem occurs in both 2.3 and 2.2 (I just updated to 2.3 to check). I'm seeing the first occurrence of that problem on the 10th of October! I was using (prior to the 10th): 2.3.0~alpha0-1~auto+371 On the 10th I upgraded (16:04) to: 2.3.0~alpha0-1~auto+376 I'd think the change must have been introduced between 371 and 376 :) I then went back to 2.2.25-1~auto+49 and the issues went away; they reappeared with 2.2.25-1~auto+57. Does that help? -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From kevin at my.walr.us Mon Oct 17 15:08:36 2016 From: kevin at my.walr.us (KT Walrus) Date: Mon, 17 Oct 2016 11:08:36 -0400 Subject: logging TLS SNI hostname In-Reply-To: <201610170841.38721.arekm@maven.pl> References: <201605300829.17351.arekm@maven.pl> <201610170841.38721.arekm@maven.pl> Message-ID: > On Oct 17, 2016, at 2:41 AM, Arkadiusz Miśkiewicz wrote: > > On Monday 30 of May 2016, Arkadiusz Miśkiewicz wrote: >> Is there a way to log the SNI hostname used in a TLS session?
Info is there in >> SSL_CTX_set_tlsext_servername_callback, dovecot copies it to >> ssl_io->host. >> >> Unfortunately I don't see it expanded to any variables ( >> http://wiki.dovecot.org/Variables ). Please consider this to be a feature >> request. >> >> The goal is to be able to see which hostname the client used, like: >> >> May 30 08:21:19 xxx dovecot: pop3-login: Login: user=, method=PLAIN, >> rip=1.1.1.1, lip=2.2.2.2, mpid=17135, TLS, SNI=pop3.somehost.org, >> session= > > Dear Dovecot team, would it be possible to add such a variable ^^^^^ ? > > That would be a neat feature, because the server operator would know which hostname > the client uses to connect to the server (which is really useful when many > hostnames point to a single IP). I'd love to be able to use this SNI domain name in the Dovecot IMAP proxy, for use in the SQL password_query. This would allow the proxy to support multiple IMAP server domains, each with their own set of users. And it would save me money by using only the IP of the proxy for all the IMAP server domains instead of giving each domain a unique IP. Kevin From pierre at jaury.eu Mon Oct 17 13:14:55 2016 From: pierre at jaury.eu (Pierre Jaury) Date: Mon, 17 Oct 2016 15:14:55 +0200 Subject: Dict proxy client returning empty string instead of multiline string In-Reply-To: <5b037246-7d96-b1bb-525c-7a47937c8f81@jaury.eu> References: <612182c9-326d-2983-329b-248cfa6d7804@jaury.eu> <8e81cd26-5d0a-7260-864d-507bc28c2d95@jaury.eu> <5b037246-7d96-b1bb-525c-7a47937c8f81@jaury.eu> Message-ID: Okay, it seems to me that the bug is due to "t_str_tabunescape" using the unsafe datastack ("t_strdup_noconst") while the string is actually returned in an async callback. Before it is handled by "client_dict_lookup", "client_dict_wait" actually fires some IO loops that are more than likely to call "t_pop" and free/flush the result string (I checked, it does call "t_pop" a couple of times indeed).
Maybe it would be safer to use a standard datastack pool when unescaping a string in that context. I could work on a patch that would: - add an optional "pool" attribute to the "client_dict_cmd" structure, - pass the pool to the async lookup function, - use the pool when escaping strings that should survive the callback chain. What do you think? Regards, kaiyou On 10/17/2016 09:52 AM, Pierre Jaury wrote: > While trying to isolate properly and reproduce, I was able to trigger > the same bug with the following code: > > struct dict *dict; > char* dict_uri = "proxy::sieve"; > char* key = "priv/key"; > char* username = "admin at domain.tld"; > char *value, *error; > > dict_drivers_register_builtin(); > dict_init(dict_uri, DICT_DATA_TYPE_STRING, username, > doveadm_settings->base_dir, &dict, &error); > dict_lookup(dict, pool_datastack_create(), key, &value); > printf(">%s\n", value); // outputs an empty string > dict_deinit(&dict); > > I trimmed it down to the bare minimal string manipulation functions involved > but cannot reproduce in that case: > > pool_t pool = pool_datastack_create(); > > char* s1 = "test\001n\001rtest"; > char* s2 = t_str_tabunescape(s1); > char* s3 = p_strdup(pool, s2); > > printf("1>%s\n", s1); > printf("2>%s\n", s2); > printf("3>%s\n", s3); // all three output the string with NL and CR > > Maybe I am missing a function call in the process, or maybe the issue is > related to the way unescaping is performed in the async callback > function in "dict-client.c", or maybe even some other edge case. > > Finally, I was able to run the first snippet without the bug by removing the > string duplication in "t_str_tabunescape" (which I realize is not a > proper solution), or by explicitly using the following pool: > > return str_tabunescape(p_strdup(pool_datastack_create(), str)); > > > Hope this helps. > > kaiyou > > > On 10/17/2016 07:51 AM, Aki Tuomi wrote: >> Hi! >> >> This does sound like a bug, we'll have a look.
>> >> Aki >> >> On 17.10.2016 01:26, Pierre Jaury wrote: >>> I dived a little bit further into the rabbit hole, up to the point where >>> debugging has become impractical, but I still haven't found the root >>> cause for sure. >>> >>> I read most of the code for "p_strdup" based on datastack memory pools >>> (which are used for dictionary lookups both with doveadm and by extdata) >>> and it seems OK. Still, after "t_malloc_real" is called in "t_malloc0", >>> the allocated buffer has the same address as the source string. >>> >>> The only sensible explanation I can come up with is that during >>> unescaping, strings are not allocated properly, leading to the memory >>> pool reusing the string address and zeroing it in the process, before the >>> string copy operation. >>> >>> I will follow up on this path tomorrow; any lead is more than welcome. >>> >>> kaiyou. >>> >>> On 10/16/2016 11:16 PM, Pierre Jaury wrote: >>>> Hello, >>>> >>>> I am using a dict proxy for my sieve extdata plugin to access some >>>> fields from an SQLite database (autoreply text and other >>>> database-configured items). >>>> >>>> All tests are performed against version 2.2.25. >>>> >>>> $ dovecot --version >>>> 2.2.25 (7be1766) >>>> >>>> My configuration looks like: >>>> >>>> dict { >>>> sieve = sqlite:/etc/dovecot/pigeonhole-sieve.dict >>>> } >>>> >>>> [...] >>>> sieve_extdata_dict_uri = proxy::sieve >>>> >>>> I am able to read pretty much any attribute without any issue, except >>>> when the value contains a special character like "\r" or "\n". By using >>>> the doveadm dict client, I narrowed it down to the dictionary management >>>> part (either server, protocol or client). >>>> >>>> I was suspecting the escaping functions from "lib/strescape.c" (mostly >>>> str_tabescape and its counterpart, used by "lib-dict/client.c"), so I >>>> monitored socket communications. It seems that escaping is done properly >>>> on the server and the socket is not an issue either.
>>>> >>>> The following strace dump results from running "doveadm dict get" >>>> against the dict socket: >>>> >>>> connect(8, {sa_family=AF_UNIX, sun_path="..."}, 110) = 0 >>>> fstat(8, {st_mode=S_IFSOCK|0777, st_size=0, ...}) = 0 >>>> [...] >>>> write(8, "H2\t0\t0\tadmin at domain.tld\tsieve\n", 30) = 30 >>>> [...] >>>> read(8, "Otest\1r\1ntest\n", 8192) = 14 >>>> >>>> Indeed "\1r" and "\1n" are the escape sequences used by >>>> "lib/strescape.c". I went deeped and debugged the call to "dict_lookup" >>>> performed by doveadm. Indeed the client gets the proper string from the >>>> socket and to my surprise, it is properly unescaped. >>>> >>>> Then, in "client_dict_lookup" ("lib-dict/dict-client.c"), the call to >>>> "p_strdup" returns an empty string (null byte set at the target address). >>>> >>>> Before the call to the dict "->lookup" attribute (client_dict_lookup): >>>> >>>> RAX: 0x7ffff73a37c0 (push r14) >>>> RBX: 0x6831b8 ("priv/reply_body") >>>> RCX: 0x7fffffffe240 --> 0x682a60 --> 0x6831b8 ("priv/reply_body") >>>> RDX: 0x6831b8 ("priv/reply_body") >>>> RSI: 0x683288 --> 0x7ffff7653120 --> 0x7ffff73ea620 ([...]) >>>> RDI: 0x690ad0 --> 0x7ffff7400713 --> 0x75250079786f7270 ('proxy') >>>> >>>> 0x7ffff73a1f10 : mov rcx,r11 (value_r) >>>> 0x7ffff73a1f13 : mov rdx,r8 (key) >>>> 0x7ffff73a1f16 : mov rsi,r10 (pool) >>>> 0x7ffff73a1f19 : mov rdi,r9 (dict) >>>> 0x7ffff73a1f1c : add rsp,0x8 >>>> => 0x7ffff73a1f20 : jmp rax >>>> >>>> Before the call to p_strdup in "client_dict_lookup": >>>> >>>> RSI: 0x6832d8 ("test\r\ntest") (lookup.result.value) >>>> RDI: 0x683288 --> 0x7ffff7653120 --> [...] 
(pool) >>>> RAX: 0x0 (result) >>>> >>>> 0x7ffff73a384f: nop >>>> 0x7ffff73a3850: mov rsi,QWORD PTR [rsp+0x8] >>>> 0x7ffff73a3855: mov rdi,r14 >>>> => 0x7ffff73a3858: call 0x7ffff736d3c0 >>>> 0x7ffff73a385d: mov QWORD PTR [r13+0x0],rax >>>> 0x7ffff73a3861: mov rsi,QWORD PTR [rsp+0x18] >>>> 0x7ffff73a3866: xor rsi,QWORD PTR fs:0x28 >>>> 0x7ffff73a386f: mov eax,ebx >>>> >>>> After the call: >>>> >>>> 0x7ffff73a3850: mov rsi,QWORD PTR [rsp+0x8] >>>> 0x7ffff73a3855: mov rdi,r14 >>>> 0x7ffff73a3858: call 0x7ffff736d3c0 >>>> => 0x7ffff73a385d: mov QWORD PTR [r13+0x0],rax >>>> 0x7ffff73a3861: mov rsi,QWORD PTR [rsp+0x18] >>>> 0x7ffff73a3866: xor rsi,QWORD PTR fs:0x28 >>>> 0x7ffff73a386f: mov eax,ebx >>>> 0x7ffff73a3871: jne 0x7ffff73a38da >>>> >>>> RSI: 0x0 >>>> RDI: 0x6832d8 --> 0x0 >>>> RAX: 0x6832d8 --> 0x0 (result) >>>> >>>> It is worth noting that I can reproduce the exact same execution flow >>>> with a non-multiline result string (lookup.result.value) that is >>>> properly copied by "p_strdup" and returned in RAX, then displayed by >>>> doveadm. >>>> >>>> I am not familiar with the pooling mechanism hidden behind the call to >>>> p_strdump and not quite sure why this behaviour is emerging. Maybe I am >>>> even miles away from an understanding of the issue here, but it sounds >>>> to me like something is wrong in the way "p_strdup" performs the copy. >>>> >>>> Hope this helps, >>>> kaiyou. 
>>>> >>>> From pierre at jaury.eu Mon Oct 17 07:52:49 2016 From: pierre at jaury.eu (Pierre Jaury) Date: Mon, 17 Oct 2016 09:52:49 +0200 Subject: Dict proxy client returning empty string instead of multiline string In-Reply-To: References: <612182c9-326d-2983-329b-248cfa6d7804@jaury.eu> <8e81cd26-5d0a-7260-864d-507bc28c2d95@jaury.eu> Message-ID: <5b037246-7d96-b1bb-525c-7a47937c8f81@jaury.eu> While trying to isolate and reproduce the issue properly, I was able to trigger the same bug with the following code: struct dict *dict; char* dict_uri = "proxy::sieve"; char* key = "priv/key"; char* username = "admin at domain.tld"; char *value, *error; dict_drivers_register_builtin(); dict_init(dict_uri, DICT_DATA_TYPE_STRING, username, doveadm_settings->base_dir, &dict, &error); dict_lookup(dict, pool_datastack_create(), key, &value); printf(">%s\n", value); // outputs an empty string dict_deinit(&dict); I trimmed it down to the bare minimum of string manipulation functions involved but cannot reproduce in that case: pool_t pool = pool_datastack_create(); char* s1 = "test\001n\001rtest"; char* s2 = t_str_tabunescape(s1); char* s3 = p_strdup(pool, s2); printf("1>%s\n", s1); printf("2>%s\n", s2); printf("3>%s\n", s3); // all three output the string with NL and CR Maybe I am missing a function call in the process, or maybe the issue is related to the way unescaping is performed in the async callback function in "dict-client.c", or maybe even some other edge case. Finally, I was able to run the first snippet without the bug by removing the string duplication in "t_str_tabunescape" (which I realize is not a proper solution), or by explicitly using the following pool: return str_tabunescape(p_strdup(pool_datastack_create(), str)); Hope this helps. kaiyou On 10/17/2016 07:51 AM, Aki Tuomi wrote: > Hi! > > This does sound like a bug, we'll have look. 
> > Aki > > > On 17.10.2016 01:26, Pierre Jaury wrote: >> I dived a little bit further into the rabbit hole, up to the point where >> debugging has become unpracticle but I still haven't found the root >> cause for sure. >> >> I read most of the code for "p_strdup" based on datastack memory pools >> (which are used for dictionary lookups both with doveadm and by extdata) >> and it seems ok. Still, after "t_malloc_real" is called in "t_malloc0", >> the allocated buffer has the same address as the source string. >> >> The only sensible explanation I can come up with is that during >> unescaping, strings are not allocated properly, leading to the memory >> pool reusing the string address and zeroing it in the process before the >> string copy operation. >> >> I will follow on this path tomorrow, any lead is more than welcome. >> >> kaiyou. >> >> On 10/16/2016 11:16 PM, Pierre Jaury wrote: >>> Hello, >>> >>> I am using a dict proxy for my sieve extdata plugin to access some >>> fields from an SQLite database (autoreply text and other >>> database-configured items). >>> >>> All tests are performed against version 2.2.25. >>> >>> $ dovecot --version >>> 2.2.25 (7be1766) >>> >>> My configuration looks like: >>> >>> dict { >>> sieve = sqlite:/etc/dovecot/pigeonhole-sieve.dict >>> } >>> >>> [...] >>> sieve_extdata_dict_uri = proxy::sieve >>> >>> I am able to read pretty much any attribute without any issue, except >>> when the value contains a special character like "\r" or "\n". By using >>> the doveadm dict client, I narrowed it down to the dictionary management >>> part (either server, protocol or client). >>> >>> I was suspecting escaping functions from "lib/strescape.c" (mostly >>> str_tabescape and its counterpart, used by "lib-dict/client.c"), so I >>> monitored socket communications. It seems that escaping is done properly >>> on the server and the socket is not an issue either. 
>>> >>> The following strace dump results from running "doveadm dict get" >>> against the dict socket: >>> >>> connect(8, {sa_family=AF_UNIX, sun_path="..."}, 110) = 0 >>> fstat(8, {st_mode=S_IFSOCK|0777, st_size=0, ...}) = 0 >>> [...] >>> write(8, "H2\t0\t0\tadmin at domain.tld\tsieve\n", 30) = 30 >>> [...] >>> read(8, "Otest\1r\1ntest\n", 8192) = 14 >>> >>> Indeed "\1r" and "\1n" are the escape sequences used by >>> "lib/strescape.c". I went deeped and debugged the call to "dict_lookup" >>> performed by doveadm. Indeed the client gets the proper string from the >>> socket and to my surprise, it is properly unescaped. >>> >>> Then, in "client_dict_lookup" ("lib-dict/dict-client.c"), the call to >>> "p_strdup" returns an empty string (null byte set at the target address). >>> >>> Before the call to the dict "->lookup" attribute (client_dict_lookup): >>> >>> RAX: 0x7ffff73a37c0 (push r14) >>> RBX: 0x6831b8 ("priv/reply_body") >>> RCX: 0x7fffffffe240 --> 0x682a60 --> 0x6831b8 ("priv/reply_body") >>> RDX: 0x6831b8 ("priv/reply_body") >>> RSI: 0x683288 --> 0x7ffff7653120 --> 0x7ffff73ea620 ([...]) >>> RDI: 0x690ad0 --> 0x7ffff7400713 --> 0x75250079786f7270 ('proxy') >>> >>> 0x7ffff73a1f10 : mov rcx,r11 (value_r) >>> 0x7ffff73a1f13 : mov rdx,r8 (key) >>> 0x7ffff73a1f16 : mov rsi,r10 (pool) >>> 0x7ffff73a1f19 : mov rdi,r9 (dict) >>> 0x7ffff73a1f1c : add rsp,0x8 >>> => 0x7ffff73a1f20 : jmp rax >>> >>> Before the call to p_strdup in "client_dict_lookup": >>> >>> RSI: 0x6832d8 ("test\r\ntest") (lookup.result.value) >>> RDI: 0x683288 --> 0x7ffff7653120 --> [...] 
(pool) >>> RAX: 0x0 (result) >>> >>> 0x7ffff73a384f: nop >>> 0x7ffff73a3850: mov rsi,QWORD PTR [rsp+0x8] >>> 0x7ffff73a3855: mov rdi,r14 >>> => 0x7ffff73a3858: call 0x7ffff736d3c0 >>> 0x7ffff73a385d: mov QWORD PTR [r13+0x0],rax >>> 0x7ffff73a3861: mov rsi,QWORD PTR [rsp+0x18] >>> 0x7ffff73a3866: xor rsi,QWORD PTR fs:0x28 >>> 0x7ffff73a386f: mov eax,ebx >>> >>> After the call: >>> >>> 0x7ffff73a3850: mov rsi,QWORD PTR [rsp+0x8] >>> 0x7ffff73a3855: mov rdi,r14 >>> 0x7ffff73a3858: call 0x7ffff736d3c0 >>> => 0x7ffff73a385d: mov QWORD PTR [r13+0x0],rax >>> 0x7ffff73a3861: mov rsi,QWORD PTR [rsp+0x18] >>> 0x7ffff73a3866: xor rsi,QWORD PTR fs:0x28 >>> 0x7ffff73a386f: mov eax,ebx >>> 0x7ffff73a3871: jne 0x7ffff73a38da >>> >>> RSI: 0x0 >>> RDI: 0x6832d8 --> 0x0 >>> RAX: 0x6832d8 --> 0x0 (result) >>> >>> It is worth noting that I can reproduce the exact same execution flow >>> with a non-multiline result string (lookup.result.value) that is >>> properly copied by "p_strdup" and returned in RAX, then displayed by >>> doveadm. >>> >>> I am not familiar with the pooling mechanism hidden behind the call to >>> p_strdump and not quite sure why this behaviour is emerging. Maybe I am >>> even miles away from an understanding of the issue here, but it sounds >>> to me like something is wrong in the way "p_strdup" performs the copy. >>> >>> Hope this helps, >>> kaiyou. >>> >>> >>> From leo at strike.wu.ac.at Mon Oct 17 16:05:17 2016 From: leo at strike.wu.ac.at (Alexander 'Leo' Bergolth) Date: Mon, 17 Oct 2016 18:05:17 +0200 Subject: sieve duplicate locking Message-ID: <5804F6BD.3090302@strike.wu.ac.at> Hi! Does the duplicate sieve plugin do any locking to avoid duplicate parallel delivery of the same message? I sometimes experience duplicate mail delivery of messages with the same message-id, despite the use of a sieve duplicate filter. According to the log files, those messages are delivered in the same second by two parallel dovecot-lda processes. 
(Duplicate filtering works fine in all other cases.) RFC 7352 states that the ID of a message may only be committed to the duplicate tracking list at the _end_ of a successful script execution, which may lead to race conditions. Maybe I am running into this? Is there an easy way to serialize mail delivery using some locking inside sieve? Or do I have to serialize per-user dovecot-lda delivery? Any experiences with that? I am using dovecot-2.2.25 and pigeonhole-0.4.15. Mail is delivered using postfix-2.10 and dovecot-lda as mailbox_command. Mailbox format is maildir with LAYOUT=fs. Cheers, --leo -- e-mail ::: Leo.Bergolth (at) wu.ac.at fax ::: +43-1-31336-906050 location ::: IT-Services | Vienna University of Economics | Austria From aki.tuomi at dovecot.fi Mon Oct 17 16:23:58 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Mon, 17 Oct 2016 19:23:58 +0300 (EEST) Subject: Dict proxy client returning empty string instead of multiline string In-Reply-To: References: <612182c9-326d-2983-329b-248cfa6d7804@jaury.eu> <8e81cd26-5d0a-7260-864d-507bc28c2d95@jaury.eu> <5b037246-7d96-b1bb-525c-7a47937c8f81@jaury.eu> Message-ID: <137204017.2231.1476721439502@appsuite-dev.open-xchange.com> Hi! Looking at the code, I think the bug is that it just copies a pointer to the value; it should instead duplicate the value into some memory region. Can you see if the following patch fixes it? Aki > On October 17, 2016 at 4:14 PM Pierre Jaury wrote: > > > Okay, it seems to me that the bug is due to "t_str_tabunescape" using > the unsafe datastack ("t_strdup_noconst") while the string is actually > returned in an async callback. > > Before it is handled by "client_dict_lookup", "client_dict_wait" > actually fires some IO loops that are more than likely to call "t_pop" > and free/flush the result string (I checked, it does call "t_pop" a > couple times indeed). Maybe it would be safer to use a standard > datastack pool when unescaping a string in that context. 
> > I could work on a patch that would: > > - add an optional "pool" attribute to the "client_dict_cmd" structure, > - pass the pool to the async lookup function, > - use the pool when escaping strings that should survive the callback > chain > > What do you think? > > Regards, > kaiyou > > > On 10/17/2016 09:52 AM, Pierre Jaury wrote: > > While trying to isolate properly and reproduce, I was able to trigger > > the same bug with the following code: > > > > struct dict *dict; > > char* dict_uri = "proxy::sieve"; > > char* key = "priv/key"; > > char* username = "admin at domain.tld"; > > char* value, error; > > > > dict_drivers_register_builtin(); > > dict_init(dict_uri, DICT_DATA_TYPE_STRING, username, > > doveadm_settings->base_dir, &dict, &error); > > dict_lookup(dict, pool_datastack_create(), key, &value); > > printf(">%s\n", value); // outputs an empty string > > dict_deinit(&dict); > > > > I trimmed it to the bare minimal string manipulation functions involved > > but cannot reproduce in that case: > > > > pool_t pool = pool_datastack_create(); > > > > char* s1 = "test\001n\001rtest"; > > char* s2 = t_str_tabunescape(s1); > > char* s3 = p_strdup(pool, s2); > > > > printf("1>%s\n", s1); > > printf("2>%s\n", s2); > > printf("3>%s\n", s3); // all three output the string with NL and CR > > > > Maybe I am missing a function call in the process or maybe the issue is > > related to the way unescaping is performed in the async callback > > function in "dict-client.c", or maybe even some other edge case. > > > > Finally, I was able to run the first snippet without bug by removing the > > string duplication in "t_str_tabunescape" (which I realize is not a > > proper solution), or by explicitely using the following pool: > > > > return str_tabunescape(p_strdup(pool_datastack_create(), str)); > > > > > > Hope this helps. > > > > kaiyou > > > > > > On 10/17/2016 07:51 AM, Aki Tuomi wrote: > >> Hi! > >> > >> This does sound like a bug, we'll have look. 
> >> > >> Aki > >> > >> > >> On 17.10.2016 01:26, Pierre Jaury wrote: > >>> I dived a little bit further into the rabbit hole, up to the point where > >>> debugging has become unpracticle but I still haven't found the root > >>> cause for sure. > >>> > >>> I read most of the code for "p_strdup" based on datastack memory pools > >>> (which are used for dictionary lookups both with doveadm and by extdata) > >>> and it seems ok. Still, after "t_malloc_real" is called in "t_malloc0", > >>> the allocated buffer has the same address as the source string. > >>> > >>> The only sensible explanation I can come up with is that during > >>> unescaping, strings are not allocated properly, leading to the memory > >>> pool reusing the string address and zeroing it in the process before the > >>> string copy operation. > >>> > >>> I will follow on this path tomorrow, any lead is more than welcome. > >>> > >>> kaiyou. > >>> > >>> On 10/16/2016 11:16 PM, Pierre Jaury wrote: > >>>> Hello, > >>>> > >>>> I am using a dict proxy for my sieve extdata plugin to access some > >>>> fields from an SQLite database (autoreply text and other > >>>> database-configured items). > >>>> > >>>> All tests are performed against version 2.2.25. > >>>> > >>>> $ dovecot --version > >>>> 2.2.25 (7be1766) > >>>> > >>>> My configuration looks like: > >>>> > >>>> dict { > >>>> sieve = sqlite:/etc/dovecot/pigeonhole-sieve.dict > >>>> } > >>>> > >>>> [...] > >>>> sieve_extdata_dict_uri = proxy::sieve > >>>> > >>>> I am able to read pretty much any attribute without any issue, except > >>>> when the value contains a special character like "\r" or "\n". By using > >>>> the doveadm dict client, I narrowed it down to the dictionary management > >>>> part (either server, protocol or client). > >>>> > >>>> I was suspecting escaping functions from "lib/strescape.c" (mostly > >>>> str_tabescape and its counterpart, used by "lib-dict/client.c"), so I > >>>> monitored socket communications. 
It seems that escaping is done properly > >>>> on the server and the socket is not an issue either. > >>>> > >>>> The following strace dump results from running "doveadm dict get" > >>>> against the dict socket: > >>>> > >>>> connect(8, {sa_family=AF_UNIX, sun_path="..."}, 110) = 0 > >>>> fstat(8, {st_mode=S_IFSOCK|0777, st_size=0, ...}) = 0 > >>>> [...] > >>>> write(8, "H2\t0\t0\tadmin at domain.tld\tsieve\n", 30) = 30 > >>>> [...] > >>>> read(8, "Otest\1r\1ntest\n", 8192) = 14 > >>>> > >>>> Indeed "\1r" and "\1n" are the escape sequences used by > >>>> "lib/strescape.c". I went deeped and debugged the call to "dict_lookup" > >>>> performed by doveadm. Indeed the client gets the proper string from the > >>>> socket and to my surprise, it is properly unescaped. > >>>> > >>>> Then, in "client_dict_lookup" ("lib-dict/dict-client.c"), the call to > >>>> "p_strdup" returns an empty string (null byte set at the target address). > >>>> > >>>> Before the call to the dict "->lookup" attribute (client_dict_lookup): > >>>> > >>>> RAX: 0x7ffff73a37c0 (push r14) > >>>> RBX: 0x6831b8 ("priv/reply_body") > >>>> RCX: 0x7fffffffe240 --> 0x682a60 --> 0x6831b8 ("priv/reply_body") > >>>> RDX: 0x6831b8 ("priv/reply_body") > >>>> RSI: 0x683288 --> 0x7ffff7653120 --> 0x7ffff73ea620 ([...]) > >>>> RDI: 0x690ad0 --> 0x7ffff7400713 --> 0x75250079786f7270 ('proxy') > >>>> > >>>> 0x7ffff73a1f10 : mov rcx,r11 (value_r) > >>>> 0x7ffff73a1f13 : mov rdx,r8 (key) > >>>> 0x7ffff73a1f16 : mov rsi,r10 (pool) > >>>> 0x7ffff73a1f19 : mov rdi,r9 (dict) > >>>> 0x7ffff73a1f1c : add rsp,0x8 > >>>> => 0x7ffff73a1f20 : jmp rax > >>>> > >>>> Before the call to p_strdup in "client_dict_lookup": > >>>> > >>>> RSI: 0x6832d8 ("test\r\ntest") (lookup.result.value) > >>>> RDI: 0x683288 --> 0x7ffff7653120 --> [...] 
(pool) > >>>> RAX: 0x0 (result) > >>>> > >>>> 0x7ffff73a384f: nop > >>>> 0x7ffff73a3850: mov rsi,QWORD PTR [rsp+0x8] > >>>> 0x7ffff73a3855: mov rdi,r14 > >>>> => 0x7ffff73a3858: call 0x7ffff736d3c0 > >>>> 0x7ffff73a385d: mov QWORD PTR [r13+0x0],rax > >>>> 0x7ffff73a3861: mov rsi,QWORD PTR [rsp+0x18] > >>>> 0x7ffff73a3866: xor rsi,QWORD PTR fs:0x28 > >>>> 0x7ffff73a386f: mov eax,ebx > >>>> > >>>> After the call: > >>>> > >>>> 0x7ffff73a3850: mov rsi,QWORD PTR [rsp+0x8] > >>>> 0x7ffff73a3855: mov rdi,r14 > >>>> 0x7ffff73a3858: call 0x7ffff736d3c0 > >>>> => 0x7ffff73a385d: mov QWORD PTR [r13+0x0],rax > >>>> 0x7ffff73a3861: mov rsi,QWORD PTR [rsp+0x18] > >>>> 0x7ffff73a3866: xor rsi,QWORD PTR fs:0x28 > >>>> 0x7ffff73a386f: mov eax,ebx > >>>> 0x7ffff73a3871: jne 0x7ffff73a38da > >>>> > >>>> RSI: 0x0 > >>>> RDI: 0x6832d8 --> 0x0 > >>>> RAX: 0x6832d8 --> 0x0 (result) > >>>> > >>>> It is worth noting that I can reproduce the exact same execution flow > >>>> with a non-multiline result string (lookup.result.value) that is > >>>> properly copied by "p_strdup" and returned in RAX, then displayed by > >>>> doveadm. > >>>> > >>>> I am not familiar with the pooling mechanism hidden behind the call to > >>>> p_strdump and not quite sure why this behaviour is emerging. Maybe I am > >>>> even miles away from an understanding of the issue here, but it sounds > >>>> to me like something is wrong in the way "p_strdup" performs the copy. > >>>> > >>>> Hope this helps, > >>>> kaiyou. > >>>> > >>>> > >>>> -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 0001-lib-dict-Duplicate-result-value-in-mempool.patch Type: text/x-diff Size: 1797 bytes Desc: not available URL: From pierre at jaury.eu Mon Oct 17 18:59:57 2016 From: pierre at jaury.eu (Pierre Jaury) Date: Mon, 17 Oct 2016 20:59:57 +0200 Subject: Dict proxy client returning empty string instead of multiline string In-Reply-To: <137204017.2231.1476721439502@appsuite-dev.open-xchange.com> References: <612182c9-326d-2983-329b-248cfa6d7804@jaury.eu> <8e81cd26-5d0a-7260-864d-507bc28c2d95@jaury.eu> <5b037246-7d96-b1bb-525c-7a47937c8f81@jaury.eu> <137204017.2231.1476721439502@appsuite-dev.open-xchange.com> Message-ID: <1e882b6a-2912-5db2-2469-acae7b3b2ee9@jaury.eu> Thanks for your help, indeed duplicating the result sounds cleaner than duplicating before escaping. However, your patch still fails, this time when allocating in "pool_data_stack_malloc". I get the following stack trace: Panic: pool_data_stack_malloc(): stack frame changed Error: Raw backtrace: /usr/local/lib/dovecot/libdovecot.so.0(+0x91662) [0x7f4106ba1662] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x916e9) [0x7f4106ba16e9] -> /usr/local/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7f4106b3aae1] -> /usr/local/lib/dovecot/libdovecot.so.0(+0xac14e) [0x7f4106bbc14e] -> /usr/local/lib/dovecot/libdovecot.so.0(p_strdup+0x28) [0x7f4106bcbd88] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x5ef0c) [0x7f4106b6ef0c] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x5f30f) [0x7f4106b6f30f] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x5ebef) [0x7f4106b6ebef] -> /usr/local/lib/dovecot/libdovecot.so.0(connection_input_default+0xb1) [0x7f4106b9ee81] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x5f) [0x7f4106bb5fdf] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x109) [0x7f4106bb7499] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x25) [0x7f4106bb6085] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7f4106bb6228] -> 
/usr/local/lib/dovecot/libdovecot.so.0(+0x5ef6c) [0x7f4106b6ef6c] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x5fafc) [0x7f4106b6fafc] -> /home/kaiyou/projects/dovecot/src/doveadm/.libs/doveadm(+0x1b4b0) [0x55f11cd3e4b0] -> /home/kaiyou/projects/dovecot/src/doveadm/.libs/doveadm(doveadm_cmd_ver2_to_cmd_wrapper+0x23a) [0x55f11cd5ae6a] -> /home/kaiyou/projects/dovecot/src/doveadm/.libs/doveadm(doveadm_cmd_run_ver2+0x555) [0x55f11cd5bc15] -> /home/kaiyou/projects/dovecot/src/doveadm/.libs/doveadm(doveadm_cmd_try_run_ver2+0x37) [0x55f11cd5bc67] -> /home/kaiyou/projects/dovecot/src/doveadm/.libs/doveadm(main+0x1da) [0x55f11cd3c31a] -> /usr/lib/libc.so.6(__libc_start_main+0xf1) [0x7f4106792291] -> /home/kaiyou/projects/dovecot/src/doveadm/.libs/doveadm(_start+0x2a) [0x55f11cd3c6fa] The trace is missing some symbols, I will debug tomorrow and see where the call comes from exactly. Regards, On 10/17/2016 06:23 PM, Aki Tuomi wrote: > Hi! > > Looking at the code, I think the bug is that it just copies pointer to the value, it should, instead, duplicate the value to some memory region. Can you see if this following patch fixes it? > > Aki > >> On October 17, 2016 at 4:14 PM Pierre Jaury wrote: >> >> >> Okay, it seems to me that the bug is due to "t_str_tabunescape" using >> the unsafe datastack ("t_strdup_noconst") while the string is actually >> returned in an async callback. >> >> Before it is handled by "client_dict_lookup", "client_dict_wait" >> actually fires some IO loops that are more than likely to call "t_pop" >> and free/flush the result string (I checked, it does call "t_pop" a >> couple times indeed). Maybe it would be safer to use a standard >> datastack pool when unescaping a string in that context. 
>> >> I could work on a patch that would: >> >> - add an optional "pool" attribute to the "client_dict_cmd" structure, >> - pass the pool to the async lookup function, >> - use the pool when escaping strings that should survive the callback >> chain >> >> What do you think? >> >> Regards, >> kaiyou >> >> >> On 10/17/2016 09:52 AM, Pierre Jaury wrote: >>> While trying to isolate properly and reproduce, I was able to trigger >>> the same bug with the following code: >>> >>> struct dict *dict; >>> char* dict_uri = "proxy::sieve"; >>> char* key = "priv/key"; >>> char* username = "admin at domain.tld"; >>> char* value, error; >>> >>> dict_drivers_register_builtin(); >>> dict_init(dict_uri, DICT_DATA_TYPE_STRING, username, >>> doveadm_settings->base_dir, &dict, &error); >>> dict_lookup(dict, pool_datastack_create(), key, &value); >>> printf(">%s\n", value); // outputs an empty string >>> dict_deinit(&dict); >>> >>> I trimmed it to the bare minimal string manipulation functions involved >>> but cannot reproduce in that case: >>> >>> pool_t pool = pool_datastack_create(); >>> >>> char* s1 = "test\001n\001rtest"; >>> char* s2 = t_str_tabunescape(s1); >>> char* s3 = p_strdup(pool, s2); >>> >>> printf("1>%s\n", s1); >>> printf("2>%s\n", s2); >>> printf("3>%s\n", s3); // all three output the string with NL and CR >>> >>> Maybe I am missing a function call in the process or maybe the issue is >>> related to the way unescaping is performed in the async callback >>> function in "dict-client.c", or maybe even some other edge case. >>> >>> Finally, I was able to run the first snippet without bug by removing the >>> string duplication in "t_str_tabunescape" (which I realize is not a >>> proper solution), or by explicitely using the following pool: >>> >>> return str_tabunescape(p_strdup(pool_datastack_create(), str)); >>> >>> >>> Hope this helps. >>> >>> kaiyou >>> >>> >>> On 10/17/2016 07:51 AM, Aki Tuomi wrote: >>>> Hi! >>>> >>>> This does sound like a bug, we'll have look. 
>>>> >>>> Aki >>>> >>>> >>>> On 17.10.2016 01:26, Pierre Jaury wrote: >>>>> I dived a little bit further into the rabbit hole, up to the point where >>>>> debugging has become unpracticle but I still haven't found the root >>>>> cause for sure. >>>>> >>>>> I read most of the code for "p_strdup" based on datastack memory pools >>>>> (which are used for dictionary lookups both with doveadm and by extdata) >>>>> and it seems ok. Still, after "t_malloc_real" is called in "t_malloc0", >>>>> the allocated buffer has the same address as the source string. >>>>> >>>>> The only sensible explanation I can come up with is that during >>>>> unescaping, strings are not allocated properly, leading to the memory >>>>> pool reusing the string address and zeroing it in the process before the >>>>> string copy operation. >>>>> >>>>> I will follow on this path tomorrow, any lead is more than welcome. >>>>> >>>>> kaiyou. >>>>> >>>>> On 10/16/2016 11:16 PM, Pierre Jaury wrote: >>>>>> Hello, >>>>>> >>>>>> I am using a dict proxy for my sieve extdata plugin to access some >>>>>> fields from an SQLite database (autoreply text and other >>>>>> database-configured items). >>>>>> >>>>>> All tests are performed against version 2.2.25. >>>>>> >>>>>> $ dovecot --version >>>>>> 2.2.25 (7be1766) >>>>>> >>>>>> My configuration looks like: >>>>>> >>>>>> dict { >>>>>> sieve = sqlite:/etc/dovecot/pigeonhole-sieve.dict >>>>>> } >>>>>> >>>>>> [...] >>>>>> sieve_extdata_dict_uri = proxy::sieve >>>>>> >>>>>> I am able to read pretty much any attribute without any issue, except >>>>>> when the value contains a special character like "\r" or "\n". By using >>>>>> the doveadm dict client, I narrowed it down to the dictionary management >>>>>> part (either server, protocol or client). >>>>>> >>>>>> I was suspecting escaping functions from "lib/strescape.c" (mostly >>>>>> str_tabescape and its counterpart, used by "lib-dict/client.c"), so I >>>>>> monitored socket communications. 
It seems that escaping is done properly >>>>>> on the server and the socket is not an issue either. >>>>>> >>>>>> The following strace dump results from running "doveadm dict get" >>>>>> against the dict socket: >>>>>> >>>>>> connect(8, {sa_family=AF_UNIX, sun_path="..."}, 110) = 0 >>>>>> fstat(8, {st_mode=S_IFSOCK|0777, st_size=0, ...}) = 0 >>>>>> [...] >>>>>> write(8, "H2\t0\t0\tadmin at domain.tld\tsieve\n", 30) = 30 >>>>>> [...] >>>>>> read(8, "Otest\1r\1ntest\n", 8192) = 14 >>>>>> >>>>>> Indeed "\1r" and "\1n" are the escape sequences used by >>>>>> "lib/strescape.c". I went deeped and debugged the call to "dict_lookup" >>>>>> performed by doveadm. Indeed the client gets the proper string from the >>>>>> socket and to my surprise, it is properly unescaped. >>>>>> >>>>>> Then, in "client_dict_lookup" ("lib-dict/dict-client.c"), the call to >>>>>> "p_strdup" returns an empty string (null byte set at the target address). >>>>>> >>>>>> Before the call to the dict "->lookup" attribute (client_dict_lookup): >>>>>> >>>>>> RAX: 0x7ffff73a37c0 (push r14) >>>>>> RBX: 0x6831b8 ("priv/reply_body") >>>>>> RCX: 0x7fffffffe240 --> 0x682a60 --> 0x6831b8 ("priv/reply_body") >>>>>> RDX: 0x6831b8 ("priv/reply_body") >>>>>> RSI: 0x683288 --> 0x7ffff7653120 --> 0x7ffff73ea620 ([...]) >>>>>> RDI: 0x690ad0 --> 0x7ffff7400713 --> 0x75250079786f7270 ('proxy') >>>>>> >>>>>> 0x7ffff73a1f10 : mov rcx,r11 (value_r) >>>>>> 0x7ffff73a1f13 : mov rdx,r8 (key) >>>>>> 0x7ffff73a1f16 : mov rsi,r10 (pool) >>>>>> 0x7ffff73a1f19 : mov rdi,r9 (dict) >>>>>> 0x7ffff73a1f1c : add rsp,0x8 >>>>>> => 0x7ffff73a1f20 : jmp rax >>>>>> >>>>>> Before the call to p_strdup in "client_dict_lookup": >>>>>> >>>>>> RSI: 0x6832d8 ("test\r\ntest") (lookup.result.value) >>>>>> RDI: 0x683288 --> 0x7ffff7653120 --> [...] 
(pool) >>>>>> RAX: 0x0 (result) >>>>>> >>>>>> 0x7ffff73a384f: nop >>>>>> 0x7ffff73a3850: mov rsi,QWORD PTR [rsp+0x8] >>>>>> 0x7ffff73a3855: mov rdi,r14 >>>>>> => 0x7ffff73a3858: call 0x7ffff736d3c0 >>>>>> 0x7ffff73a385d: mov QWORD PTR [r13+0x0],rax >>>>>> 0x7ffff73a3861: mov rsi,QWORD PTR [rsp+0x18] >>>>>> 0x7ffff73a3866: xor rsi,QWORD PTR fs:0x28 >>>>>> 0x7ffff73a386f: mov eax,ebx >>>>>> >>>>>> After the call: >>>>>> >>>>>> 0x7ffff73a3850: mov rsi,QWORD PTR [rsp+0x8] >>>>>> 0x7ffff73a3855: mov rdi,r14 >>>>>> 0x7ffff73a3858: call 0x7ffff736d3c0 >>>>>> => 0x7ffff73a385d: mov QWORD PTR [r13+0x0],rax >>>>>> 0x7ffff73a3861: mov rsi,QWORD PTR [rsp+0x18] >>>>>> 0x7ffff73a3866: xor rsi,QWORD PTR fs:0x28 >>>>>> 0x7ffff73a386f: mov eax,ebx >>>>>> 0x7ffff73a3871: jne 0x7ffff73a38da >>>>>> >>>>>> RSI: 0x0 >>>>>> RDI: 0x6832d8 --> 0x0 >>>>>> RAX: 0x6832d8 --> 0x0 (result) >>>>>> >>>>>> It is worth noting that I can reproduce the exact same execution flow >>>>>> with a non-multiline result string (lookup.result.value) that is >>>>>> properly copied by "p_strdup" and returned in RAX, then displayed by >>>>>> doveadm. >>>>>> >>>>>> I am not familiar with the pooling mechanism hidden behind the call to >>>>>> p_strdump and not quite sure why this behaviour is emerging. Maybe I am >>>>>> even miles away from an understanding of the issue here, but it sounds >>>>>> to me like something is wrong in the way "p_strdup" performs the copy. >>>>>> >>>>>> Hope this helps, >>>>>> kaiyou. 
>>>>>> >>>>>> >>>>> > From aki.tuomi at dovecot.fi Mon Oct 17 19:18:02 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Mon, 17 Oct 2016 22:18:02 +0300 (EEST) Subject: Dict proxy client returning empty string instead of multiline string In-Reply-To: <1e882b6a-2912-5db2-2469-acae7b3b2ee9@jaury.eu> References: <612182c9-326d-2983-329b-248cfa6d7804@jaury.eu> <8e81cd26-5d0a-7260-864d-507bc28c2d95@jaury.eu> <5b037246-7d96-b1bb-525c-7a47937c8f81@jaury.eu> <137204017.2231.1476721439502@appsuite-dev.open-xchange.com> <1e882b6a-2912-5db2-2469-acae7b3b2ee9@jaury.eu> Message-ID: <1175200760.54.1476731884054@appsuite-dev.open-xchange.com> Oh duh, it used datastack pool. Try again with the attached patch? Please remember to clear the previous one out before trying. Aki > > The trace is missing some symbols, I will debug tomorrow and see where > the call comes from exactly. > > Regards, > > > On 10/17/2016 06:23 PM, Aki Tuomi wrote: > > Hi! > > > > Looking at the code, I think the bug is that it just copies pointer to the value, it should, instead, duplicate the value to some memory region. Can you see if this following patch fixes it? > > > > Aki > > > >> On October 17, 2016 at 4:14 PM Pierre Jaury wrote: > >> > >> > >> Okay, it seems to me that the bug is due to "t_str_tabunescape" using > >> the unsafe datastack ("t_strdup_noconst") while the string is actually > >> returned in an async callback. > >> > >> Before it is handled by "client_dict_lookup", "client_dict_wait" > >> actually fires some IO loops that are more than likely to call "t_pop" > >> and free/flush the result string (I checked, it does call "t_pop" a > >> couple times indeed). Maybe it would be safer to use a standard > >> datastack pool when unescaping a string in that context. 
> >> > >> I could work on a patch that would: > >> > >> - add an optional "pool" attribute to the "client_dict_cmd" structure, > >> - pass the pool to the async lookup function, > >> - use the pool when escaping strings that should survive the callback > >> chain > >> > >> What do you think? > >> > >> Regards, > >> kaiyou > >> > >> > >> On 10/17/2016 09:52 AM, Pierre Jaury wrote: > >>> While trying to isolate properly and reproduce, I was able to trigger > >>> the same bug with the following code: > >>> > >>> struct dict *dict; > >>> char* dict_uri = "proxy::sieve"; > >>> char* key = "priv/key"; > >>> char* username = "admin at domain.tld"; > >>> char* value, error; > >>> > >>> dict_drivers_register_builtin(); > >>> dict_init(dict_uri, DICT_DATA_TYPE_STRING, username, > >>> doveadm_settings->base_dir, &dict, &error); > >>> dict_lookup(dict, pool_datastack_create(), key, &value); > >>> printf(">%s\n", value); // outputs an empty string > >>> dict_deinit(&dict); > >>> > >>> I trimmed it to the bare minimal string manipulation functions involved > >>> but cannot reproduce in that case: > >>> > >>> pool_t pool = pool_datastack_create(); > >>> > >>> char* s1 = "test\001n\001rtest"; > >>> char* s2 = t_str_tabunescape(s1); > >>> char* s3 = p_strdup(pool, s2); > >>> > >>> printf("1>%s\n", s1); > >>> printf("2>%s\n", s2); > >>> printf("3>%s\n", s3); // all three output the string with NL and CR > >>> > >>> Maybe I am missing a function call in the process or maybe the issue is > >>> related to the way unescaping is performed in the async callback > >>> function in "dict-client.c", or maybe even some other edge case. > >>> > >>> Finally, I was able to run the first snippet without bug by removing the > >>> string duplication in "t_str_tabunescape" (which I realize is not a > >>> proper solution), or by explicitely using the following pool: > >>> > >>> return str_tabunescape(p_strdup(pool_datastack_create(), str)); > >>> > >>> > >>> Hope this helps. 
> >>> > >>> kaiyou > >>> > >>> > >>> On 10/17/2016 07:51 AM, Aki Tuomi wrote: > >>>> Hi! > >>>> > >>>> This does sound like a bug, we'll have look. > >>>> > >>>> Aki > >>>> > >>>> > >>>> On 17.10.2016 01:26, Pierre Jaury wrote: > >>>>> I dived a little bit further into the rabbit hole, up to the point where > >>>>> debugging has become unpracticle but I still haven't found the root > >>>>> cause for sure. > >>>>> > >>>>> I read most of the code for "p_strdup" based on datastack memory pools > >>>>> (which are used for dictionary lookups both with doveadm and by extdata) > >>>>> and it seems ok. Still, after "t_malloc_real" is called in "t_malloc0", > >>>>> the allocated buffer has the same address as the source string. > >>>>> > >>>>> The only sensible explanation I can come up with is that during > >>>>> unescaping, strings are not allocated properly, leading to the memory > >>>>> pool reusing the string address and zeroing it in the process before the > >>>>> string copy operation. > >>>>> > >>>>> I will follow on this path tomorrow, any lead is more than welcome. > >>>>> > >>>>> kaiyou. > >>>>> > >>>>> On 10/16/2016 11:16 PM, Pierre Jaury wrote: > >>>>>> Hello, > >>>>>> > >>>>>> I am using a dict proxy for my sieve extdata plugin to access some > >>>>>> fields from an SQLite database (autoreply text and other > >>>>>> database-configured items). > >>>>>> > >>>>>> All tests are performed against version 2.2.25. > >>>>>> > >>>>>> $ dovecot --version > >>>>>> 2.2.25 (7be1766) > >>>>>> > >>>>>> My configuration looks like: > >>>>>> > >>>>>> dict { > >>>>>> sieve = sqlite:/etc/dovecot/pigeonhole-sieve.dict > >>>>>> } > >>>>>> > >>>>>> [...] > >>>>>> sieve_extdata_dict_uri = proxy::sieve > >>>>>> > >>>>>> I am able to read pretty much any attribute without any issue, except > >>>>>> when the value contains a special character like "\r" or "\n". 
By using > >>>>>> the doveadm dict client, I narrowed it down to the dictionary management > >>>>>> part (either server, protocol or client). > >>>>>> > >>>>>> I was suspecting escaping functions from "lib/strescape.c" (mostly > >>>>>> str_tabescape and its counterpart, used by "lib-dict/client.c"), so I > >>>>>> monitored socket communications. It seems that escaping is done properly > >>>>>> on the server and the socket is not an issue either. > >>>>>> > >>>>>> The following strace dump results from running "doveadm dict get" > >>>>>> against the dict socket: > >>>>>> > >>>>>> connect(8, {sa_family=AF_UNIX, sun_path="..."}, 110) = 0 > >>>>>> fstat(8, {st_mode=S_IFSOCK|0777, st_size=0, ...}) = 0 > >>>>>> [...] > >>>>>> write(8, "H2\t0\t0\tadmin at domain.tld\tsieve\n", 30) = 30 > >>>>>> [...] > >>>>>> read(8, "Otest\1r\1ntest\n", 8192) = 14 > >>>>>> > >>>>>> Indeed "\1r" and "\1n" are the escape sequences used by > >>>>>> "lib/strescape.c". I went deeped and debugged the call to "dict_lookup" > >>>>>> performed by doveadm. Indeed the client gets the proper string from the > >>>>>> socket and to my surprise, it is properly unescaped. > >>>>>> > >>>>>> Then, in "client_dict_lookup" ("lib-dict/dict-client.c"), the call to > >>>>>> "p_strdup" returns an empty string (null byte set at the target address). 
> >>>>>> > >>>>>> Before the call to the dict "->lookup" attribute (client_dict_lookup): > >>>>>> > >>>>>> RAX: 0x7ffff73a37c0 (push r14) > >>>>>> RBX: 0x6831b8 ("priv/reply_body") > >>>>>> RCX: 0x7fffffffe240 --> 0x682a60 --> 0x6831b8 ("priv/reply_body") > >>>>>> RDX: 0x6831b8 ("priv/reply_body") > >>>>>> RSI: 0x683288 --> 0x7ffff7653120 --> 0x7ffff73ea620 ([...]) > >>>>>> RDI: 0x690ad0 --> 0x7ffff7400713 --> 0x75250079786f7270 ('proxy') > >>>>>> > >>>>>> 0x7ffff73a1f10 : mov rcx,r11 (value_r) > >>>>>> 0x7ffff73a1f13 : mov rdx,r8 (key) > >>>>>> 0x7ffff73a1f16 : mov rsi,r10 (pool) > >>>>>> 0x7ffff73a1f19 : mov rdi,r9 (dict) > >>>>>> 0x7ffff73a1f1c : add rsp,0x8 > >>>>>> => 0x7ffff73a1f20 : jmp rax > >>>>>> > >>>>>> Before the call to p_strdup in "client_dict_lookup": > >>>>>> > >>>>>> RSI: 0x6832d8 ("test\r\ntest") (lookup.result.value) > >>>>>> RDI: 0x683288 --> 0x7ffff7653120 --> [...] (pool) > >>>>>> RAX: 0x0 (result) > >>>>>> > >>>>>> 0x7ffff73a384f: nop > >>>>>> 0x7ffff73a3850: mov rsi,QWORD PTR [rsp+0x8] > >>>>>> 0x7ffff73a3855: mov rdi,r14 > >>>>>> => 0x7ffff73a3858: call 0x7ffff736d3c0 > >>>>>> 0x7ffff73a385d: mov QWORD PTR [r13+0x0],rax > >>>>>> 0x7ffff73a3861: mov rsi,QWORD PTR [rsp+0x18] > >>>>>> 0x7ffff73a3866: xor rsi,QWORD PTR fs:0x28 > >>>>>> 0x7ffff73a386f: mov eax,ebx > >>>>>> > >>>>>> After the call: > >>>>>> > >>>>>> 0x7ffff73a3850: mov rsi,QWORD PTR [rsp+0x8] > >>>>>> 0x7ffff73a3855: mov rdi,r14 > >>>>>> 0x7ffff73a3858: call 0x7ffff736d3c0 > >>>>>> => 0x7ffff73a385d: mov QWORD PTR [r13+0x0],rax > >>>>>> 0x7ffff73a3861: mov rsi,QWORD PTR [rsp+0x18] > >>>>>> 0x7ffff73a3866: xor rsi,QWORD PTR fs:0x28 > >>>>>> 0x7ffff73a386f: mov eax,ebx > >>>>>> 0x7ffff73a3871: jne 0x7ffff73a38da > >>>>>> > >>>>>> RSI: 0x0 > >>>>>> RDI: 0x6832d8 --> 0x0 > >>>>>> RAX: 0x6832d8 --> 0x0 (result) > >>>>>> > >>>>>> It is worth noting that I can reproduce the exact same execution flow > >>>>>> with a non-multiline result string (lookup.result.value) that is > >>>>>> 
properly copied by "p_strdup" and returned in RAX, then displayed by > >>>>>> doveadm. > >>>>>> > >>>>>> I am not familiar with the pooling mechanism hidden behind the call to > >>>>>> p_strdup and not quite sure why this behaviour is emerging. Maybe I am > >>>>>> even miles away from an understanding of the issue here, but it sounds > >>>>>> to me like something is wrong in the way "p_strdup" performs the copy. > >>>>>> > >>>>>> Hope this helps, > >>>>>> kaiyou. > >>>>>> > >>>>>> > >>>>> > -------------- next part -------------- A non-text attachment was scrubbed... Name: 0001-lib-dict-Duplicate-result-value-in-mempool.patch Type: text/x-diff Size: 1759 bytes Desc: not available URL: From aki.tuomi at dovecot.fi Mon Oct 17 19:21:03 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Mon, 17 Oct 2016 22:21:03 +0300 (EEST) Subject: Dict proxy client returning empty string instead of multiline string In-Reply-To: <1175200760.54.1476731884054@appsuite-dev.open-xchange.com> References: <612182c9-326d-2983-329b-248cfa6d7804@jaury.eu> <8e81cd26-5d0a-7260-864d-507bc28c2d95@jaury.eu> <5b037246-7d96-b1bb-525c-7a47937c8f81@jaury.eu> <137204017.2231.1476721439502@appsuite-dev.open-xchange.com> <1e882b6a-2912-5db2-2469-acae7b3b2ee9@jaury.eu> <1175200760.54.1476731884054@appsuite-dev.open-xchange.com> Message-ID: <859568203.60.1476732064014@appsuite-dev.open-xchange.com> Sorry, sent the wrong version, please see the amended one attached. Aki > On October 17, 2016 at 10:18 PM Aki Tuomi wrote: [quoted thread trimmed; it repeats, verbatim, the messages archived in full above] -------------- next part -------------- A non-text attachment was scrubbed... Name: 0001-lib-dict-Duplicate-result-value-in-mempool.patch Type: text/x-diff Size: 1799 bytes Desc: not available URL: From tss at iki.fi Mon Oct 17 20:09:01 2016 From: tss at iki.fi (Timo Sirainen) Date: Mon, 17 Oct 2016 23:09:01 +0300 Subject: Massive LMTP Problems with dovecot In-Reply-To: <20161017143107.yzh5denau3kzj37w@charite.de> References: <3syKJD4Vj9z20sts@mail-cbf.charite.de> <3c8fdd70-f345-55d6-1151-f82dc6dfb396@rename-it.nl> <20161017134829.x4qp4opkorg32sd2@charite.de> <20161017140000.mdfm3sp3eqzve35b@charite.de> <20161017140232.o7z4qyu4kdlwwveb@charite.de> <20161017140820.zspfu6herylymp55@charite.de> <20161017143107.yzh5denau3kzj37w@charite.de> Message-ID: On 17 Oct 2016, at 17:31, Ralf Hildebrandt wrote: > > * Ralf Hildebrandt : > >>> It seems to loop in sha1_loop & hash_format_loop >> >> The problem occurs in both 2.3 and 2.2 (I just updated to 2.3 to check). > > I'm seeing the first occurrence of that problem on the 10th of October!
> > I was using (prior to the 10th) : 2.3.0~alpha0-1~auto+371 > On the 10th I upgraded (16:04) to: 2.3.0~alpha0-1~auto+376 > > I'd think the change must have been introduced between 371 and 376 :) > > I then went back to, issues went away: 2.2.25-1~auto+49 > and the issues reappeared with 2.2.25-1~auto+57 https://github.com/dovecot/core/commit/9b5fa7fdd9b9f1f61eaddda48036df200fc5e56e should fix this. From jtam.home at gmail.com Mon Oct 17 20:15:31 2016 From: jtam.home at gmail.com (Joseph Tam) Date: Mon, 17 Oct 2016 13:15:31 -0700 (PDT) Subject: First steps in Dovecot; IMAP not working In-Reply-To: References: Message-ID: Marnaud writes: > "mailtest", the new user, is in group mail(8). In addition, I've added > write permission for "others" to /var/mail. Now, I'm trying to send a > message to "mailtest" from another, working, e-mail account and nothing > happens. This time, "doveadm log errors" is empty. > > In short, I don't get any error but no mail either. Two things to check: # What does dovecot think user "mailtest" has? doveadm user mailtest # The sticky bit should be set on /var/mail (you didn't mention # setting it. It probably doesn't have bearing on this problem, # but it will make it a little more secure. chmod 1777 /var/mail > Since "doveadm log errors" returns an empty result, where should I look > for the problem? I usually don't use this command, I look at the log file which seems to have more details. Try looking there for more diagnostics. Also, look at your MTA's logs as well. Joseph Tam From aki.tuomi at dovecot.fi Mon Oct 17 21:42:41 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Tue, 18 Oct 2016 00:42:41 +0300 Subject: dovecot 2.2.25 BUG: local_name is not matching correctly In-Reply-To: <201610131509.10597.arekm@maven.pl> References: <201610131509.10597.arekm@maven.pl> Message-ID: On 13.10.2016 16:09, Arkadiusz Mi?kiewicz wrote: > Bug report: > > When using dovecot 2.2.25 SNI capability it doesn't always match proper vhost > config. 
For example if we have such config: > > local_name imap.example.com { > ssl_cert = ssl_key = } > > but the imap client sends a mixed-case SNI hostname like "IMAP.example.com", then > dovecot won't match the above local_name imap.example.com config section. > > IMO dovecot should do a case-insensitive comparison. Case-sensitive matching for > DNS names makes little sense. > Hi! Fixed in https://github.com/dovecot/core/commit/c19c44f87ef3fe40cae4be9a86ee9327a7370e46 Aki Tuomi Dovecot oy From aki.tuomi at dovecot.fi Mon Oct 17 21:45:51 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Tue, 18 Oct 2016 00:45:51 +0300 (EEST) Subject: Dict proxy client returning empty string instead of multiline string In-Reply-To: <859568203.60.1476732064014@appsuite-dev.open-xchange.com> References: <612182c9-326d-2983-329b-248cfa6d7804@jaury.eu> <8e81cd26-5d0a-7260-864d-507bc28c2d95@jaury.eu> <5b037246-7d96-b1bb-525c-7a47937c8f81@jaury.eu> <137204017.2231.1476721439502@appsuite-dev.open-xchange.com> <1e882b6a-2912-5db2-2469-acae7b3b2ee9@jaury.eu> <1175200760.54.1476731884054@appsuite-dev.open-xchange.com> <859568203.60.1476732064014@appsuite-dev.open-xchange.com> Message-ID: <2070308704.143.1476740752060@appsuite-dev.open-xchange.com> > > > >>>>> On 10/16/2016 11:16 PM, Pierre Jaury wrote: [quoted message trimmed; it repeats, verbatim, the report archived in full earlier in this thread] Fixed with https://github.com/dovecot/core/commit/4f051c3082080b9d69ef12c3720c683cff34b0da Aki Tuomi From anic297 at mac.com Tue Oct 18 06:55:53 2016 From: anic297 at mac.com (Moi) Date: Tue, 18 Oct 2016 08:55:53 +0200 Subject: First steps in Dovecot; IMAP not working In-Reply-To: References: Message-ID: <00e001d2290c$aca470d0$05ed5270$@mac.com> >Two things to check: > > # What does dovecot think user "mailtest" has? > doveadm user mailtest I get this: field value uid 1002 gid 8 home /home/mailtest mail mbox:~/mail:INBOX=/var/mail/mailtest system_groups_user mailtest > # The sticky bit should be set on /var/mail (you didn't mention > # setting it. It probably doesn't have bearing on this problem, > # but it will make it a little more secure. > chmod 1777 /var/mail You're right, I didn't do it. >I usually don't use this command, I look at the log file which seems to have more details. Try > looking there for more diagnostics. Also, look at your MTA's logs as well. I'll try to locate them. Thank you.
From ximo at openmomo.com Tue Oct 18 08:47:01 2016 From: ximo at openmomo.com (Ximo Mira) Date: Tue, 18 Oct 2016 10:47:01 +0200 (CEST) Subject: Warning: Sent SIGKILL to 100 imap-login processes In-Reply-To: <805380044.1058862.1476779915616.JavaMail.zimbra@openmomo.com> Message-ID: <1717127623.1059101.1476780421814.JavaMail.zimbra@openmomo.com> Hi, Last night we tried a traffic bypass in an existing mail environment before migration to the new Dovecot backend platform using Dovecot proxy. We are using an LDAP value for checking the proxy host of the user. POP traffic was running flawlessly, but IMAP connections started to drop when a few clients connected: Oct 17 22:26:51 master: Warning: Sent SIGKILL to 100 imap-login processes There are hundreds of these lines, always between 99 and 100 processes. Looks like some kind of limit, but not sure if it's related to the Dovecot proxy machines (pool of 3 in total) or the final destination (the same IMAP server for +10000 users). Concurrent IMAP connections may rise to around 2000. Thanks. From aki.tuomi at dovecot.fi Tue Oct 18 10:43:44 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Tue, 18 Oct 2016 13:43:44 +0300 Subject: Warning: Sent SIGKILL to 100 imap-login processes In-Reply-To: <1717127623.1059101.1476780421814.JavaMail.zimbra@openmomo.com> References: <1717127623.1059101.1476780421814.JavaMail.zimbra@openmomo.com> Message-ID: On 18.10.2016 11:47, Ximo Mira wrote: > [quoted message trimmed] Hi! You apparently have also opened a support ticket about this? Aki From mick.crane at gmail.com Tue Oct 18 10:42:45 2016 From: mick.crane at gmail.com (mick crane) Date: Tue, 18 Oct 2016 11:42:45 +0100 Subject: First steps in Dovecot; IMAP not working In-Reply-To: <00e001d2290c$aca470d0$05ed5270$@mac.com> References: <00e001d2290c$aca470d0$05ed5270$@mac.com> Message-ID: On 2016-10-18 07:55, Moi wrote: >> Two things to check: >> > > # What does dovecot think user "mailtest" has? > > doveadm user mailtest > > I get this: > field value > uid 1002 > gid 8 > home /home/mailtest > mail mbox:~/mail:INBOX=/var/mail/mailtest This looks wrong; I don't think /var should be your home directory. > system_groups_user mailtest > > > # The sticky bit should be set on /var/mail (you didn't mention > > # setting it. It probably doesn't have bearing on this problem, > > # but it will make it a little more secure. > > chmod 1777 /var/mail > > You're right, I didn't do it. > >> I usually don't use this command, I look at the log file which seems >> to > have more details. Try > looking there for more diagnostics. Also, > look at > your MTA's logs as well. > > I'll try to locate them. > Thank you. I always install "locate" and whenever I change anything do as root "/usr/bin/updatedb"; then you can type "locate *.log" -- key ID: 0x4BFEBB31 From aki.tuomi at dovecot.fi Tue Oct 18 10:54:22 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Tue, 18 Oct 2016 13:54:22 +0300 Subject: First steps in Dovecot; IMAP not working In-Reply-To: References: <00e001d2290c$aca470d0$05ed5270$@mac.com> Message-ID: On 18.10.2016 13:42, mick crane wrote: > On 2016-10-18 07:55, Moi wrote: >>> Two things to check: >>> >> > # What does dovecot think user "mailtest" has?
>> > doveadm user mailtest >> >> I get this: >> field value >> uid 1002 >> gid 8 >> home /home/mailtest >> mail mbox:~/mail:INBOX=/var/mail/mailtest > > this looks wrong I don't think /var should be your home directory. > It's just location of INBOX file, which is in this case just right. > >> system_groups_user mailtest >> >> > # The sticky bit should be set on /var/mail (you didn't mention >> > # setting it. It probably doesn't have bearing on this problem, >> > # but it will make it a little more secure. >> > chmod 1777 /var/mail >> >> You're right, I didn't do it. >> >>> I usually don't use this command, I look at the log file which seems to >> have more details. Try > looking there for more diagnostics. Also, >> look at >> your MTA's logs as well. >> >> I'll try to locate them. >> Thank you. > > I always install "locate" and whenever I change anything do as root > "/usr/bin/updatedb" > then you can type > "locate *.log" Aki From anic297 at mac.com Tue Oct 18 11:12:39 2016 From: anic297 at mac.com (Moi) Date: Tue, 18 Oct 2016 13:12:39 +0200 Subject: First steps in Dovecot; IMAP not working In-Reply-To: References: <00e001d2290c$aca470d0$05ed5270$@mac.com> Message-ID: <00fb01d22930$8bf78830$a3e69890$@mac.com> > I always install "locate" and whenever I change anything do as root "/usr/bin/updatedb" > then you can type > "locate *.log" Thank you. It worked and I now have several log files to check. In the meantime, I've tried once again to send a message to "mailtest" from an outside address; this time, I got an error reply: This report relates to a message you sent with the following header fields: Message-id: <00e201d2290f$67da8c70$378fa550$@mac.com> Date: Tue, 18 Oct 2016 09:15:21 +0200 From: Moi To: 'Mail Test' Subject: Test Your message cannot be delivered to the following recipients: Recipient address: mailtest at barbu.sytes.net Reason: Illegal host/domain name found Yet another area with a problem; at least this is now a valid reason for it to not work. 
Is this a misconfiguration of my DNS server? From arekm at maven.pl Tue Oct 18 11:16:06 2016 From: arekm at maven.pl (Arkadiusz =?utf-8?q?Mi=C5=9Bkiewicz?=) Date: Tue, 18 Oct 2016 13:16:06 +0200 Subject: logging TLS SNI hostname In-Reply-To: References: <201605300829.17351.arekm@maven.pl> <201610170841.38721.arekm@maven.pl> Message-ID: <201610181316.06948.arekm@maven.pl> On Monday 17 of October 2016, KT Walrus wrote: > > On Oct 17, 2016, at 2:41 AM, Arkadiusz Miśkiewicz wrote: > > > > On Monday 30 of May 2016, Arkadiusz Miśkiewicz wrote: > >> Is there a way to log the SNI hostname used in a TLS session? The info is there in > >> SSL_CTX_set_tlsext_servername_callback, dovecot copies it to > >> ssl_io->host. > >> > >> Unfortunately I don't see it expanded to any variables ( > >> http://wiki.dovecot.org/Variables ). Please consider this to be a > >> feature request. > >> > >> The goal is to be able to see which hostname the client used, like: > >> > >> May 30 08:21:19 xxx dovecot: pop3-login: Login: user=, > >> method=PLAIN, rip=1.1.1.1, lip=2.2.2.2, mpid=17135, TLS, > >> SNI=pop3.somehost.org, session= > > > > Dear dovecot team, would it be possible to add such a variable ^^^^^ ? > > > > That would be a neat feature because the server operator would know what > > hostname the client uses to connect to the server (which is really useful in > > the case of many hostnames pointing to a single IP). > > I'd love to be able to use this SNI domain name in the Dovecot IMAP proxy > for use in the SQL password_query. This would allow the proxy to support > multiple IMAP server domains each with their own set of users. And, it > would save me money by using only the IP of the proxy for all the IMAP > server domains instead of giving each domain a unique IP. It only needs to be carefully implemented on the Dovecot side, as the TLS SNI hostname is information passed directly by the client. So some FQDN validation would need to happen in case the client has malicious intent.
> Kevin -- Arkadiusz Mi?kiewicz, arekm / ( maven.pl | pld-linux.org ) From inbound-dovecot at listmail.innovate.net Tue Oct 18 11:32:44 2016 From: inbound-dovecot at listmail.innovate.net (Richard) Date: Tue, 18 Oct 2016 11:32:44 +0000 Subject: First steps in Dovecot; IMAP not working Message-ID: <4D2382EAA7F462AB046BF3BF@ritz.innovate.net> > Date: Tuesday, October 18, 2016 13:12:39 +0200 > From: Moi > > Thank you. It worked and I now have several log files to check. > > In the meantime, I've tried once again to send a message to > "mailtest" from an outside address; this time, I got an error reply: > > This report relates to a message you sent with the following header > fields: > > Message-id: <00e201d2290f$67da8c70$378fa550$@mac.com> > Date: Tue, 18 Oct 2016 09:15:21 +0200 > From: Moi > To: 'Mail Test' > Subject: Test > > Your message cannot be delivered to the following recipients: > > Recipient address: mailtest at barbu.sytes.net > Reason: Illegal host/domain name found > > > Yet another area with a problem; at least this is now a valid > reason for it to not work. > Is this a misconfiguration of my DNS server? Assuming that "barbu.sytes.net" is the intended hostname (not something made up to obscure the real name), there is an MX-record for that that points to "mail.barbu.sytes.net", but there is no A-record for the "mail." hostname. There is an A-record for "mail.sytes.net", in case that is what you were intending, in which case you'd need to fix the MX on "barbu.sytes.net". 
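Richard's MX/A diagnosis can be sketched offline with a toy zone table. This is only an illustration of the resolution logic he describes, not a real DNS query; the address for mail.sytes.net is a documentation-range placeholder:

```python
# Toy zone mirroring the description above: barbu.sytes.net has an MX record
# pointing at mail.barbu.sytes.net, but no A record exists for that MX target.
zone = {
    ("barbu.sytes.net", "MX"): ["mail.barbu.sytes.net"],
    ("mail.sytes.net", "A"): ["203.0.113.10"],  # placeholder address
}

def mx_resolvable(domain: str) -> bool:
    """Delivery needs at least one MX target that resolves to an address."""
    return any(zone.get((mx, "A")) for mx in zone.get((domain, "MX"), []))

print(mx_resolvable("barbu.sytes.net"))  # False -> the MTA rejects the recipient
```

Adding an A record for mail.barbu.sytes.net (or pointing the MX at a host that already has one) is what makes the lookup, and hence delivery, succeed.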
From pierre at jaury.eu Tue Oct 18 12:36:33 2016 From: pierre at jaury.eu (Pierre Jaury) Date: Tue, 18 Oct 2016 14:36:33 +0200 Subject: Dict proxy client returning empty string instead of multiline string In-Reply-To: <2070308704.143.1476740752060@appsuite-dev.open-xchange.com> References: <612182c9-326d-2983-329b-248cfa6d7804@jaury.eu> <8e81cd26-5d0a-7260-864d-507bc28c2d95@jaury.eu> <5b037246-7d96-b1bb-525c-7a47937c8f81@jaury.eu> <137204017.2231.1476721439502@appsuite-dev.open-xchange.com> <1e882b6a-2912-5db2-2469-acae7b3b2ee9@jaury.eu> <1175200760.54.1476731884054@appsuite-dev.open-xchange.com> <859568203.60.1476732064014@appsuite-dev.open-xchange.com> <2070308704.143.1476740752060@appsuite-dev.open-xchange.com> Message-ID: <77278b2a-e263-f72f-aaf3-a1aaf0169be4@jaury.eu> Hello, I can confirm the issue is fixed. Do you have a policy to backport the patch at least to the latest stable? Regards, On 10/17/2016 11:45 PM, Aki Tuomi wrote: >>>>>>>>> On 10/16/2016 11:16 PM, Pierre Jaury wrote: >>>>>>>>>> Hello, >>>>>>>>>> >>>>>>>>>> I am using a dict proxy for my sieve extdata plugin to access some >>>>>>>>>> fields from an SQLite database (autoreply text and other >>>>>>>>>> database-configured items). >>>>>>>>>> >>>>>>>>>> All tests are performed against version 2.2.25. >>>>>>>>>> >>>>>>>>>> $ dovecot --version >>>>>>>>>> 2.2.25 (7be1766) >>>>>>>>>> >>>>>>>>>> My configuration looks like: >>>>>>>>>> >>>>>>>>>> dict { >>>>>>>>>> sieve = sqlite:/etc/dovecot/pigeonhole-sieve.dict >>>>>>>>>> } >>>>>>>>>> >>>>>>>>>> [...] >>>>>>>>>> sieve_extdata_dict_uri = proxy::sieve >>>>>>>>>> >>>>>>>>>> I am able to read pretty much any attribute without any issue, except >>>>>>>>>> when the value contains a special character like "\r" or "\n". By using >>>>>>>>>> the doveadm dict client, I narrowed it down to the dictionary management >>>>>>>>>> part (either server, protocol or client). 
>>>>>>>>>> >>>>>>>>>> I was suspecting escaping functions from "lib/strescape.c" (mostly >>>>>>>>>> str_tabescape and its counterpart, used by "lib-dict/client.c"), so I >>>>>>>>>> monitored socket communications. It seems that escaping is done properly >>>>>>>>>> on the server and the socket is not an issue either. >>>>>>>>>> >>>>>>>>>> The following strace dump results from running "doveadm dict get" >>>>>>>>>> against the dict socket: >>>>>>>>>> >>>>>>>>>> connect(8, {sa_family=AF_UNIX, sun_path="..."}, 110) = 0 >>>>>>>>>> fstat(8, {st_mode=S_IFSOCK|0777, st_size=0, ...}) = 0 >>>>>>>>>> [...] >>>>>>>>>> write(8, "H2\t0\t0\tadmin at domain.tld\tsieve\n", 30) = 30 >>>>>>>>>> [...] >>>>>>>>>> read(8, "Otest\1r\1ntest\n", 8192) = 14 >>>>>>>>>> >>>>>>>>>> Indeed "\1r" and "\1n" are the escape sequences used by >>>>>>>>>> "lib/strescape.c". I went deeper and debugged the call to "dict_lookup" >>>>>>>>>> performed by doveadm. Indeed the client gets the proper string from the >>>>>>>>>> socket and to my surprise, it is properly unescaped. >>>>>>>>>> >>>>>>>>>> Then, in "client_dict_lookup" ("lib-dict/dict-client.c"), the call to >>>>>>>>>> "p_strdup" returns an empty string (null byte set at the target address).
>>>>>>>>>> >>>>>>>>>> Before the call to the dict "->lookup" attribute (client_dict_lookup): >>>>>>>>>> >>>>>>>>>> RAX: 0x7ffff73a37c0 (push r14) >>>>>>>>>> RBX: 0x6831b8 ("priv/reply_body") >>>>>>>>>> RCX: 0x7fffffffe240 --> 0x682a60 --> 0x6831b8 ("priv/reply_body") >>>>>>>>>> RDX: 0x6831b8 ("priv/reply_body") >>>>>>>>>> RSI: 0x683288 --> 0x7ffff7653120 --> 0x7ffff73ea620 ([...]) >>>>>>>>>> RDI: 0x690ad0 --> 0x7ffff7400713 --> 0x75250079786f7270 ('proxy') >>>>>>>>>> >>>>>>>>>> 0x7ffff73a1f10 : mov rcx,r11 (value_r) >>>>>>>>>> 0x7ffff73a1f13 : mov rdx,r8 (key) >>>>>>>>>> 0x7ffff73a1f16 : mov rsi,r10 (pool) >>>>>>>>>> 0x7ffff73a1f19 : mov rdi,r9 (dict) >>>>>>>>>> 0x7ffff73a1f1c : add rsp,0x8 >>>>>>>>>> => 0x7ffff73a1f20 : jmp rax >>>>>>>>>> >>>>>>>>>> Before the call to p_strdup in "client_dict_lookup": >>>>>>>>>> >>>>>>>>>> RSI: 0x6832d8 ("test\r\ntest") (lookup.result.value) >>>>>>>>>> RDI: 0x683288 --> 0x7ffff7653120 --> [...] (pool) >>>>>>>>>> RAX: 0x0 (result) >>>>>>>>>> >>>>>>>>>> 0x7ffff73a384f: nop >>>>>>>>>> 0x7ffff73a3850: mov rsi,QWORD PTR [rsp+0x8] >>>>>>>>>> 0x7ffff73a3855: mov rdi,r14 >>>>>>>>>> => 0x7ffff73a3858: call 0x7ffff736d3c0 >>>>>>>>>> 0x7ffff73a385d: mov QWORD PTR [r13+0x0],rax >>>>>>>>>> 0x7ffff73a3861: mov rsi,QWORD PTR [rsp+0x18] >>>>>>>>>> 0x7ffff73a3866: xor rsi,QWORD PTR fs:0x28 >>>>>>>>>> 0x7ffff73a386f: mov eax,ebx >>>>>>>>>> >>>>>>>>>> After the call: >>>>>>>>>> >>>>>>>>>> 0x7ffff73a3850: mov rsi,QWORD PTR [rsp+0x8] >>>>>>>>>> 0x7ffff73a3855: mov rdi,r14 >>>>>>>>>> 0x7ffff73a3858: call 0x7ffff736d3c0 >>>>>>>>>> => 0x7ffff73a385d: mov QWORD PTR [r13+0x0],rax >>>>>>>>>> 0x7ffff73a3861: mov rsi,QWORD PTR [rsp+0x18] >>>>>>>>>> 0x7ffff73a3866: xor rsi,QWORD PTR fs:0x28 >>>>>>>>>> 0x7ffff73a386f: mov eax,ebx >>>>>>>>>> 0x7ffff73a3871: jne 0x7ffff73a38da >>>>>>>>>> >>>>>>>>>> RSI: 0x0 >>>>>>>>>> RDI: 0x6832d8 --> 0x0 >>>>>>>>>> RAX: 0x6832d8 --> 0x0 (result) >>>>>>>>>> >>>>>>>>>> It is worth noting that I can reproduce the exact same 
execution flow >>>>>>>>>> with a non-multiline result string (lookup.result.value) that is >>>>>>>>>> properly copied by "p_strdup" and returned in RAX, then displayed by >>>>>>>>>> doveadm. >>>>>>>>>> >>>>>>>>>> I am not familiar with the pooling mechanism hidden behind the call to >>>>>>>>>> p_strdup and not quite sure why this behaviour is emerging. Maybe I am >>>>>>>>>> even miles away from an understanding of the issue here, but it sounds >>>>>>>>>> to me like something is wrong in the way "p_strdup" performs the copy. >>>>>>>>>> >>>>>>>>>> Hope this helps, >>>>>>>>>> kaiyou. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> > > Fixed with https://github.com/dovecot/core/commit/4f051c3082080b9d69ef12c3720c683cff34b0da > > Aki Tuomi > From aki.tuomi at dovecot.fi Tue Oct 18 12:42:09 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Tue, 18 Oct 2016 15:42:09 +0300 Subject: Dict proxy client returning empty string instead of multiline string In-Reply-To: <77278b2a-e263-f72f-aaf3-a1aaf0169be4@jaury.eu> References: <612182c9-326d-2983-329b-248cfa6d7804@jaury.eu> <8e81cd26-5d0a-7260-864d-507bc28c2d95@jaury.eu> <5b037246-7d96-b1bb-525c-7a47937c8f81@jaury.eu> <137204017.2231.1476721439502@appsuite-dev.open-xchange.com> <1e882b6a-2912-5db2-2469-acae7b3b2ee9@jaury.eu> <1175200760.54.1476731884054@appsuite-dev.open-xchange.com> <859568203.60.1476732064014@appsuite-dev.open-xchange.com> <2070308704.143.1476740752060@appsuite-dev.open-xchange.com> <77278b2a-e263-f72f-aaf3-a1aaf0169be4@jaury.eu> Message-ID: <403a4380-e5ea-ec4d-0ca4-cda73b142cf8@dovecot.fi> This will be fixed in the next release, 2.2.26. We won't backport it to the latest stable, sorry. Aki On 18.10.2016 15:36, Pierre Jaury wrote: > Hello, > > I can confirm the issue is fixed. Do you have a policy to backport the > patch at least to the latest stable?
> > Regards, > > On 10/17/2016 11:45 PM, Aki Tuomi wrote: >>>>>>>>>> On 10/16/2016 11:16 PM, Pierre Jaury wrote: >>>>>>>>>>> Hello, >>>>>>>>>>> >>>>>>>>>>> I am using a dict proxy for my sieve extdata plugin to access some >>>>>>>>>>> fields from an SQLite database (autoreply text and other >>>>>>>>>>> database-configured items). >>>>>>>>>>> >>>>>>>>>>> All tests are performed against version 2.2.25. >>>>>>>>>>> >>>>>>>>>>> $ dovecot --version >>>>>>>>>>> 2.2.25 (7be1766) >>>>>>>>>>> >>>>>>>>>>> My configuration looks like: >>>>>>>>>>> >>>>>>>>>>> dict { >>>>>>>>>>> sieve = sqlite:/etc/dovecot/pigeonhole-sieve.dict >>>>>>>>>>> } >>>>>>>>>>> >>>>>>>>>>> [...] >>>>>>>>>>> sieve_extdata_dict_uri = proxy::sieve >>>>>>>>>>> >>>>>>>>>>> I am able to read pretty much any attribute without any issue, except >>>>>>>>>>> when the value contains a special character like "\r" or "\n". By using >>>>>>>>>>> the doveadm dict client, I narrowed it down to the dictionary management >>>>>>>>>>> part (either server, protocol or client). >>>>>>>>>>> >>>>>>>>>>> I was suspecting escaping functions from "lib/strescape.c" (mostly >>>>>>>>>>> str_tabescape and its counterpart, used by "lib-dict/client.c"), so I >>>>>>>>>>> monitored socket communications. It seems that escaping is done properly >>>>>>>>>>> on the server and the socket is not an issue either. >>>>>>>>>>> >>>>>>>>>>> The following strace dump results from running "doveadm dict get" >>>>>>>>>>> against the dict socket: >>>>>>>>>>> >>>>>>>>>>> connect(8, {sa_family=AF_UNIX, sun_path="..."}, 110) = 0 >>>>>>>>>>> fstat(8, {st_mode=S_IFSOCK|0777, st_size=0, ...}) = 0 >>>>>>>>>>> [...] >>>>>>>>>>> write(8, "H2\t0\t0\tadmin at domain.tld\tsieve\n", 30) = 30 >>>>>>>>>>> [...] >>>>>>>>>>> read(8, "Otest\1r\1ntest\n", 8192) = 14 >>>>>>>>>>> >>>>>>>>>>> Indeed "\1r" and "\1n" are the escape sequences used by >>>>>>>>>>> "lib/strescape.c". I went deeped and debugged the call to "dict_lookup" >>>>>>>>>>> performed by doveadm. 
Indeed the client gets the proper string from the >>>>>>>>>>> socket and to my surprise, it is properly unescaped. >>>>>>>>>>> >>>>>>>>>>> Then, in "client_dict_lookup" ("lib-dict/dict-client.c"), the call to >>>>>>>>>>> "p_strdup" returns an empty string (null byte set at the target address). >>>>>>>>>>> >>>>>>>>>>> Before the call to the dict "->lookup" attribute (client_dict_lookup): >>>>>>>>>>> >>>>>>>>>>> RAX: 0x7ffff73a37c0 (push r14) >>>>>>>>>>> RBX: 0x6831b8 ("priv/reply_body") >>>>>>>>>>> RCX: 0x7fffffffe240 --> 0x682a60 --> 0x6831b8 ("priv/reply_body") >>>>>>>>>>> RDX: 0x6831b8 ("priv/reply_body") >>>>>>>>>>> RSI: 0x683288 --> 0x7ffff7653120 --> 0x7ffff73ea620 ([...]) >>>>>>>>>>> RDI: 0x690ad0 --> 0x7ffff7400713 --> 0x75250079786f7270 ('proxy') >>>>>>>>>>> >>>>>>>>>>> 0x7ffff73a1f10 : mov rcx,r11 (value_r) >>>>>>>>>>> 0x7ffff73a1f13 : mov rdx,r8 (key) >>>>>>>>>>> 0x7ffff73a1f16 : mov rsi,r10 (pool) >>>>>>>>>>> 0x7ffff73a1f19 : mov rdi,r9 (dict) >>>>>>>>>>> 0x7ffff73a1f1c : add rsp,0x8 >>>>>>>>>>> => 0x7ffff73a1f20 : jmp rax >>>>>>>>>>> >>>>>>>>>>> Before the call to p_strdup in "client_dict_lookup": >>>>>>>>>>> >>>>>>>>>>> RSI: 0x6832d8 ("test\r\ntest") (lookup.result.value) >>>>>>>>>>> RDI: 0x683288 --> 0x7ffff7653120 --> [...] 
(pool) >>>>>>>>>>> RAX: 0x0 (result) >>>>>>>>>>> >>>>>>>>>>> 0x7ffff73a384f: nop >>>>>>>>>>> 0x7ffff73a3850: mov rsi,QWORD PTR [rsp+0x8] >>>>>>>>>>> 0x7ffff73a3855: mov rdi,r14 >>>>>>>>>>> => 0x7ffff73a3858: call 0x7ffff736d3c0 >>>>>>>>>>> 0x7ffff73a385d: mov QWORD PTR [r13+0x0],rax >>>>>>>>>>> 0x7ffff73a3861: mov rsi,QWORD PTR [rsp+0x18] >>>>>>>>>>> 0x7ffff73a3866: xor rsi,QWORD PTR fs:0x28 >>>>>>>>>>> 0x7ffff73a386f: mov eax,ebx >>>>>>>>>>> >>>>>>>>>>> After the call: >>>>>>>>>>> >>>>>>>>>>> 0x7ffff73a3850: mov rsi,QWORD PTR [rsp+0x8] >>>>>>>>>>> 0x7ffff73a3855: mov rdi,r14 >>>>>>>>>>> 0x7ffff73a3858: call 0x7ffff736d3c0 >>>>>>>>>>> => 0x7ffff73a385d: mov QWORD PTR [r13+0x0],rax >>>>>>>>>>> 0x7ffff73a3861: mov rsi,QWORD PTR [rsp+0x18] >>>>>>>>>>> 0x7ffff73a3866: xor rsi,QWORD PTR fs:0x28 >>>>>>>>>>> 0x7ffff73a386f: mov eax,ebx >>>>>>>>>>> 0x7ffff73a3871: jne 0x7ffff73a38da >>>>>>>>>>> >>>>>>>>>>> RSI: 0x0 >>>>>>>>>>> RDI: 0x6832d8 --> 0x0 >>>>>>>>>>> RAX: 0x6832d8 --> 0x0 (result) >>>>>>>>>>> >>>>>>>>>>> It is worth noting that I can reproduce the exact same execution flow >>>>>>>>>>> with a non-multiline result string (lookup.result.value) that is >>>>>>>>>>> properly copied by "p_strdup" and returned in RAX, then displayed by >>>>>>>>>>> doveadm. >>>>>>>>>>> >>>>>>>>>>> I am not familiar with the pooling mechanism hidden behind the call to >>>>>>>>>>> p_strdump and not quite sure why this behaviour is emerging. Maybe I am >>>>>>>>>>> even miles away from an understanding of the issue here, but it sounds >>>>>>>>>>> to me like something is wrong in the way "p_strdup" performs the copy. >>>>>>>>>>> >>>>>>>>>>> Hope this helps, >>>>>>>>>>> kaiyou. 
>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >> Fixed with https://github.com/dovecot/core/commit/4f051c3082080b9d69ef12c3720c683cff34b0da >> >> Aki Tuomi >> From sven_roellig at yahoo.de Tue Oct 18 20:28:32 2016 From: sven_roellig at yahoo.de (Sven Roellig) Date: Tue, 18 Oct 2016 20:28:32 +0000 (UTC) Subject: Lmtp Fatal Error References: <1310725266.5694605.1476822512074.ref@mail.yahoo.com> Message-ID: <1310725266.5694605.1476822512074@mail.yahoo.com> Hi, Dovecot is raising a fatal Panic error. : Fatal: master: service(lmtp): child 3369 killed with signal 6 (core dumps disabled) <1jx3DhuCBlg1DQAAWm89Cw>: Panic: file lda-sieve-plugin.c: line 447 (lda_sieve_execute_scripts): assertion failed: (script != NULL) <1jx3DhuCBlg1DQAAWm89Cw>: Error: Raw backtrace: /usr/lib/dovecot/libdovecot.so.0(+0x93fae) [0x7f547f6a3fae] -> /usr/lib/dovecot/libdovecot.so.0(+0x9409c) [0x7f547f6a409c] -> /usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7f547f63d56e] -> /usr/lib/dovecot/modules/lib90_sieve_plugin.so(+0x3ae8) [0x7f547d765ae8] -> /usr/lib/dovecot/libdovecot-lda.so.0(mail_deliver+0x49) [0x7f547fc709a9] -> dovecot/lmtp(+0x7201) [0x7f54800a1201] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x5f) [0x7f547f6b88bf] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x10a) [0x7f547f6b9d8a] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x25) [0x7f547f6b8965] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x30) [0x7f547f6b8b00] -> /usr/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f547f643ac3] -> dovecot/lmtp(main+0x1c9) [0x7f548009f2c9] -> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f547f286b45] -> dovecot/lmtp(+0x53ae) [0x7f548009f3ae] Normal mails are not delivered but filtered mails are delivered. System is Debian 8.6, Dovecot Core is 2:2.3.0-alpha0-1-auto+385 AMD64. Can anyone help?
Sven From stephan at rename-it.nl Tue Oct 18 21:47:23 2016 From: stephan at rename-it.nl (Stephan Bosch) Date: Tue, 18 Oct 2016 23:47:23 +0200 Subject: Lmtp Fatal Error In-Reply-To: <1310725266.5694605.1476822512074@mail.yahoo.com> References: <1310725266.5694605.1476822512074.ref@mail.yahoo.com> <1310725266.5694605.1476822512074@mail.yahoo.com> Message-ID: <74883bb4-a329-8fb8-6ac6-05e01dad242b@rename-it.nl> Op 10/18/2016 om 10:28 PM schreef Sven Roellig: > Hi,dovecot is create an Fatal Panik Error. > : Fatal: master: service(lmtp): child 3369 killed with signal 6 (core dumps disabled) > <1jx3DhuCBlg1DQAAWm89Cw>: Panic: file lda-sieve-plugin.c: line 447 (lda_sieve_execute_scripts): assertion failed: (script != NULL) > <1jx3DhuCBlg1DQAAWm89Cw>: Error: Raw backtrace: /usr/lib/dovecot/libdovecot.so.0(+0x93fae) [0x7f547f6a3fae] -> /usr/lib/dovecot/libdovecot.so.0(+0x9409c) [0x7f547f6a409c] -> /usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7f547f63d56e] -> /usr/lib/dovecot/modules/lib90_sieve_plugin.so(+0x3ae8) [0x7f547d765ae8] -> /usr/lib/dovecot/libdovecot-lda.so.0(mail_deliver+0x49) [0x7f547fc709a9] -> dovecot/lmtp(+0x7201) [0x7f54800a1201] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x5f) [0x7f547f6b88bf] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x10a) [0x7f547f6b9d8a] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x25) [0x7f547f6b8965] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x30) [0x7f547f6b8b00] -> /usr/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f547f643ac3] -> dovecot/lmtp(main+0x1c9) [0x7f548009f2c9] -> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f547f286b45] -> dovecot/lmtp(+0x53ae) [0x7f548009f3ae] > > Normal Mails are not deliverd but filterd mails are deliverd. > System is Debian 8.6 Dovecot Core is 2%3a2.3.0-alpha0-1-auto+385 AMD64 Problem is known and fixed. New builds are currently blocked by an unrelated build failure, which will be resolved soon. Regards, Stephan. 
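Returning to the dict-proxy thread above: the wire format Pierre debugged is tab- and newline-delimited, which is why values containing \t, \r or \n are escaped with the \x01 byte ("\1r", "\1n" in the strace dump) before crossing the socket. A rough Python reconstruction of that scheme — the mapping for the escape byte itself is an assumption of this sketch, not taken from strescape.c:

```python
# Sketch of the tab-escaping described above: control characters that would
# break the line/tab-delimited dict protocol are prefixed with \x01.
ESCAPES = {"\x01": "\x011", "\t": "\x01t", "\r": "\x01r", "\n": "\x01n"}
UNESCAPES = {v[1]: k for k, v in ESCAPES.items()}

def tabescape(s: str) -> str:
    return "".join(ESCAPES.get(c, c) for c in s)

def tabunescape(s: str) -> str:
    out, i = [], 0
    while i < len(s):
        if s[i] == "\x01" and i + 1 < len(s):
            out.append(UNESCAPES.get(s[i + 1], s[i + 1]))
            i += 2
        else:
            out.append(s[i])
            i += 1
    return "".join(out)

value = "test\r\ntest"              # the multiline lookup result from the strace dump
wire = tabescape(value)
print(repr(wire))                   # 'test\x01r\x01ntest' -- cf. the "Otest\1r\1ntest" read
assert tabunescape(wire) == value   # the round trip is lossless
```

As the thread concluded, the escaping itself was sound; the empty-string bug sat later in the client, in the p_strdup copy, and was fixed upstream.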
From jtam.home at gmail.com Tue Oct 18 21:58:14 2016 From: jtam.home at gmail.com (Joseph Tam) Date: Tue, 18 Oct 2016 14:58:14 -0700 (PDT) Subject: First steps in Dovecot; IMAP not working In-Reply-To: References: Message-ID: > > doveadm user mailtest > field value > uid 1002 > gid 8 > home /home/mailtest > mail mbox:~/mail:INBOX=/var/mail/mailtest > system_groups_user mailtest Looks fine. > [log files] I'll try to locate them. Your best hope. Maybe this helps doveconf -n | grep log_ Or just force it somewhere by setting the *log_path configuration values. Also check your mail server's log when trying to deliver. Joseph Tam From devetzis+dovecot at tarc.net Tue Oct 18 23:26:02 2016 From: devetzis+dovecot at tarc.net (Taso N. Devetzis) Date: Tue, 18 Oct 2016 18:26:02 -0500 Subject: Iteration base for LDAP Message-ID: Greetings, The iteration machinery uses the LDAP search base set with the "base" directive (typically from dovecot-ldap.conf.ext); the same base used during nominal operations (e.g., passdb/userdb searches). Consider a directory: dc=ROOT |_ dc=foo,dc=com,dc=ROOT (foo.com subtree) |_ dc=bar,dc=net,dc=ROOT (bar.net subtree) A search base setting appropriate for mail operations might be: base = dc=%Dd,dc=ROOT # e.g. dc=foo,dc=com,dc=ROOT for user at foo.com This fails when iterating, as the variable substitution is meaningless in this context (and even a static subtree search base would only cover a portion of the overall directory during iterative searches). Setting the base to "dc=ROOT" obviously solves the issue at the expense of searching the entire directory for all operations. This is less than optimal. I could not find a way to override this setting at runtime via a doveadm option or similar. Ideally, a separate "iterate_base" setting would solve this issue. Any other solutions?
Thanks, /taso From matthew.broadhead at nbmlaw.co.uk Wed Oct 19 09:42:46 2016 From: matthew.broadhead at nbmlaw.co.uk (Matthew Broadhead) Date: Wed, 19 Oct 2016 11:42:46 +0200 Subject: sieve sending vacation message from vmail@ns1.domain.tld In-Reply-To: <71b362e8-3a69-076d-6376-2f3bbd39d0eb@nbmlaw.co.uk> References: <71b362e8-3a69-076d-6376-2f3bbd39d0eb@nbmlaw.co.uk> Message-ID: hi, does anyone have any ideas about this issue? i have not had any response yet i tried changing /etc/postfix/master.cf line: dovecot unix - n n - - pipe flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d ${recipient} to flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/dovecot-lda -f ${sender} -d ${user}@${nexthop} -a ${original_recipient} and -d ${user}@${domain} -a {recipient} -f ${sender} -m ${extension} but it didn't work On 12/10/2016 13:57, Matthew Broadhead wrote: > I have a server running centos-release-7-2.1511.el7.centos.2.10.x86_64 > with dovecot version 2.2.10. I am also using roundcube for webmail. > when a vacation filter (reply with message) is created in roundcube it > adds a rule to managesieve.sieve in the user's mailbox. everything > works fine except the reply comes from vmail at ns1.domain.tld instead of > user at domain.tld. ns1.domain.tld is the fully qualified name of the > server. > > it used to work fine on my old CentOS 6 server so I am not sure what > has changed. Can anyone point me in the direction of where I can > configure this behaviour? From stephan at rename-it.nl Wed Oct 19 10:29:23 2016 From: stephan at rename-it.nl (Stephan Bosch) Date: Wed, 19 Oct 2016 12:29:23 +0200 Subject: sieve sending vacation message from vmail@ns1.domain.tld In-Reply-To: References: <71b362e8-3a69-076d-6376-2f3bbd39d0eb@nbmlaw.co.uk> Message-ID: <94941225-09d0-1440-1733-3884cc6dcd67@rename-it.nl> Could you send your configuration (output from `dovecot -n`)? 
Also, please provide an example scenario; i.e., for one problematic delivery provide: - The values of the variables substituted below. - The incoming e-mail message. - The Sieve script (or at least that vacation command). Regards, Stephan. Op 19-10-2016 om 11:42 schreef Matthew Broadhead: > hi, does anyone have any ideas about this issue? i have not had any > response yet > > i tried changing /etc/postfix/master.cf line: > dovecot unix - n n - - pipe > flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d > ${recipient} > > to > flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/dovecot-lda -f > ${sender} -d ${user}@${nexthop} -a ${original_recipient} > > and > -d ${user}@${domain} -a {recipient} -f ${sender} -m ${extension} > > but it didn't work > > On 12/10/2016 13:57, Matthew Broadhead wrote: >> I have a server running >> centos-release-7-2.1511.el7.centos.2.10.x86_64 with dovecot version >> 2.2.10. I am also using roundcube for webmail. when a vacation >> filter (reply with message) is created in roundcube it adds a rule to >> managesieve.sieve in the user's mailbox. everything works fine except >> the reply comes from vmail at ns1.domain.tld instead of >> user at domain.tld. ns1.domain.tld is the fully qualified name of the >> server. >> >> it used to work fine on my old CentOS 6 server so I am not sure what >> has changed. Can anyone point me in the direction of where I can >> configure this behaviour? From matthew.broadhead at nbmlaw.co.uk Wed Oct 19 10:43:02 2016 From: matthew.broadhead at nbmlaw.co.uk (Matthew Broadhead) Date: Wed, 19 Oct 2016 12:43:02 +0200 Subject: sieve sending vacation message from vmail@ns1.domain.tld In-Reply-To: <94941225-09d0-1440-1733-3884cc6dcd67@rename-it.nl> References: <71b362e8-3a69-076d-6376-2f3bbd39d0eb@nbmlaw.co.uk> <94941225-09d0-1440-1733-3884cc6dcd67@rename-it.nl> Message-ID: <7cdadba3-fd03-7d8c-1235-b428018a081c@nbmlaw.co.uk> dovecot is configured by sentora control panel to a certain extent. 
if you want those configs i can send them as well dovecot -n debug_log_path = /var/log/dovecot-debug.log dict { quotadict = mysql:/etc/sentora/configs/dovecot2/dovecot-dict-quota.conf } disable_plaintext_auth = no first_valid_gid = 12 first_valid_uid = 996 info_log_path = /var/log/dovecot-info.log lda_mailbox_autocreate = yes lda_mailbox_autosubscribe = yes listen = * lmtp_save_to_detail_mailbox = yes log_path = /var/log/dovecot.log log_timestamp = %Y-%m-%d %H:%M:%S mail_fsync = never mail_location = maildir:/var/sentora/vmail/%d/%n managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave passdb { args = /etc/sentora/configs/dovecot2/dovecot-mysql.conf driver = sql } plugin { acl = vfile:/etc/dovecot/acls quota = maildir:User quota sieve = ~/dovecot.sieve sieve_dir = ~/sieve sieve_global_dir = /var/sentora/sieve/ sieve_global_path = /var/sentora/sieve/globalfilter.sieve sieve_max_script_size = 1M sieve_vacation_send_from_recipient = yes trash = /etc/sentora/configs/dovecot2/dovecot-trash.conf } protocols = imap pop3 lmtp sieve service auth { unix_listener /var/spool/postfix/private/auth { group = postfix mode = 0666 user = postfix } unix_listener auth-userdb { group = mail mode = 0666 user = vmail } } service dict { unix_listener dict { group = mail mode = 0666 user = vmail } } service imap-login { inet_listener imap { port = 143 } process_limit = 500 process_min_avail = 2 } service imap { vsz_limit = 256 M } service managesieve-login { inet_listener sieve { port = 4190 } process_min_avail = 0 service_count = 1 vsz_limit = 64 M } service pop3-login { inet_listener pop3 { port = 110 } } ssl_cert = Could you send your configuration (output from `dovecot -n`)? 
> > Also, please provide an example scenario; i.e., for one problematic > delivery provide: > > - The values of the variables substituted below. > > - The incoming e-mail message. > > - The Sieve script (or at least that vacation command). > > Regards, > > > Stephan. > > Op 19-10-2016 om 11:42 schreef Matthew Broadhead: >> hi, does anyone have any ideas about this issue? i have not had any >> response yet >> >> i tried changing /etc/postfix/master.cf line: >> dovecot unix - n n - - pipe >> flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d >> ${recipient} >> >> to >> flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/dovecot-lda -f >> ${sender} -d ${user}@${nexthop} -a ${original_recipient} >> >> and >> -d ${user}@${domain} -a {recipient} -f ${sender} -m ${extension} >> >> but it didn't work >> >> On 12/10/2016 13:57, Matthew Broadhead wrote: >>> I have a server running >>> centos-release-7-2.1511.el7.centos.2.10.x86_64 with dovecot version >>> 2.2.10. I am also using roundcube for webmail. when a vacation >>> filter (reply with message) is created in roundcube it adds a rule >>> to managesieve.sieve in the user's mailbox. everything works fine >>> except the reply comes from vmail at ns1.domain.tld instead of >>> user at domain.tld. ns1.domain.tld is the fully qualified name of the >>> server. >>> >>> it used to work fine on my old CentOS 6 server so I am not sure what >>> has changed. Can anyone point me in the direction of where I can >>> configure this behaviour? -- Matthew Broadhead NBM Solicitors See the latest jobs available at NBM @www.nbmlaw.co.uk/recruitment.htm 32 Rainsford Road Chelmsford Essex CM1 2QG Tel: 01245 269909 Fax: 01245 261932 www.nbmlaw.co.uk Partners: WJ Broadhead NP Eason SJ Lacey CR Broadhead D Seepaul T Carley NBM Solicitors are authorised and regulated by the Solicitors Regulation Authority. We are also bound by their code of conduct. Registered no. 
00061052 NBM also provide a will writing service, see http://www.nbmlaw.co.uk/wills.htm for more information Confidentiality Information in this message is confidential and may be legally privileged. It is intended solely for the recipient to whom it is addressed. If you receive the message in error, please notify the sender and immediately destroy all copies. Security warning Please note that this e-mail has been created in the knowledge that e-mail is not a 100% secure communications medium. We advise you that you understand and observe this lack of security when e-mailing us. This e-mail does not constitute a legally binding document. No contracts may be concluded on behalf of Nigel Broadhead Mynard Solicitors by e-mail communications. If you have any queries, please contact administrator at nbmlaw.co.uk From stephan at rename-it.nl Wed Oct 19 10:51:00 2016 From: stephan at rename-it.nl (Stephan Bosch) Date: Wed, 19 Oct 2016 12:51:00 +0200 Subject: sieve duplicate locking In-Reply-To: <5804F6BD.3090302@strike.wu.ac.at> References: <5804F6BD.3090302@strike.wu.ac.at> Message-ID: Op 17-10-2016 om 18:05 schreef Alexander 'Leo' Bergolth: > Hi! > > Does the duplicate sieve plugin do any locking to avoid duplicate > parallel delivery of the same message? > > I sometimes experience duplicate mail delivery of messages with the same > message-id, despite the use of a sieve duplicate filter. According to > the log files, those messages are delivered in the same second by two > parallel dovecot-lda processes. (Duplicate filtering works fine in all > other cases.) > > RFC7352 states that the ID of a message may only be committed to the > duplicate tracking list at the _end_ of a successful script execution, > which may lead to race conditions. > Maybe I am running into this? > > Is there an easy way to serialize mail delivery using some locking > inside sieve? We've seen this before I think. It would require some changes to the duplicate tracking system. 
I'd expect the vacation command to be affected as well. > Or do I have to serialize per-user dovecot-lda delivery? Any experiences > with that? Very little. I know there is a new lmtp_user_concurrency_limit setting, but there is not much documentation apart from the commit message: https://github.com/dovecot/core/commit/42abccd9b2a5a4190bd3c14ec2dcc10d51c0f491 There are possibilities from within the MTA as well I expect. Regards, Stephan. From stephan at rename-it.nl Wed Oct 19 10:54:49 2016 From: stephan at rename-it.nl (Stephan Bosch) Date: Wed, 19 Oct 2016 12:54:49 +0200 Subject: sieve sending vacation message from vmail@ns1.domain.tld In-Reply-To: <7cdadba3-fd03-7d8c-1235-b428018a081c@nbmlaw.co.uk> References: <71b362e8-3a69-076d-6376-2f3bbd39d0eb@nbmlaw.co.uk> <94941225-09d0-1440-1733-3884cc6dcd67@rename-it.nl> <7cdadba3-fd03-7d8c-1235-b428018a081c@nbmlaw.co.uk> Message-ID: <55712b3a-4812-f0a6-c9f9-59efcdac79f7@rename-it.nl> Also, please provide an example scenario; i.e., for one problematic delivery provide: - The values of the variables substituted in the dovecot-lda command line; i.e., provide that command line. - The incoming e-mail message. Regards, Stephan. Op 19-10-2016 om 12:43 schreef Matthew Broadhead: > dovecot is configured by sentora control panel to a certain extent. 
if > you want those configs i can send them as well > > dovecot -n > > debug_log_path = /var/log/dovecot-debug.log > dict { > quotadict = mysql:/etc/sentora/configs/dovecot2/dovecot-dict-quota.conf > } > disable_plaintext_auth = no > first_valid_gid = 12 > first_valid_uid = 996 > info_log_path = /var/log/dovecot-info.log > lda_mailbox_autocreate = yes > lda_mailbox_autosubscribe = yes > listen = * > lmtp_save_to_detail_mailbox = yes > log_path = /var/log/dovecot.log > log_timestamp = %Y-%m-%d %H:%M:%S > mail_fsync = never > mail_location = maildir:/var/sentora/vmail/%d/%n > managesieve_notify_capability = mailto > managesieve_sieve_capability = fileinto reject envelope > encoded-character vacation subaddress comparator-i;ascii-numeric > relational regex imap4flags copy include variables body enotify > environment mailbox date ihave > passdb { > args = /etc/sentora/configs/dovecot2/dovecot-mysql.conf > driver = sql > } > plugin { > acl = vfile:/etc/dovecot/acls > quota = maildir:User quota > sieve = ~/dovecot.sieve > sieve_dir = ~/sieve > sieve_global_dir = /var/sentora/sieve/ > sieve_global_path = /var/sentora/sieve/globalfilter.sieve > sieve_max_script_size = 1M > sieve_vacation_send_from_recipient = yes > trash = /etc/sentora/configs/dovecot2/dovecot-trash.conf > } > protocols = imap pop3 lmtp sieve > service auth { > unix_listener /var/spool/postfix/private/auth { > group = postfix > mode = 0666 > user = postfix > } > unix_listener auth-userdb { > group = mail > mode = 0666 > user = vmail > } > } > service dict { > unix_listener dict { > group = mail > mode = 0666 > user = vmail > } > } > service imap-login { > inet_listener imap { > port = 143 > } > process_limit = 500 > process_min_avail = 2 > } > service imap { > vsz_limit = 256 M > } > service managesieve-login { > inet_listener sieve { > port = 4190 > } > process_min_avail = 0 > service_count = 1 > vsz_limit = 64 M > } > service pop3-login { > inet_listener pop3 { > port = 110 > } > } > ssl_cert = ssl_key = 
ssl_protocols = !SSLv2 !SSLv3 > userdb { > driver = prefetch > } > userdb { > args = /etc/sentora/configs/dovecot2/dovecot-mysql.conf > driver = sql > } > protocol lda { > mail_fsync = optimized > mail_plugins = quota sieve > postmaster_address = postmaster at ns1.nbmlaw.co.uk > } > protocol imap { > imap_client_workarounds = delay-newmail > mail_fsync = optimized > mail_max_userip_connections = 60 > mail_plugins = quota imap_quota trash > } > protocol lmtp { > mail_plugins = quota sieve > } > protocol pop3 { > mail_plugins = quota > pop3_client_workarounds = outlook-no-nuls oe-ns-eoh > pop3_uidl_format = %08Xu%08Xv > } > protocol sieve { > managesieve_implementation_string = Dovecot Pigeonhole > managesieve_max_compile_errors = 5 > managesieve_max_line_length = 65536 > } > > managesieve.sieve > > require ["fileinto","vacation"]; > # rule:[vacation] > if true > { > vacation :days 1 :subject "Vacation subject" text: > i am currently out of the office > > trying some line breaks > > ...zzz > . > ; > } > > On 19/10/2016 12:29, Stephan Bosch wrote: >> Could you send your configuration (output from `dovecot -n`)? >> >> Also, please provide an example scenario; i.e., for one problematic >> delivery provide: >> >> - The values of the variables substituted below. >> >> - The incoming e-mail message. >> >> - The Sieve script (or at least that vacation command). >> >> Regards, >> >> >> Stephan. >> >> Op 19-10-2016 om 11:42 schreef Matthew Broadhead: >>> hi, does anyone have any ideas about this issue? 
i have not had any >>> response yet >>> >>> i tried changing /etc/postfix/master.cf line: >>> dovecot unix - n n - - pipe >>> flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d >>> ${recipient} >>> >>> to >>> flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/dovecot-lda -f >>> ${sender} -d ${user}@${nexthop} -a ${original_recipient} >>> >>> and >>> -d ${user}@${domain} -a {recipient} -f ${sender} -m ${extension} >>> >>> but it didn't work >>> >>> On 12/10/2016 13:57, Matthew Broadhead wrote: >>>> I have a server running >>>> centos-release-7-2.1511.el7.centos.2.10.x86_64 with dovecot version >>>> 2.2.10. I am also using roundcube for webmail. when a vacation >>>> filter (reply with message) is created in roundcube it adds a rule >>>> to managesieve.sieve in the user's mailbox. everything works fine >>>> except the reply comes from vmail at ns1.domain.tld instead of >>>> user at domain.tld. ns1.domain.tld is the fully qualified name of the >>>> server. >>>> >>>> it used to work fine on my old CentOS 6 server so I am not sure >>>> what has changed. Can anyone point me in the direction of where I >>>> can configure this behaviour? > From Ralf.Hildebrandt at charite.de Wed Oct 19 11:34:48 2016 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Wed, 19 Oct 2016 13:34:48 +0200 Subject: Lmtp Fatal Error In-Reply-To: <74883bb4-a329-8fb8-6ac6-05e01dad242b@rename-it.nl> References: <1310725266.5694605.1476822512074.ref@mail.yahoo.com> <1310725266.5694605.1476822512074@mail.yahoo.com> <74883bb4-a329-8fb8-6ac6-05e01dad242b@rename-it.nl> Message-ID: <20161019113448.jkxzdvfohe72wqhs@charite.de> * Stephan Bosch : > Op 10/18/2016 om 10:28 PM schreef Sven Roellig: > > Hi,dovecot is create an Fatal Panik Error. 
> > : Fatal: master: service(lmtp): child 3369 killed with signal 6 (core dumps disabled) > > <1jx3DhuCBlg1DQAAWm89Cw>: Panic: file lda-sieve-plugin.c: line 447 (lda_sieve_execute_scripts): assertion failed: (script != NULL) > > <1jx3DhuCBlg1DQAAWm89Cw>: Error: Raw backtrace: /usr/lib/dovecot/libdovecot.so.0(+0x93fae) [0x7f547f6a3fae] -> /usr/lib/dovecot/libdovecot.so.0(+0x9409c) [0x7f547f6a409c] -> /usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7f547f63d56e] -> /usr/lib/dovecot/modules/lib90_sieve_plugin.so(+0x3ae8) [0x7f547d765ae8] -> /usr/lib/dovecot/libdovecot-lda.so.0(mail_deliver+0x49) [0x7f547fc709a9] -> dovecot/lmtp(+0x7201) [0x7f54800a1201] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x5f) [0x7f547f6b88bf] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x10a) [0x7f547f6b9d8a] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x25) [0x7f547f6b8965] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x30) [0x7f547f6b8b00] -> /usr/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f547f643ac3] -> dovecot/lmtp(main+0x1c9) [0x7f548009f2c9] -> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f547f286b45] -> dovecot/lmtp(+0x53ae) [0x7f548009f3ae] > > > > Normal mails are not delivered, but filtered mails are delivered. > > System is Debian 8.6, Dovecot Core is 2:2.3.0-alpha0-1-auto+385 AMD64 > > Problem is known and fixed. New builds are currently blocked by an > unrelated build failure, which will be resolved soon. Ah, I was wondering about that. I tried rebuilding from source using your src packages and got a fatal error during make check :) -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel.
+49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From matthew.broadhead at nbmlaw.co.uk Wed Oct 19 11:47:14 2016 From: matthew.broadhead at nbmlaw.co.uk (Matthew Broadhead) Date: Wed, 19 Oct 2016 13:47:14 +0200 Subject: sieve sending vacation message from vmail@ns1.domain.tld In-Reply-To: <55712b3a-4812-f0a6-c9f9-59efcdac79f7@rename-it.nl> References: <71b362e8-3a69-076d-6376-2f3bbd39d0eb@nbmlaw.co.uk> <94941225-09d0-1440-1733-3884cc6dcd67@rename-it.nl> <7cdadba3-fd03-7d8c-1235-b428018a081c@nbmlaw.co.uk> <55712b3a-4812-f0a6-c9f9-59efcdac79f7@rename-it.nl> Message-ID: i am not 100% sure how to give you the information you require. my current setup in /etc/postfix/master.cf is flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d ${recipient} so recipient would presumably be user at domain.tld? or do you want the real email address of one of our users? is there some way i can output this information directly e.g. in logs? the incoming email message could be anything? again i can run an example directly if you can advise the best way to do this On 19/10/2016 12:54, Stephan Bosch wrote: > Also, please provide an example scenario; i.e., for one problematic > delivery provide: > > - The values of the variables substituted in the dovecot-lda command > line; i.e., provide that command line. > - The incoming e-mail message. > > Regards, > > Stephan. > > Op 19-10-2016 om 12:43 schreef Matthew Broadhead: >> dovecot is configured by sentora control panel to a certain extent. 
>> if you want those configs i can send them as well >> >> dovecot -n >> >> debug_log_path = /var/log/dovecot-debug.log >> dict { >> quotadict = >> mysql:/etc/sentora/configs/dovecot2/dovecot-dict-quota.conf >> } >> disable_plaintext_auth = no >> first_valid_gid = 12 >> first_valid_uid = 996 >> info_log_path = /var/log/dovecot-info.log >> lda_mailbox_autocreate = yes >> lda_mailbox_autosubscribe = yes >> listen = * >> lmtp_save_to_detail_mailbox = yes >> log_path = /var/log/dovecot.log >> log_timestamp = %Y-%m-%d %H:%M:%S >> mail_fsync = never >> mail_location = maildir:/var/sentora/vmail/%d/%n >> managesieve_notify_capability = mailto >> managesieve_sieve_capability = fileinto reject envelope >> encoded-character vacation subaddress comparator-i;ascii-numeric >> relational regex imap4flags copy include variables body enotify >> environment mailbox date ihave >> passdb { >> args = /etc/sentora/configs/dovecot2/dovecot-mysql.conf >> driver = sql >> } >> plugin { >> acl = vfile:/etc/dovecot/acls >> quota = maildir:User quota >> sieve = ~/dovecot.sieve >> sieve_dir = ~/sieve >> sieve_global_dir = /var/sentora/sieve/ >> sieve_global_path = /var/sentora/sieve/globalfilter.sieve >> sieve_max_script_size = 1M >> sieve_vacation_send_from_recipient = yes >> trash = /etc/sentora/configs/dovecot2/dovecot-trash.conf >> } >> protocols = imap pop3 lmtp sieve >> service auth { >> unix_listener /var/spool/postfix/private/auth { >> group = postfix >> mode = 0666 >> user = postfix >> } >> unix_listener auth-userdb { >> group = mail >> mode = 0666 >> user = vmail >> } >> } >> service dict { >> unix_listener dict { >> group = mail >> mode = 0666 >> user = vmail >> } >> } >> service imap-login { >> inet_listener imap { >> port = 143 >> } >> process_limit = 500 >> process_min_avail = 2 >> } >> service imap { >> vsz_limit = 256 M >> } >> service managesieve-login { >> inet_listener sieve { >> port = 4190 >> } >> process_min_avail = 0 >> service_count = 1 >> vsz_limit = 64 M >> } >> 
service pop3-login { >> inet_listener pop3 { >> port = 110 >> } >> } >> ssl_cert = > ssl_key = > ssl_protocols = !SSLv2 !SSLv3 >> userdb { >> driver = prefetch >> } >> userdb { >> args = /etc/sentora/configs/dovecot2/dovecot-mysql.conf >> driver = sql >> } >> protocol lda { >> mail_fsync = optimized >> mail_plugins = quota sieve >> postmaster_address = postmaster at ns1.nbmlaw.co.uk >> } >> protocol imap { >> imap_client_workarounds = delay-newmail >> mail_fsync = optimized >> mail_max_userip_connections = 60 >> mail_plugins = quota imap_quota trash >> } >> protocol lmtp { >> mail_plugins = quota sieve >> } >> protocol pop3 { >> mail_plugins = quota >> pop3_client_workarounds = outlook-no-nuls oe-ns-eoh >> pop3_uidl_format = %08Xu%08Xv >> } >> protocol sieve { >> managesieve_implementation_string = Dovecot Pigeonhole >> managesieve_max_compile_errors = 5 >> managesieve_max_line_length = 65536 >> } >> >> managesieve.sieve >> >> require ["fileinto","vacation"]; >> # rule:[vacation] >> if true >> { >> vacation :days 1 :subject "Vacation subject" text: >> i am currently out of the office >> >> trying some line breaks >> >> ...zzz >> . >> ; >> } >> >> On 19/10/2016 12:29, Stephan Bosch wrote: >>> Could you send your configuration (output from `dovecot -n`)? >>> >>> Also, please provide an example scenario; i.e., for one problematic >>> delivery provide: >>> >>> - The values of the variables substituted below. >>> >>> - The incoming e-mail message. >>> >>> - The Sieve script (or at least that vacation command). >>> >>> Regards, >>> >>> >>> Stephan. >>> >>> Op 19-10-2016 om 11:42 schreef Matthew Broadhead: >>>> hi, does anyone have any ideas about this issue? 
i have not had >>>> any response yet >>>> >>>> i tried changing /etc/postfix/master.cf line: >>>> dovecot unix - n n - - pipe >>>> flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d >>>> ${recipient} >>>> >>>> to >>>> flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/dovecot-lda -f >>>> ${sender} -d ${user}@${nexthop} -a ${original_recipient} >>>> >>>> and >>>> -d ${user}@${domain} -a {recipient} -f ${sender} -m ${extension} >>>> >>>> but it didn't work >>>> >>>> On 12/10/2016 13:57, Matthew Broadhead wrote: >>>>> I have a server running >>>>> centos-release-7-2.1511.el7.centos.2.10.x86_64 with dovecot >>>>> version 2.2.10. I am also using roundcube for webmail. when a >>>>> vacation filter (reply with message) is created in roundcube it >>>>> adds a rule to managesieve.sieve in the user's mailbox. everything >>>>> works fine except the reply comes from vmail at ns1.domain.tld >>>>> instead of user at domain.tld. ns1.domain.tld is the fully qualified >>>>> name of the server. >>>>> >>>>> it used to work fine on my old CentOS 6 server so I am not sure >>>>> what has changed. Can anyone point me in the direction of where I >>>>> can configure this behaviour? >> > From stephan at rename-it.nl Wed Oct 19 11:54:45 2016 From: stephan at rename-it.nl (Stephan Bosch) Date: Wed, 19 Oct 2016 13:54:45 +0200 Subject: sieve sending vacation message from vmail@ns1.domain.tld In-Reply-To: References: <71b362e8-3a69-076d-6376-2f3bbd39d0eb@nbmlaw.co.uk> <94941225-09d0-1440-1733-3884cc6dcd67@rename-it.nl> <7cdadba3-fd03-7d8c-1235-b428018a081c@nbmlaw.co.uk> <55712b3a-4812-f0a6-c9f9-59efcdac79f7@rename-it.nl> Message-ID: <8260ce16-bc94-e3a9-13d1-f1204e6ae525@rename-it.nl> Op 19-10-2016 om 13:47 schreef Matthew Broadhead: > i am not 100% sure how to give you the information you require. > > my current setup in /etc/postfix/master.cf is > flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d > ${recipient} > so recipient would presumably be user at domain.tld? 
or do you want the > real email address of one of our users? is there some way i can > output this information directly e.g. in logs? I am no Postfix expert. I just need to know which values are being passed to dovecot-lda with what options. I'd assume Postfix allows logging the command line or at least the values of these variables. > the incoming email message could be anything? again i can run an > example directly if you can advise the best way to do this As long as the problem occurs with this message. BTW, it would also be helpful to have the Dovecot logs from this delivery, with mail_debug configured to "yes". Regards, Stephan. > > On 19/10/2016 12:54, Stephan Bosch wrote: >> Also, please provide an example scenario; i.e., for one problematic >> delivery provide: >> >> - The values of the variables substituted in the dovecot-lda command >> line; i.e., provide that command line. >> - The incoming e-mail message. >> >> Regards, >> >> Stephan. >> >> Op 19-10-2016 om 12:43 schreef Matthew Broadhead: >>> dovecot is configured by sentora control panel to a certain extent. 
>>> [rest of quoted message trimmed -- the full `dovecot -n` output, the Sieve script, and the earlier thread are quoted verbatim earlier in this thread]
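Stephan's point above -- that Postfix should be able to log the command line it passes to deliver -- can also be done with a small wrapper script. This is an editor's sketch, not something proposed in the thread: the script name, log path, and deliver path are all assumptions; you would point the `argv=` in master.cf at the wrapper instead of the real binary.

```shell
#!/bin/sh
# Hypothetical wrapper (e.g. /usr/local/bin/deliver-log.sh -- name and
# paths are assumptions). Pointing master.cf's argv= at a wrapper like
# this records exactly which values Postfix substituted for ${recipient},
# ${sender} etc. before handing the message to deliver.
LOG="${DELIVER_LOG:-/tmp/deliver-args.log}"   # assumed log location

log_args() {
    # Append one timestamped line per delivery with every argument received.
    printf '%s deliver args: %s\n' "$(date '+%F %T')" "$*" >> "$LOG"
}

log_args "$@"
# A real wrapper would now hand off to the actual binary
# (commented out so this sketch stays self-contained):
# exec /usr/libexec/dovecot/deliver "$@"
```

One problematic delivery through the wrapper then leaves a log line showing the exact dovecot-lda command line Stephan asked for.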
>>> >> From matthew.broadhead at nbmlaw.co.uk Wed Oct 19 12:28:22 2016 From: matthew.broadhead at nbmlaw.co.uk (Matthew Broadhead) Date: Wed, 19 Oct 2016 14:28:22 +0200 Subject: sieve sending vacation message from vmail@ns1.domain.tld In-Reply-To: <8260ce16-bc94-e3a9-13d1-f1204e6ae525@rename-it.nl> References: <71b362e8-3a69-076d-6376-2f3bbd39d0eb@nbmlaw.co.uk> <94941225-09d0-1440-1733-3884cc6dcd67@rename-it.nl> <7cdadba3-fd03-7d8c-1235-b428018a081c@nbmlaw.co.uk> <55712b3a-4812-f0a6-c9f9-59efcdac79f7@rename-it.nl> <8260ce16-bc94-e3a9-13d1-f1204e6ae525@rename-it.nl> Message-ID: <4134cbe7-5e9a-5393-093f-33ac8e63a4f3@nbmlaw.co.uk> with mail_debug set to yes the dovecot-debug.log for an email sent to ufuk.koksal at nbmlaw.co.uk is 2016-10-19 13:25:41lda: Debug: Loading modules from directory: /usr/lib64/dovecot 2016-10-19 13:25:41lda: Debug: Module loaded: /usr/lib64/dovecot/lib10_quota_plugin.so 2016-10-19 13:25:41lda: Debug: Module loaded: /usr/lib64/dovecot/lib90_sieve_plugin.so 2016-10-19 13:25:41lda: Debug: auth input: ufuk.koksal at nbmlaw.co.uk home=/var/sentora/vmail/nbmlaw.co.uk/ufuk.koksal/ mail=maildir:/var/sentora/vmail/nbmlaw.co.uk/ufuk.koksal/ uid=996 gid=12 quota_rule=*:bytes=10485760000 2016-10-19 13:25:41lda: Debug: Added userdb setting: mail=maildir:/var/sentora/vmail/nbmlaw.co.uk/ufuk.koksal/ 2016-10-19 13:25:41lda: Debug: Added userdb setting: plugin/quota_rule=*:bytes=10485760000 2016-10-19 13:25:41lda(ufuk.koksal at nbmlaw.co.uk): Debug: Effective uid=996, gid=12, home=/var/sentora/vmail/nbmlaw.co.uk/ufuk.koksal/ 2016-10-19 13:25:41lda(ufuk.koksal at nbmlaw.co.uk): Debug: Quota root: name=User quota backend=maildir args= 2016-10-19 13:25:41lda(ufuk.koksal at nbmlaw.co.uk): Debug: Quota rule: root=User quota mailbox=* bytes=10485760000 messages=0 2016-10-19 13:25:41lda(ufuk.koksal at nbmlaw.co.uk): Debug: Quota grace: root=User quota bytes=1048576000 (10%) 2016-10-19 13:25:41lda(ufuk.koksal at nbmlaw.co.uk): Debug: maildir++: 
root=/var/sentora/vmail/nbmlaw.co.uk/ufuk.koksal, index=, indexpvt=, control=, inbox=/var/sentora/vmail/nbmlaw.co.uk/ufuk.koksal, alt= 2016-10-19 13:25:41lda(ufuk.koksal at nbmlaw.co.uk): Debug: Quota root: name=User quota backend=maildir args= 2016-10-19 13:25:41lda(ufuk.koksal at nbmlaw.co.uk): Debug: Quota grace: root=User quota bytes=0 (10%) 2016-10-19 13:25:41lda(ufuk.koksal at nbmlaw.co.uk): Debug: none: root=, index=, indexpvt=, control=, inbox=, alt= 2016-10-19 13:25:41lda(ufuk.koksal at nbmlaw.co.uk): Debug: Destination address: ufuk.koksal at nbmlaw.co.uk (source: user at hostname) 2016-10-19 13:25:41lda(ufuk.koksal at nbmlaw.co.uk): Debug: sieve: Pigeonhole version 0.4.2 initializing 2016-10-19 13:25:41lda(ufuk.koksal at nbmlaw.co.uk): Debug: sieve: using the following location for user's Sieve script: /var/sentora/vmail/nbmlaw.co.uk/ufuk.koksal//dovecot.sieve;name=main script 2016-10-19 13:25:41lda(ufuk.koksal at nbmlaw.co.uk): Debug: sieve: loading script /var/sentora/vmail/nbmlaw.co.uk/ufuk.koksal//dovecot.sieve;name=main script 2016-10-19 13:25:41lda(ufuk.koksal at nbmlaw.co.uk): Debug: sieve: script binary /var/sentora/vmail/nbmlaw.co.uk/ufuk.koksal//dovecot.svbin successfully loaded 2016-10-19 13:25:41lda(ufuk.koksal at nbmlaw.co.uk): Debug: sieve: binary save: not saving binary /var/sentora/vmail/nbmlaw.co.uk/ufuk.koksal//dovecot.svbin, because it is already stored 2016-10-19 13:25:41lda(ufuk.koksal at nbmlaw.co.uk): Debug: sieve: executing script from /var/sentora/vmail/nbmlaw.co.uk/ufuk.koksal//dovecot.svbin i will see if there is any output for postfix On 19/10/2016 13:54, Stephan Bosch wrote: > > > Op 19-10-2016 om 13:47 schreef Matthew Broadhead: >> i am not 100% sure how to give you the information you require. >> >> my current setup in /etc/postfix/master.cf is >> flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d >> ${recipient} >> so recipient would presumably be user at domain.tld? 
or do you want the >> real email address of one of our users? is there some way i can >> output this information directly e.g. in logs? > > I am no Postfix expert. I just need to know which values are being > passed to dovecot-lda with what options. I'd assume Postfix allows > logging the command line or at least the values of these variables. > >> the incoming email message could be anything? again i can run an >> example directly if you can advise the best way to do this > > As long as the problem occurs with this message. > > BTW, it would also be helpful to have the Dovecot logs from this > delivery, with mail_debug configured to "yes". > > Regards, > > Stephan. > >> >> On 19/10/2016 12:54, Stephan Bosch wrote: >>> Also, please provide an example scenario; i.e., for one problematic >>> delivery provide: >>> >>> - The values of the variables substituted in the dovecot-lda command >>> line; i.e., provide that command line. >>> - The incoming e-mail message. >>> >>> Regards, >>> >>> Stephan. >>> >>> Op 19-10-2016 om 12:43 schreef Matthew Broadhead: >>>> dovecot is configured by sentora control panel to a certain extent. 
>>>> [rest of quoted message trimmed -- the full `dovecot -n` output, the Sieve script, and the earlier thread are quoted verbatim earlier in this thread]
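A possible workaround for the From address, independent of how dovecot-lda is invoked: the Sieve vacation command (RFC 5230) accepts an explicit `:from` tag that sets the From header of the auto-reply. This is an editor's sketch rather than a fix proposed in the thread; the address is a placeholder, and Roundcube's filter UI may not expose the tag, so the script might need to be edited by hand:

```sieve
require ["fileinto","vacation"];
# rule:[vacation] -- same rule as in managesieve.sieve above,
# with an explicit :from address added (placeholder value)
if true
{
vacation :days 1 :from "user@domain.tld" :subject "Vacation subject" text:
i am currently out of the office

trying some line breaks

...zzz
.
;
}
```

Note this controls only the From header of the reply, not the envelope sender; the envelope side is what `sieve_vacation_send_from_recipient = yes` (already set in the quoted config) is meant to influence.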
>>>> >>> > From matthew.broadhead at nbmlaw.co.uk Wed Oct 19 12:49:29 2016 From: matthew.broadhead at nbmlaw.co.uk (Matthew Broadhead) Date: Wed, 19 Oct 2016 14:49:29 +0200 Subject: sieve sending vacation message from vmail@ns1.domain.tld In-Reply-To: <8260ce16-bc94-e3a9-13d1-f1204e6ae525@rename-it.nl> References: <71b362e8-3a69-076d-6376-2f3bbd39d0eb@nbmlaw.co.uk> <94941225-09d0-1440-1733-3884cc6dcd67@rename-it.nl> <7cdadba3-fd03-7d8c-1235-b428018a081c@nbmlaw.co.uk> <55712b3a-4812-f0a6-c9f9-59efcdac79f7@rename-it.nl> <8260ce16-bc94-e3a9-13d1-f1204e6ae525@rename-it.nl> Message-ID: <344d3d36-b905-5a90-e0ea-17d556076838@nbmlaw.co.uk> /var/log/maillog showed this Oct 19 13:25:41 ns1 postfix/smtpd[1298]: 7599A2C19C6: client=unknown[127.0.0.1] Oct 19 13:25:41 ns1 postfix/cleanup[1085]: 7599A2C19C6: message-id= Oct 19 13:25:41 ns1 postfix/qmgr[1059]: 7599A2C19C6: from=, size=3190, nrcpt=1 (queue active) Oct 19 13:25:41 ns1 amavis[32367]: (32367-17) Passed CLEAN {RelayedInternal}, ORIGINATING LOCAL [80.30.255.180]:54566 [80.30.255.180] -> , Queue-ID: BFFA62C1965, Message-ID: , mail_id: TlJQ9xQhWjQk, Hits: -2.9, size: 2235, queued_as: 7599A2C19C6, dkim_new=foo:nbmlaw.co.uk, 531 ms Oct 19 13:25:41 ns1 postfix/smtp[1135]: BFFA62C1965: to=, relay=127.0.0.1[127.0.0.1]:10026, delay=0.76, delays=0.22/0/0/0.53, dsn=2.0.0, status=sent (250 2.0.0 from MTA(smtp:[127.0.0.1]:10027): 250 2.0.0 Ok: queued as 7599A2C19C6) Oct 19 13:25:41 ns1 postfix/qmgr[1059]: BFFA62C1965: removed Oct 19 13:25:41 ns1 postfix/smtpd[1114]: connect from ns1.nbmlaw.co.uk[217.174.253.19] Oct 19 13:25:41 ns1 postfix/smtpd[1114]: NOQUEUE: filter: RCPT from ns1.nbmlaw.co.uk[217.174.253.19]: : Sender address triggers FILTER smtp-amavis:[127.0.0.1]:10026; from= to= proto=SMTP helo= Oct 19 13:25:41 ns1 postfix/smtpd[1114]: 8A03F2C1965: client=ns1.nbmlaw.co.uk[217.174.253.19] Oct 19 13:25:41 ns1 postfix/cleanup[1085]: 8A03F2C1965: message-id= Oct 19 13:25:41 ns1 opendmarc[2430]: implicit authentication service: 
ns1.nbmlaw.co.uk Oct 19 13:25:41 ns1 opendmarc[2430]: 8A03F2C1965: ns1.nbmlaw.co.uk fail Oct 19 13:25:41 ns1 postfix/qmgr[1059]: 8A03F2C1965: from=, size=1077, nrcpt=1 (queue active) Oct 19 13:25:41 ns1 postfix/smtpd[1114]: disconnect from ns1.nbmlaw.co.uk[217.174.253.19] Oct 19 13:25:41 ns1 sSMTP[1895]: Sent mail for vmail at ns1.nbmlaw.co.uk (221 2.0.0 Bye) uid=996 username=vmail outbytes=971 Oct 19 13:25:41 ns1 postfix/smtpd[1898]: connect from unknown[127.0.0.1] Oct 19 13:25:41 ns1 postfix/pipe[1162]: 7599A2C19C6: to=, relay=dovecot, delay=0.46, delays=0/0/0/0.45, dsn=2.0.0, status=sent (delivered via dovecot service) Oct 19 13:25:41 ns1 postfix/qmgr[1059]: 7599A2C19C6: removed Oct 19 13:25:41 ns1 postfix/smtpd[1898]: E53472C19C6: client=unknown[127.0.0.1] Oct 19 13:25:41 ns1 postfix/cleanup[1085]: E53472C19C6: message-id= Oct 19 13:25:41 ns1 postfix/qmgr[1059]: E53472C19C6: from=, size=1619, nrcpt=1 (queue active) Oct 19 13:25:41 ns1 amavis[1885]: (01885-01) Passed CLEAN {RelayedInternal}, ORIGINATING LOCAL [217.174.253.19]:40960 [217.174.253.19] -> , Queue-ID: 8A03F2C1965, Message-ID: , mail_id: mOMO97yjVqjM, Hits: -2.211, size: 1301, queued_as: E53472C19C6, 296 ms Oct 19 13:25:41 ns1 postfix/smtp[1217]: 8A03F2C1965: to=, relay=127.0.0.1[127.0.0.1]:10026, delay=0.38, delays=0.08/0/0/0.29, dsn=2.0.0, status=sent (250 2.0.0 from MTA(smtp:[127.0.0.1]:10027): 250 2.0.0 Ok: queued as E53472C19C6) Oct 19 13:25:41 ns1 postfix/qmgr[1059]: 8A03F2C1965: removed Oct 19 13:25:42 ns1 postfix/pipe[1303]: E53472C19C6: to=, relay=dovecot, delay=0.14, delays=0/0/0/0.14, dsn=2.0.0, status=sent (delivered via dovecot service) Oct 19 13:25:42 ns1 postfix/qmgr[1059]: E53472C19C6: removed On 19/10/2016 13:54, Stephan Bosch wrote: > > > Op 19-10-2016 om 13:47 schreef Matthew Broadhead: >> i am not 100% sure how to give you the information you require. 
>> >> my current setup in /etc/postfix/master.cf is >> flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d >> ${recipient} >> so recipient would presumably be user at domain.tld? or do you want the >> real email address of one of our users? is there some way i can >> output this information directly e.g. in logs? > > I am no Postfix expert. I just need to know which values are being > passed to dovecot-lda with what options. I'd assume Postfix allows > logging the command line or at least the values of these variables. > >> the incoming email message could be anything? again i can run an >> example directly if you can advise the best way to do this > > As long as the problem occurs with this message. > > BTW, it would also be helpful to have the Dovecot logs from this > delivery, with mail_debug configured to "yes". > > Regards, > > Stephan. > >> >> On 19/10/2016 12:54, Stephan Bosch wrote: >>> Also, please provide an example scenario; i.e., for one problematic >>> delivery provide: >>> >>> - The values of the variables substituted in the dovecot-lda command >>> line; i.e., provide that command line. >>> - The incoming e-mail message. >>> >>> Regards, >>> >>> Stephan. >>> >>> Op 19-10-2016 om 12:43 schreef Matthew Broadhead: >>>> dovecot is configured by sentora control panel to a certain extent. 
>>>> if you want those configs i can send them as well >>>> >>>> dovecot -n >>>> >>>> debug_log_path = /var/log/dovecot-debug.log >>>> dict { >>>> quotadict = >>>> mysql:/etc/sentora/configs/dovecot2/dovecot-dict-quota.conf >>>> } >>>> disable_plaintext_auth = no >>>> first_valid_gid = 12 >>>> first_valid_uid = 996 >>>> info_log_path = /var/log/dovecot-info.log >>>> lda_mailbox_autocreate = yes >>>> lda_mailbox_autosubscribe = yes >>>> listen = * >>>> lmtp_save_to_detail_mailbox = yes >>>> log_path = /var/log/dovecot.log >>>> log_timestamp = %Y-%m-%d %H:%M:%S >>>> mail_fsync = never >>>> mail_location = maildir:/var/sentora/vmail/%d/%n >>>> managesieve_notify_capability = mailto >>>> managesieve_sieve_capability = fileinto reject envelope >>>> encoded-character vacation subaddress comparator-i;ascii-numeric >>>> relational regex imap4flags copy include variables body enotify >>>> environment mailbox date ihave >>>> passdb { >>>> args = /etc/sentora/configs/dovecot2/dovecot-mysql.conf >>>> driver = sql >>>> } >>>> plugin { >>>> acl = vfile:/etc/dovecot/acls >>>> quota = maildir:User quota >>>> sieve = ~/dovecot.sieve >>>> sieve_dir = ~/sieve >>>> sieve_global_dir = /var/sentora/sieve/ >>>> sieve_global_path = /var/sentora/sieve/globalfilter.sieve >>>> sieve_max_script_size = 1M >>>> sieve_vacation_send_from_recipient = yes >>>> trash = /etc/sentora/configs/dovecot2/dovecot-trash.conf >>>> } >>>> protocols = imap pop3 lmtp sieve >>>> service auth { >>>> unix_listener /var/spool/postfix/private/auth { >>>> group = postfix >>>> mode = 0666 >>>> user = postfix >>>> } >>>> unix_listener auth-userdb { >>>> group = mail >>>> mode = 0666 >>>> user = vmail >>>> } >>>> } >>>> service dict { >>>> unix_listener dict { >>>> group = mail >>>> mode = 0666 >>>> user = vmail >>>> } >>>> } >>>> service imap-login { >>>> inet_listener imap { >>>> port = 143 >>>> } >>>> process_limit = 500 >>>> process_min_avail = 2 >>>> } >>>> service imap { >>>> vsz_limit = 256 M >>>> } >>>> service 
managesieve-login { >>>> inet_listener sieve { >>>> port = 4190 >>>> } >>>> process_min_avail = 0 >>>> service_count = 1 >>>> vsz_limit = 64 M >>>> } >>>> service pop3-login { >>>> inet_listener pop3 { >>>> port = 110 >>>> } >>>> } >>>> ssl_cert = >>> ssl_key = >>> ssl_protocols = !SSLv2 !SSLv3 >>>> userdb { >>>> driver = prefetch >>>> } >>>> userdb { >>>> args = /etc/sentora/configs/dovecot2/dovecot-mysql.conf >>>> driver = sql >>>> } >>>> protocol lda { >>>> mail_fsync = optimized >>>> mail_plugins = quota sieve >>>> postmaster_address = postmaster at ns1.nbmlaw.co.uk >>>> } >>>> protocol imap { >>>> imap_client_workarounds = delay-newmail >>>> mail_fsync = optimized >>>> mail_max_userip_connections = 60 >>>> mail_plugins = quota imap_quota trash >>>> } >>>> protocol lmtp { >>>> mail_plugins = quota sieve >>>> } >>>> protocol pop3 { >>>> mail_plugins = quota >>>> pop3_client_workarounds = outlook-no-nuls oe-ns-eoh >>>> pop3_uidl_format = %08Xu%08Xv >>>> } >>>> protocol sieve { >>>> managesieve_implementation_string = Dovecot Pigeonhole >>>> managesieve_max_compile_errors = 5 >>>> managesieve_max_line_length = 65536 >>>> } >>>> >>>> managesieve.sieve >>>> >>>> require ["fileinto","vacation"]; >>>> # rule:[vacation] >>>> if true >>>> { >>>> vacation :days 1 :subject "Vacation subject" text: >>>> i am currently out of the office >>>> >>>> trying some line breaks >>>> >>>> ...zzz >>>> . >>>> ; >>>> } >>>> >>>> On 19/10/2016 12:29, Stephan Bosch wrote: >>>>> Could you send your configuration (output from `dovecot -n`)? >>>>> >>>>> Also, please provide an example scenario; i.e., for one >>>>> problematic delivery provide: >>>>> >>>>> - The values of the variables substituted below. >>>>> >>>>> - The incoming e-mail message. >>>>> >>>>> - The Sieve script (or at least that vacation command). >>>>> >>>>> Regards, >>>>> >>>>> >>>>> Stephan. >>>>> >>>>> Op 19-10-2016 om 11:42 schreef Matthew Broadhead: >>>>>> hi, does anyone have any ideas about this issue? 
i have not had >>>>>> any response yet >>>>>> >>>>>> i tried changing /etc/postfix/master.cf line: >>>>>> dovecot unix - n n - - pipe >>>>>> flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d >>>>>> ${recipient} >>>>>> >>>>>> to >>>>>> flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/dovecot-lda >>>>>> -f ${sender} -d ${user}@${nexthop} -a ${original_recipient} >>>>>> >>>>>> and >>>>>> -d ${user}@${domain} -a {recipient} -f ${sender} -m ${extension} >>>>>> >>>>>> but it didn't work >>>>>> >>>>>> On 12/10/2016 13:57, Matthew Broadhead wrote: >>>>>>> I have a server running >>>>>>> centos-release-7-2.1511.el7.centos.2.10.x86_64 with dovecot >>>>>>> version 2.2.10. I am also using roundcube for webmail. when a >>>>>>> vacation filter (reply with message) is created in roundcube it >>>>>>> adds a rule to managesieve.sieve in the user's mailbox. >>>>>>> everything works fine except the reply comes from >>>>>>> vmail at ns1.domain.tld instead of user at domain.tld. ns1.domain.tld >>>>>>> is the fully qualified name of the server. >>>>>>> >>>>>>> it used to work fine on my old CentOS 6 server so I am not sure >>>>>>> what has changed. Can anyone point me in the direction of where >>>>>>> I can configure this behaviour? >>>> >>> >

From stephan at rename-it.nl Wed Oct 19 17:04:55 2016 From: stephan at rename-it.nl (Stephan Bosch) Date: Wed, 19 Oct 2016 19:04:55 +0200 Subject: Lmtp Fatal Error In-Reply-To: <20161019113448.jkxzdvfohe72wqhs@charite.de> References: <1310725266.5694605.1476822512074.ref@mail.yahoo.com> <1310725266.5694605.1476822512074@mail.yahoo.com> <74883bb4-a329-8fb8-6ac6-05e01dad242b@rename-it.nl> <20161019113448.jkxzdvfohe72wqhs@charite.de> Message-ID: <59b1981c-c193-0a1a-5315-520f3e7169f0@rename-it.nl>

Op 19-10-2016 om 13:34 schreef Ralf Hildebrandt:
> * Stephan Bosch :
>> Op 10/18/2016 om 10:28 PM schreef Sven Roellig:
>>> Hi, dovecot is producing a fatal panic error.
>>> : Fatal: master: service(lmtp): child 3369 killed with signal 6 (core dumps disabled)
>>> <1jx3DhuCBlg1DQAAWm89Cw>: Panic: file lda-sieve-plugin.c: line 447 (lda_sieve_execute_scripts): assertion failed: (script != NULL)
>>> <1jx3DhuCBlg1DQAAWm89Cw>: Error: Raw backtrace: /usr/lib/dovecot/libdovecot.so.0(+0x93fae) [0x7f547f6a3fae] -> /usr/lib/dovecot/libdovecot.so.0(+0x9409c) [0x7f547f6a409c] -> /usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7f547f63d56e] -> /usr/lib/dovecot/modules/lib90_sieve_plugin.so(+0x3ae8) [0x7f547d765ae8] -> /usr/lib/dovecot/libdovecot-lda.so.0(mail_deliver+0x49) [0x7f547fc709a9] -> dovecot/lmtp(+0x7201) [0x7f54800a1201] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x5f) [0x7f547f6b88bf] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x10a) [0x7f547f6b9d8a] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x25) [0x7f547f6b8965] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x30) [0x7f547f6b8b00] -> /usr/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f547f643ac3] -> dovecot/lmtp(main+0x1c9) [0x7f548009f2c9] -> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f547f286b45] -> dovecot/lmtp(+0x53ae) [0x7f548009f3ae]
>>>
>>> Normal mails are not delivered, but filtered mails are delivered.
>>> System is Debian 8.6, Dovecot Core is 2:2.3.0-alpha0-1-auto+385 AMD64
>> Problem is known and fixed. New builds are currently blocked by an unrelated build failure, which will be resolved soon.
> Ah, I was wondering about that. I tried rebuilding from source using your src packages and got a fatal error during make check :)

New releases are available.

Regards,

Stephan.
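The panic above was logged with "core dumps disabled", which leaves the raw backtrace as the only clue. For anyone who needs an actual core file to report a crash like this, the usual Linux knobs are roughly the following. This is a hedged sketch based on common practice, not on anything stated in this thread; the core directory path is an assumption:

```shell
# Enable core dumps for crashing Dovecot processes (run as root).
# Dovecot processes often drop privileges, so fs.suid_dumpable is needed too.
ulimit -c unlimited                               # lift the per-process core size limit
sysctl -w fs.suid_dumpable=2                      # allow privilege-dropping processes to dump
mkdir -p /var/core && chmod 1777 /var/core        # world-writable sticky dir for cores (assumed path)
sysctl -w kernel.core_pattern='/var/core/core.%e.%p'
# After the next crash, inspect the core with gdb, e.g.:
#   gdb /usr/lib/dovecot/lmtp /var/core/core.lmtp.<pid>
#   (gdb) bt full
```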
From sven_roellig at yahoo.de Wed Oct 19 17:18:42 2016 From: sven_roellig at yahoo.de (Sven Roellig) Date: Wed, 19 Oct 2016 17:18:42 +0000 (UTC) Subject: Lmtp Fatal Error In-Reply-To: <59b1981c-c193-0a1a-5315-520f3e7169f0@rename-it.nl> References: <1310725266.5694605.1476822512074.ref@mail.yahoo.com> <1310725266.5694605.1476822512074@mail.yahoo.com> <74883bb4-a329-8fb8-6ac6-05e01dad242b@rename-it.nl> <20161019113448.jkxzdvfohe72wqhs@charite.de> <59b1981c-c193-0a1a-5315-520f3e7169f0@rename-it.nl> Message-ID: <176938449.7235966.1476897522224@mail.yahoo.com>

Hi, installed 30 minutes ago and it runs again. Thank you for the great work.

Sven

From: Stephan Bosch
To: dovecot at dovecot.org
Sent: Wednesday, 19 October 2016, 19:04
Subject: Re: Lmtp Fatal Error

[...]

From tss at iki.fi Wed Oct 19 21:01:03 2016 From: tss at iki.fi (Timo Sirainen) Date: Thu, 20 Oct 2016 00:01:03 +0300 Subject: v2.2.26 release candidate released Message-ID:

http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz
http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz.sig

There are quite a lot of changes since v2.2.25. Please try out this RC so we can get a good and stable v2.2.26 out.

* master: Removed hardcoded 511 backlog limit for listen(). The kernel should limit this as needed.
* doveadm import: Source user is now initialized the same as target user. Added -U parameter to override the source user.
* Mailbox names are no longer limited to 16 hierarchy levels. We'll check another way to make sure mailbox names can't grow larger than 4096 bytes.

+ Added a concept of "alternative usernames" by returning user_* extra field(s) in passdb. doveadm proxy list shows these alt usernames in "doveadm proxy list" output. "doveadm director&proxy kick" adds -f parameter. The alt usernames don't have to be unique, so this allows creation of user groups and kicking them in one command.
+ auth: passdb/userdb dict allows now %variables in key settings.
+ auth: If passdb returns noauthenticate=yes extra field, assume that it only set extra fields and authentication wasn't actually performed.
+ auth: passdb static now supports password={scheme} prefix.
+ imapc: Added imapc_max_line_length to limit maximum memory usage.
+ imap, pop3: Added rawlog_dir setting to store IMAP/POP3 traffic logs. This replaces at least partially the rawlog plugin.
+ dsync: Added dsync_features=empty-header-workaround setting. This makes incremental dsyncs work better for servers that randomly return empty headers for mails. When an empty header is seen for an existing mail, dsync assumes that it matches the local mail.
+ doveadm sync/backup: Added -I parameter to skip too large mails.
+ doveadm sync/backup: Fixed -t parameter and added -e for "end date".
+ doveadm mailbox metadata: Added -s parameter to allow accessing server metadata by using empty mailbox name.

- master process's listener socket was leaked to all child processes. This might have allowed untrusted processes to capture and prevent "doveadm service stop" commands from working.
- auth: userdb fields weren't passed to auth-workers, so %{userdb:*} from previous userdbs didn't work there.
- auth: Each userdb lookup from cache reset its TTL.
- auth: Fixed auth_bind=yes + sasl_bind=yes to work together.
- auth: Blocking userdb lookups reset extra fields set by previous userdbs.
- auth: Cache keys didn't include %{passdb:*} and %{userdb:*}
- auth-policy: Fixed crash due to using already-freed memory if policy lookup takes longer than auth request exists.
- lib-auth: Unescape passdb/userdb extra fields. Mainly affected returning extra fields with LFs or TABs.
- lmtp_user_concurrency_limit>0 setting was logging unnecessary anvil errors.
- lmtp_user_concurrency_limit is now checked before quota check with lmtp_rcpt_check_quota=yes to avoid unnecessary quota work.
- lmtp: %{userdb:*} variables didn't work in mail_log_prefix
- autoexpunge settings for mailboxes with wildcards didn't work when namespace prefix was non-empty.
- Fixed writing >2GB to iostream-temp files (used by fs-compress, fs-metawrap, doveadm-http)
- director: Ignore duplicates in director_servers setting.
- zlib, IMAP BINARY: Fixed internal caching when accessing multiple newly created mails. They all had UID=0 and the next mail could have wrongly used the previously cached mail.
- doveadm stats reset wasn't resetting all the stats.
- auth_stats=yes: Don't update num_logins, since it doubles them when using with mail stats.
- quota count: Fixed deadlocks when updating vsize header.
- dict-quota: Fixed crashes happening due to memory corruption.
- dict proxy: Fixed various timeout-related bugs.
- doveadm proxying: Fixed -A and -u wildcard handling.
- doveadm proxying: Fixed hangs and bugs related to printing.
- imap: Fixed wrongly triggering assert-crash in client_check_command_hangs.
- imap proxy: Don't send ID command pipelined with nopipelining=yes
- imap-hibernate: Don't execute quota_over_script or last_login after un-hibernation.
- imap-hibernate: Don't un-hibernate if client sends DONE+IDLE in one IP packet.
- imap-hibernate: Fixed various failures when un-hibernating.
- fts: fts_autoindex=yes was broken in 2.2.25 unless fts_autoindex_exclude settings existed.
- fts-solr: Fixed searching multiple mailboxes (patch by x16a0)
- doveadm fetch body.snippet wasn't working in 2.2.25. Also fixed a crash with certain emails.
- pop3-migration + dbox: Various fixes related to POP3 UIDL optimization in 2.2.25.
- pop3-migration: Fixed "truncated email header" workaround.

From stephan at rename-it.nl Wed Oct 19 21:18:41 2016 From: stephan at rename-it.nl (Stephan Bosch) Date: Wed, 19 Oct 2016 23:18:41 +0200 Subject: v2.2.26 release candidate released In-Reply-To: References: Message-ID: <6d469358-cad8-f474-0b96-429a26694caf@rename-it.nl>

Op 10/19/2016 om 11:01 PM schreef Timo Sirainen:
> http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz
> http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz.sig
>
> There are quite a lot of changes since v2.2.25. Please try out this RC so we can get a good and stable v2.2.26 out.
Pigeonhole release candidate will follow soon. Old version apparently still compiles.

Regards,

Stephan.

From larryrtx at gmail.com Wed Oct 19 23:13:58 2016 From: larryrtx at gmail.com (Larry Rosenman) Date: Wed, 19 Oct 2016 18:13:58 -0500 Subject: v2.2.26 release candidate released In-Reply-To: References: Message-ID:

Is there a commit against 2.2.25 for:

- fts: fts_autoindex=yes was broken in 2.2.25 unless fts_autoindex_exclude settings existed.

that I could use? thanks!

On Wed, Oct 19, 2016 at 4:01 PM, Timo Sirainen wrote:
> http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz
> http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz.sig
>
> There are quite a lot of changes since v2.2.25. Please try out this RC so we can get a good and stable v2.2.26 out.
> [...]
-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com
US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281

From adi at ddns.com.au Wed Oct 19 23:34:17 2016 From: adi at ddns.com.au (Adi Pircalabu) Date: Thu, 20 Oct 2016 10:34:17 +1100 Subject: v2.2.26 release candidate released In-Reply-To: References: Message-ID: <76849de3-6802-10af-300f-bee86e31f963@ddns.com.au>

Reading the summary below I can't see any remote mention of a possible fix for the crashes from:
http://dovecot.org/pipermail/dovecot/2016-October/105567.html
Just confirming this is the case.

Adi Pircalabu

On 20/10/16 08:01, Timo Sirainen wrote:
> http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz
> http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz.sig
>
> There are quite a lot of changes since v2.2.25. Please try out this RC so we can get a good and stable v2.2.26 out.
> [...]
From aki.tuomi at dovecot.fi Thu Oct 20 04:15:10 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Thu, 20 Oct 2016 07:15:10 +0300 (EEST) Subject: v2.2.26 release candidate released In-Reply-To: <76849de3-6802-10af-300f-bee86e31f963@ddns.com.au> References: <76849de3-6802-10af-300f-bee86e31f963@ddns.com.au> Message-ID: <1165441214.1090.1476936912194@appsuite-dev.open-xchange.com>

We'll take a look.
Aki

> On October 20, 2016 at 2:34 AM Adi Pircalabu wrote:
> Reading the summary below I can't see any remote mention of a possible fix for the crashes from:
> http://dovecot.org/pipermail/dovecot/2016-October/105567.html
> Just confirming this is the case.
>
> Adi Pircalabu
>
> On 20/10/16 08:01, Timo Sirainen wrote:
> > http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz
> > http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz.sig
> >
> > There are quite a lot of changes since v2.2.25. Please try out this RC so we can get a good and stable v2.2.26 out.
> > [...]
From tss at iki.fi Thu Oct 20 08:14:56 2016 From: tss at iki.fi (Timo Sirainen) Date: Thu, 20 Oct 2016 11:14:56 +0300 Subject: v2.2.26 release candidate released In-Reply-To: <76849de3-6802-10af-300f-bee86e31f963@ddns.com.au> References: <76849de3-6802-10af-300f-bee86e31f963@ddns.com.au> Message-ID:

On 20 Oct 2016, at 02:34, Adi Pircalabu wrote:
> Reading the summary below I can't see any remote mention of a possible fix for the crashes from:
> http://dovecot.org/pipermail/dovecot/2016-October/105567.html
> Just confirming this is the case.
Thanks, fixed: https://github.com/dovecot/core/commit/c67082dca6eda730c2bb07cdef242b1b8ee09929 From Ralf.Hildebrandt at charite.de Thu Oct 20 08:25:11 2016 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Thu, 20 Oct 2016 10:25:11 +0200 Subject: Massive LMTP Problems with dovecot In-Reply-To: References: <3syKJD4Vj9z20sts@mail-cbf.charite.de> <3c8fdd70-f345-55d6-1151-f82dc6dfb396@rename-it.nl> <20161017134829.x4qp4opkorg32sd2@charite.de> <20161017140000.mdfm3sp3eqzve35b@charite.de> <20161017140232.o7z4qyu4kdlwwveb@charite.de> <20161017140820.zspfu6herylymp55@charite.de> <20161017143107.yzh5denau3kzj37w@charite.de> Message-ID: <20161020082510.2oa665vmnyfwngqx@charite.de> * Timo Sirainen : > On 17 Oct 2016, at 17:31, Ralf Hildebrandt wrote: > > > > * Ralf Hildebrandt : > > > >>> It seems to loop in sha1_loop & hash_format_loop > >> > >> The problem occurs in both 2.3 and 2.2 (I just updated to 2.3 to check). > > > > I'm seeing the first occurrence of that problem on the 10th of October! > > > > I was using (prior to the 10th): 2.3.0~alpha0-1~auto+371 > > On the 10th I upgraded (16:04) to: 2.3.0~alpha0-1~auto+376 > > > > I'd think the change must have been introduced between 371 and 376 :) > > > > I then went back to 2.2.25-1~auto+49 and the issues went away; > > the issues reappeared with 2.2.25-1~auto+57 > > https://github.com/dovecot/core/commit/9b5fa7fdd9b9f1f61eaddda48036df200fc5e56e should fix this. Yes, fixed. -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel.
+49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From leo at strike.wu.ac.at Thu Oct 20 10:16:04 2016 From: leo at strike.wu.ac.at (Alexander 'Leo' Bergolth) Date: Thu, 20 Oct 2016 12:16:04 +0200 Subject: sieve duplicate locking In-Reply-To: References: <5804F6BD.3090302@strike.wu.ac.at> Message-ID: <58089964.9060400@strike.wu.ac.at> On 10/19/2016 12:51 PM, Stephan Bosch wrote: > Op 17-10-2016 om 18:05 schreef Alexander 'Leo' Bergolth: >> Does the duplicate sieve plugin do any locking to avoid duplicate >> parallel delivery of the same message? [...] >> Is there an easy way to serialize mail delivery using some locking >> inside sieve? > > We've seen this before I think. It would require some changes to the > duplicate tracking system. I'd expect the vacation command to be > affected as well. Would be great! :-) >> Or do I have to serialize per-user dovecot-lda delivery? Any experiences >> with that? > > Very little. I know there is a new lmtp_user_concurrency_limit setting, > but there is not much documentation apart from the commit message: > https://github.com/dovecot/core/commit/42abccd9b2a5a4190bd3c14ec2dcc10d51c0f491 I am currently using dovecot-lda as mailbox_command, so this is not an option right now. > There are possibilities from within the MTA as well I expect. 
As a temporary workaround, I wrapped dovecot-lda with flock to serialize delivery:

-------------------- 8< --------------------
#!/bin/sh
exec /usr/bin/flock "$HOME/Maildir/INBOX" \
  /usr/libexec/dovecot/dovecot-lda -f "$SENDER" -a "$RECIPIENT"
-------------------- 8< --------------------

Cheers, --leo -- e-mail ::: Leo.Bergolth (at) wu.ac.at fax ::: +43-1-31336-906050 location ::: IT-Services | Vienna University of Economics | Austria From aki.tuomi at dovecot.fi Thu Oct 20 10:44:47 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Thu, 20 Oct 2016 13:44:47 +0300 Subject: logging TLS SNI hostname In-Reply-To: <201610181316.06948.arekm@maven.pl> References: <201605300829.17351.arekm@maven.pl> <201610170841.38721.arekm@maven.pl> <201610181316.06948.arekm@maven.pl> Message-ID: <2597c221-4637-eb21-dc04-d84d12744a15@dovecot.fi> On 18.10.2016 14:16, Arkadiusz Miśkiewicz wrote: > On Monday 17 of October 2016, KT Walrus wrote: >>> On Oct 17, 2016, at 2:41 AM, Arkadiusz Miśkiewicz wrote: >>> >>> On Monday 30 of May 2016, Arkadiusz Miśkiewicz wrote: >>>> Is there a way to log SNI hostname used in TLS session? Info is there in >>>> SSL_CTX_set_tlsext_servername_callback, dovecot copies it to >>>> ssl_io->host. >>>> >>>> Unfortunately I don't see it expanded to any variables ( >>>> http://wiki.dovecot.org/Variables ). Please consider this to be a >>>> feature request. >>>> >>>> The goal is to be able to see which hostname client used like: >>>> >>>> May 30 08:21:19 xxx dovecot: pop3-login: Login: user=, >>>> method=PLAIN, rip=1.1.1.1, lip=2.2.2.2, mpid=17135, TLS, >>>> SNI=pop3.somehost.org, session= >>> Dear dovecot team, would be possible to add such variable ^^^^^ ? >>> >>> That would be neat feature because server operator would know what >>> hostname client uses to connect to server (which is really useful in >>> case of many hostnames pointing to single IP).
>> I'd love to be able to use this SNI domain name in the Dovecot IMAP proxy >> for use in the SQL password_query. This would allow the proxy to support >> multiple IMAP server domains each with their own set of users. And, it >> would save me money by using only the IP of the proxy for all the IMAP >> server domains instead of giving each domain a unique IP. > It only needs to be carefully implemented on dovecot side as TLS SNI hostname > is information passed directly by client. > > So some fqdn name validation would need to happen in case if client has > malicious intents. > >> Kevin > Hi! I wonder if this would be of any help? It provides %{local_name} passdb/userdb variable, you can use it for some logging too... https://github.com/dovecot/core/commit/fe791e96fdf796f7d8997ee0515b163dc5eddd72 Aki From arekm at maven.pl Thu Oct 20 12:41:33 2016 From: arekm at maven.pl (Arkadiusz =?utf-8?q?Mi=C5=9Bkiewicz?=) Date: Thu, 20 Oct 2016 14:41:33 +0200 Subject: logging TLS SNI hostname In-Reply-To: <2597c221-4637-eb21-dc04-d84d12744a15@dovecot.fi> References: <201605300829.17351.arekm@maven.pl> <201610181316.06948.arekm@maven.pl> <2597c221-4637-eb21-dc04-d84d12744a15@dovecot.fi> Message-ID: <201610201441.33593.arekm@maven.pl> On Thursday 20 of October 2016, Aki Tuomi wrote: > On 18.10.2016 14:16, Arkadiusz Miśkiewicz wrote: > > On Monday 17 of October 2016, KT Walrus wrote: > >>> On Oct 17, 2016, at 2:41 AM, Arkadiusz Miśkiewicz > >>> wrote: > >>> > >>> On Monday 30 of May 2016, Arkadiusz Miśkiewicz wrote: > >>>> Is there a way to log SNI hostname used in TLS session? Info is there > >>>> in SSL_CTX_set_tlsext_servername_callback, dovecot copies it to > >>>> ssl_io->host. > >>>> > >>>> Unfortunately I don't see it expanded to any variables ( > >>>> http://wiki.dovecot.org/Variables ). Please consider this to be a > >>>> feature request.
> >>>> > >>>> The goal is to be able to see which hostname client used like: > >>>> > >>>> May 30 08:21:19 xxx dovecot: pop3-login: Login: user=, > >>>> method=PLAIN, rip=1.1.1.1, lip=2.2.2.2, mpid=17135, TLS, > >>>> SNI=pop3.somehost.org, session= > >>> > >>> Dear dovecot team, would be possible to add such variable ^^^^^ ? > >>> > >>> That would be neat feature because server operator would know what > >>> hostname client uses to connect to server (which is really usefull in > >>> case of many hostnames pointing to single IP). > >> > >> I?d love to be able to use this SNI domain name in the Dovecot IMAP > >> proxy for use in the SQL password_query. This would allow the proxy to > >> support multiple IMAP server domains each with their own set of users. > >> And, it would save me money by using only the IP of the proxy for all > >> the IMAP server domains instead of giving each domain a unique IP. > > > > It only needs to be carefuly implemented on dovecot side as TLS SNI > > hostname is information passed directly by client. > > > > So some fqdn name validation would need to happen in case if client has > > malicious intents. > > > >> Kevin > > Hi! > > I wonder if this would be of any help? It provides %{local_name} > passdb/userdb variable, you can use it for some logging too... > > https://github.com/dovecot/core/commit/fe791e96fdf796f7d8997ee0515b163dc5ed > dd72 Should it work for such usage, too? login_log_format_elements = user=<%u> method=%m rip=%r lip=%l mpid=%e local_name=%{local_name} %c session=<%{session}> Because I'm not getting local_name logged at all (dovecot -a shows its there). 
> Aki Thanks, -- Arkadiusz Mi?kiewicz, arekm / ( maven.pl | pld-linux.org ) From aki.tuomi at dovecot.fi Thu Oct 20 12:45:56 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Thu, 20 Oct 2016 15:45:56 +0300 Subject: logging TLS SNI hostname In-Reply-To: <201610201441.33593.arekm@maven.pl> References: <201605300829.17351.arekm@maven.pl> <201610181316.06948.arekm@maven.pl> <2597c221-4637-eb21-dc04-d84d12744a15@dovecot.fi> <201610201441.33593.arekm@maven.pl> Message-ID: On 20.10.2016 15:41, Arkadiusz Mi?kiewicz wrote: > On Thursday 20 of October 2016, Aki Tuomi wrote: >> On 18.10.2016 14:16, Arkadiusz Mi?kiewicz wrote: >>> On Monday 17 of October 2016, KT Walrus wrote: >>>>> On Oct 17, 2016, at 2:41 AM, Arkadiusz Mi?kiewicz >>>>> wrote: >>>>> >>>>> On Monday 30 of May 2016, Arkadiusz Mi?kiewicz wrote: >>>>>> Is there a way to log SNI hostname used in TLS session? Info is there >>>>>> in SSL_CTX_set_tlsext_servername_callback, dovecot copies it to >>>>>> ssl_io->host. >>>>>> >>>>>> Unfortunately I don't see it expanded to any variables ( >>>>>> http://wiki.dovecot.org/Variables ). Please consider this to be a >>>>>> feature request. >>>>>> >>>>>> The goal is to be able to see which hostname client used like: >>>>>> >>>>>> May 30 08:21:19 xxx dovecot: pop3-login: Login: user=, >>>>>> method=PLAIN, rip=1.1.1.1, lip=2.2.2.2, mpid=17135, TLS, >>>>>> SNI=pop3.somehost.org, session= >>>>> Dear dovecot team, would be possible to add such variable ^^^^^ ? >>>>> >>>>> That would be neat feature because server operator would know what >>>>> hostname client uses to connect to server (which is really usefull in >>>>> case of many hostnames pointing to single IP). >>>> I?d love to be able to use this SNI domain name in the Dovecot IMAP >>>> proxy for use in the SQL password_query. This would allow the proxy to >>>> support multiple IMAP server domains each with their own set of users. 
>>>> And, it would save me money by using only the IP of the proxy for all >>>> the IMAP server domains instead of giving each domain a unique IP. >>> It only needs to be carefuly implemented on dovecot side as TLS SNI >>> hostname is information passed directly by client. >>> >>> So some fqdn name validation would need to happen in case if client has >>> malicious intents. >>> >>>> Kevin >> Hi! >> >> I wonder if this would be of any help? It provides %{local_name} >> passdb/userdb variable, you can use it for some logging too... >> >> https://github.com/dovecot/core/commit/fe791e96fdf796f7d8997ee0515b163dc5ed >> dd72 > Should it work for such usage, too? > > login_log_format_elements = user=<%u> method=%m rip=%r lip=%l mpid=%e > local_name=%{local_name} %c session=<%{session}> > > Because I'm not getting local_name logged at all (dovecot -a shows its there). > >> Aki > Thanks, How did you try? With openssl you need to use openssl s_client -connect ... -servername something Aki From arekm at maven.pl Thu Oct 20 12:52:17 2016 From: arekm at maven.pl (Arkadiusz =?utf-8?q?Mi=C5=9Bkiewicz?=) Date: Thu, 20 Oct 2016 14:52:17 +0200 Subject: logging TLS SNI hostname In-Reply-To: References: <201605300829.17351.arekm@maven.pl> <201610201441.33593.arekm@maven.pl> Message-ID: <201610201452.17546.arekm@maven.pl> On Thursday 20 of October 2016, Aki Tuomi wrote: > On 20.10.2016 15:41, Arkadiusz Mi?kiewicz wrote: > > On Thursday 20 of October 2016, Aki Tuomi wrote: > >> On 18.10.2016 14:16, Arkadiusz Mi?kiewicz wrote: > >>> On Monday 17 of October 2016, KT Walrus wrote: > >>>>> On Oct 17, 2016, at 2:41 AM, Arkadiusz Mi?kiewicz > >>>>> wrote: > >>>>> > >>>>> On Monday 30 of May 2016, Arkadiusz Mi?kiewicz wrote: > >>>>>> Is there a way to log SNI hostname used in TLS session? Info is > >>>>>> there in SSL_CTX_set_tlsext_servername_callback, dovecot copies it > >>>>>> to ssl_io->host. 
> >>>>>> > >>>>>> Unfortunately I don't see it expanded to any variables ( > >>>>>> http://wiki.dovecot.org/Variables ). Please consider this to be a > >>>>>> feature request. > >>>>>> > >>>>>> The goal is to be able to see which hostname client used like: > >>>>>> > >>>>>> May 30 08:21:19 xxx dovecot: pop3-login: Login: user=, > >>>>>> method=PLAIN, rip=1.1.1.1, lip=2.2.2.2, mpid=17135, TLS, > >>>>>> SNI=pop3.somehost.org, session= > >>>>> > >>>>> Dear dovecot team, would be possible to add such variable ^^^^^ ? > >>>>> > >>>>> That would be neat feature because server operator would know what > >>>>> hostname client uses to connect to server (which is really usefull in > >>>>> case of many hostnames pointing to single IP). > >>>> > >>>> I?d love to be able to use this SNI domain name in the Dovecot IMAP > >>>> proxy for use in the SQL password_query. This would allow the proxy to > >>>> support multiple IMAP server domains each with their own set of users. > >>>> And, it would save me money by using only the IP of the proxy for all > >>>> the IMAP server domains instead of giving each domain a unique IP. > >>> > >>> It only needs to be carefuly implemented on dovecot side as TLS SNI > >>> hostname is information passed directly by client. > >>> > >>> So some fqdn name validation would need to happen in case if client has > >>> malicious intents. > >>> > >>>> Kevin > >> > >> Hi! > >> > >> I wonder if this would be of any help? It provides %{local_name} > >> passdb/userdb variable, you can use it for some logging too... > >> > >> https://github.com/dovecot/core/commit/fe791e96fdf796f7d8997ee0515b163dc > >> 5ed dd72 > > > > Should it work for such usage, too? > > > > login_log_format_elements = user=<%u> method=%m rip=%r lip=%l mpid=%e > > local_name=%{local_name} %c session=<%{session}> > > > > Because I'm not getting local_name logged at all (dovecot -a shows its > > there). > > > >> Aki > > > > Thanks, > > How did you try? 
With openssl you need to use openssl s_client -connect ... -servername something Yes, using it. -servername is mandatory for TLS SNI to work. I'm getting correct certificate (as shown by openssl s_client). Certificate that's configured with local_name, so TLS SNI works fine on client and dovecot side. ps. I'm using 2.2.25 + above %{local_name} patch. Could some other patch be needed for this to work? > Aki -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org ) From aki.tuomi at dovecot.fi Thu Oct 20 13:10:22 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Thu, 20 Oct 2016 16:10:22 +0300 Subject: logging TLS SNI hostname In-Reply-To: <201610201452.17546.arekm@maven.pl> References: <201605300829.17351.arekm@maven.pl> <201610201441.33593.arekm@maven.pl> <201610201452.17546.arekm@maven.pl> Message-ID: On 20.10.2016 15:52, Arkadiusz Miśkiewicz wrote: > > ... -servername something If you want to try out, try applying this patch...

>From 066edb5e5c14a05c90e9ae63f0b76fcfd9c1149e Mon Sep 17 00:00:00 2001
From: Aki Tuomi
Date: Thu, 20 Oct 2016 16:06:27 +0300
Subject: [PATCH] login-common: Include local_name in login_var_expand_table

This way it can be used in login_log_format
---
 src/login-common/client-common.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/src/login-common/client-common.c b/src/login-common/client-common.c
index d0a9c52..5964ec9 100644
--- a/src/login-common/client-common.c
+++ b/src/login-common/client-common.c
@@ -507,6 +507,7 @@ static struct var_expand_table login_var_expand_empty_tab[] = {
 	{ '\0', NULL, "auth_username" },
 	{ '\0', NULL, "auth_domain" },
 	{ '\0', NULL, "listener" },
+	{ '\0', NULL, "local_name" },
 	{ '\0', NULL, NULL }
 };
@@ -581,6 +582,7 @@ get_var_expand_table(struct client *client)
 		tab[24].value = tab[21].value;
 	}
 	tab[25].value = client->listener_name;
+	tab[26].value = client->local_name == NULL ?
"" : client->local_name; return tab; } -- 2.7.4 From jerry at seibercom.net Thu Oct 20 13:18:12 2016 From: jerry at seibercom.net (Jerry) Date: Thu, 20 Oct 2016 09:18:12 -0400 Subject: Backing up and Importing IMAP folders Message-ID: <20161020091812.00006939@seibercom.net> I am running Dovecot with Postfix on a FreeBSD machine. There are problems with the drive and I cannot depend on it. Dovecot saves all mail in IMAP format. I want to back up the mail folders, install a new HD, install the latest FreeBSD OS and then reinstall my programs. Reinstalling Dovecot is simple, but how do I reinstall the IMAP folders? Can Dovecot backup the folders onto a CD and then import them when I reinstall it? My mail is kept under ?/var/mail/vmail?. Should I just back up that entire directory structure and then restore it later? Thanks! -- Jerry From arekm at maven.pl Thu Oct 20 13:21:48 2016 From: arekm at maven.pl (Arkadiusz =?utf-8?q?Mi=C5=9Bkiewicz?=) Date: Thu, 20 Oct 2016 15:21:48 +0200 Subject: logging TLS SNI hostname In-Reply-To: References: <201605300829.17351.arekm@maven.pl> <201610201452.17546.arekm@maven.pl> Message-ID: <201610201521.48404.arekm@maven.pl> On Thursday 20 of October 2016, Aki Tuomi wrote: > On 20.10.2016 15:52, Arkadiusz Mi?kiewicz wrote: > > > ... -servername something > > If you want to try out, try applying this patch... Works, thanks! 
> > From 066edb5e5c14a05c90e9ae63f0b76fcfd9c1149e Mon Sep 17 00:00:00 2001 > From: Aki Tuomi > Date: Thu, 20 Oct 2016 16:06:27 +0300 > Subject: [PATCH] login-common: Include local_name in login_var_expand_table > > This way it can be used in login_log_format -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org ) From aki.tuomi at dovecot.fi Thu Oct 20 13:32:59 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Thu, 20 Oct 2016 16:32:59 +0300 Subject: logging TLS SNI hostname In-Reply-To: <201610201521.48404.arekm@maven.pl> References: <201605300829.17351.arekm@maven.pl> <201610201452.17546.arekm@maven.pl> <201610201521.48404.arekm@maven.pl> Message-ID: On 20.10.2016 16:21, Arkadiusz Miśkiewicz wrote: > On Thursday 20 of October 2016, Aki Tuomi wrote: >> On 20.10.2016 15:52, Arkadiusz Miśkiewicz wrote: >>>> ... -servername something >> If you want to try out, try applying this patch... > Works, thanks! > >> From 066edb5e5c14a05c90e9ae63f0b76fcfd9c1149e Mon Sep 17 00:00:00 2001 >> From: Aki Tuomi >> Date: Thu, 20 Oct 2016 16:06:27 +0300 >> Subject: [PATCH] login-common: Include local_name in login_var_expand_table >> >> This way it can be used in login_log_format Thank you for testing. Aki From flatworm at users.sourceforge.net Thu Oct 20 13:45:33 2016 From: flatworm at users.sourceforge.net (Konstantin Khomoutov) Date: Thu, 20 Oct 2016 16:45:33 +0300 Subject: Backing up and Importing IMAP folders In-Reply-To: <20161020091812.00006939@seibercom.net> References: <20161020091812.00006939@seibercom.net> Message-ID: <20161020164533.67e2d31bb0c7d641c943466d@domain007.com> On Thu, 20 Oct 2016 09:18:12 -0400 Jerry wrote: > I am running Dovecot with Postfix on a FreeBSD machine. There are > problems with the drive and I cannot depend on it. Dovecot saves all > mail in IMAP format. I want to back up the mail folders, install a new > HD, install the latest FreeBSD OS and then reinstall my programs.
> Reinstalling Dovecot is simple, but how do I reinstall the IMAP > folders? Can Dovecot backup the folders onto a CD and then import them > when I reinstall it? My mail is kept under ?/var/mail/vmail?. Should I > just back up that entire directory structure and then restore it > later? That should work (just make sure Dovecot is not running to not have a race between your backup software and the IMAP server and clients). Alternatively you can use `dsync` to perform backup with a native Dovecot tool. It's able to sync mailboxes of any Dovecot user -- including synchronizing a mailbox to an empty (yet) spool. You'll need to do a bit of shell scripting which would spin around calling `doveadm user *` and feeding its output to something like while read user; do \ dest="/var/backup/dovecot/$user"; mkdir -p "$dest" && chown vmail:vmail "$dest" \ && chmod 0755 "$dest" dsync -u "$user" backup "maildir:$dest" \ done Note that you will only need this if you don't want to shut down Dovecot to copy its mail spool out. From aki.tuomi at dovecot.fi Thu Oct 20 13:57:45 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Thu, 20 Oct 2016 16:57:45 +0300 (EEST) Subject: Backing up and Importing IMAP folders In-Reply-To: <20161020164533.67e2d31bb0c7d641c943466d@domain007.com> References: <20161020091812.00006939@seibercom.net> <20161020164533.67e2d31bb0c7d641c943466d@domain007.com> Message-ID: <15355822.1033.1476971866443@appsuite-dev.open-xchange.com> > On October 20, 2016 at 4:45 PM Konstantin Khomoutov wrote: > > > On Thu, 20 Oct 2016 09:18:12 -0400 > Jerry wrote: > > > I am running Dovecot with Postfix on a FreeBSD machine. There are > > problems with the drive and I cannot depend on it. Dovecot saves all > > mail in IMAP format. I want to back up the mail folders, install a new > > HD, install the latest FreeBSD OS and then reinstall my programs. > > Reinstalling Dovecot is simple, but how do I reinstall the IMAP > > folders? 
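[The per-user backup loop quoted above lost its line breaks and a command separator in the archive. Here is a cleaned-up sketch of the same idea; the backup root, the vmail owner, and the 0755 mode are carried over from the quoted message as examples, not verified against any particular installation:]

```shell
# Sketch of the quoted dsync backup loop, reconstructed from the
# flattened text above. Paths and the vmail:vmail owner are examples.
backup_all_users() {
    root=${1:-/var/backup/dovecot}
    doveadm user '*' | while read -r user; do
        dest="$root/$user"
        mkdir -p "$dest" || continue          # skip users whose dir cannot be created
        chown vmail:vmail "$dest" 2>/dev/null # assumes mail files are owned by vmail
        chmod 0755 "$dest"
        dsync -u "$user" backup "maildir:$dest"
    done
}
```

[Unlike a raw file copy, dsync takes the per-user index locks itself, so this can run while Dovecot is up, which is the point made in the reply above.]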
Can Dovecot backup the folders onto a CD and then import them > > when I reinstall it? My mail is kept under ?/var/mail/vmail?. Should I > > just back up that entire directory structure and then restore it > > later? > > That should work (just make sure Dovecot is not running to not have a > race between your backup software and the IMAP server and clients). > > Alternatively you can use `dsync` to perform backup with a native > Dovecot tool. It's able to sync mailboxes of any Dovecot user -- > including synchronizing a mailbox to an empty (yet) spool. > You'll need to do a bit of shell scripting which would spin around > calling `doveadm user *` and feeding its output to something like > > while read user; do \ > dest="/var/backup/dovecot/$user"; > mkdir -p "$dest" && chown vmail:vmail "$dest" \ > && chmod 0755 "$dest" > dsync -u "$user" backup "maildir:$dest" \ > done > > Note that you will only need this if you don't want to shut down > Dovecot to copy its mail spool out. You can also use doveadm backup -A maildir:%u/ Aki From stu at spacehopper.org Thu Oct 20 14:02:46 2016 From: stu at spacehopper.org (Stuart Henderson) Date: Thu, 20 Oct 2016 14:02:46 +0000 (UTC) Subject: v2.2.26 release candidate released References: Message-ID: On 2016-10-19, Timo Sirainen wrote: > http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz > http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz.sig > > There are quite a lot of changes since v2.2.25. Please try out this RC so we can get a good and stable v2.2.26 out. I'm seeing this on OpenBSD: cc -DHAVE_CONFIG_H -I. -I../.. 
-I../../src/lib -I../../src/lib-test -I../../src/lib-settings -I../../src/lib-master -I../../src/lib-ssl-iostream -I/usr/local/include -std=gnu99 -O2 -pipe -Wall -W -Wmissing-prototypes -Wmissing-declarations -Wpointer-arith -Wchar-subscripts -Wformat=2 -Wbad-function-cast -fno-builtin-strftime -Wstrict-aliasing=2 -I/usr/include -MT ldap-search.lo -MD -MP -MF .deps/ldap-search.Tpo -c ldap-search.c -fPIC -DPIC -o .libs/ldap-search.o ldap-search.c: In function 'ldap_search_send': ldap-search.c:99: error: variable 'tv' has initializer but incomplete type Fixed by adding "#include ". From flatworm at users.sourceforge.net Thu Oct 20 14:11:13 2016 From: flatworm at users.sourceforge.net (Konstantin Khomoutov) Date: Thu, 20 Oct 2016 17:11:13 +0300 Subject: Backing up and Importing IMAP folders In-Reply-To: <15355822.1033.1476971866443@appsuite-dev.open-xchange.com> References: <20161020091812.00006939@seibercom.net> <20161020164533.67e2d31bb0c7d641c943466d@domain007.com> <15355822.1033.1476971866443@appsuite-dev.open-xchange.com> Message-ID: <20161020171113.d5e4bf5f8c35b864f44f49b2@domain007.com> On Thu, 20 Oct 2016 16:57:45 +0300 (EEST) Aki Tuomi wrote: [...] > > Alternatively you can use `dsync` to perform backup with a native > > Dovecot tool. It's able to sync mailboxes of any Dovecot user -- > > including synchronizing a mailbox to an empty (yet) spool. > > You'll need to do a bit of shell scripting which would spin around > > calling `doveadm user *` and feeding its output to something like > > > > while read user; do \ > > dest="/var/backup/dovecot/$user"; > > mkdir -p "$dest" && chown vmail:vmail "$dest" \ > > && chmod 0755 "$dest" > > dsync -u "$user" backup "maildir:$dest" \ > > done > > > > Note that you will only need this if you don't want to shut down > > Dovecot to copy its mail spool out. > > You can also use doveadm backup -A maildir:%u/ Looks like `doveadm` of my Dovecot 2.2 (Debian 8.0 Jessie) does not support the "backup" subcommand. 
Is it a past-2.2 addition? From aki.tuomi at dovecot.fi Thu Oct 20 14:38:31 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Thu, 20 Oct 2016 17:38:31 +0300 (EEST) Subject: Backing up and Importing IMAP folders In-Reply-To: <20161020171113.d5e4bf5f8c35b864f44f49b2@domain007.com> References: <20161020091812.00006939@seibercom.net> <20161020164533.67e2d31bb0c7d641c943466d@domain007.com> <15355822.1033.1476971866443@appsuite-dev.open-xchange.com> <20161020171113.d5e4bf5f8c35b864f44f49b2@domain007.com> Message-ID: <1481416248.1085.1476974312057@appsuite-dev.open-xchange.com> > On October 20, 2016 at 5:11 PM Konstantin Khomoutov wrote: > > > On Thu, 20 Oct 2016 16:57:45 +0300 (EEST) > Aki Tuomi wrote: > > [...] > > > Alternatively you can use `dsync` to perform backup with a native > > > Dovecot tool. It's able to sync mailboxes of any Dovecot user -- > > > including synchronizing a mailbox to an empty (yet) spool. > > > You'll need to do a bit of shell scripting which would spin around > > > calling `doveadm user *` and feeding its output to something like > > > > > > while read user; do \ > > > dest="/var/backup/dovecot/$user"; > > > mkdir -p "$dest" && chown vmail:vmail "$dest" \ > > > && chmod 0755 "$dest" > > > dsync -u "$user" backup "maildir:$dest" \ > > > done > > > > > > Note that you will only need this if you don't want to shut down > > > Dovecot to copy its mail spool out. > > > > You can also use doveadm backup -A maildir:%u/ > > Looks like `doveadm` of my Dovecot 2.2 (Debian 8.0 Jessie) does not > support the "backup" subcommand. Is it a past-2.2 addition? We aren't past 2.2 yet. But it should work with dsync -A backup as well I guess. 
Aki From flatworm at users.sourceforge.net Thu Oct 20 14:49:42 2016 From: flatworm at users.sourceforge.net (Konstantin Khomoutov) Date: Thu, 20 Oct 2016 17:49:42 +0300 Subject: Backing up and Importing IMAP folders In-Reply-To: <1481416248.1085.1476974312057@appsuite-dev.open-xchange.com> References: <20161020091812.00006939@seibercom.net> <20161020164533.67e2d31bb0c7d641c943466d@domain007.com> <15355822.1033.1476971866443@appsuite-dev.open-xchange.com> <20161020171113.d5e4bf5f8c35b864f44f49b2@domain007.com> <1481416248.1085.1476974312057@appsuite-dev.open-xchange.com> Message-ID: <20161020174942.8c3ee4c18370e55aad6a87bc@domain007.com> On Thu, 20 Oct 2016 17:38:31 +0300 (EEST) Aki Tuomi wrote: [...] > > > > Alternatively you can use `dsync` to perform backup with a > > > > native Dovecot tool. It's able to sync mailboxes of any > > > > Dovecot user -- including synchronizing a mailbox to an empty > > > > (yet) spool. You'll need to do a bit of shell scripting which > > > > would spin around calling `doveadm user *` and feeding its > > > > output to something like > > > > > > > > while read user; do \ > > > > dest="/var/backup/dovecot/$user"; > > > > mkdir -p "$dest" && chown vmail:vmail "$dest" \ > > > > && chmod 0755 "$dest" > > > > dsync -u "$user" backup "maildir:$dest" \ > > > > done > > > > > > > > Note that you will only need this if you don't want to shut down > > > > Dovecot to copy its mail spool out. > > > > > > You can also use doveadm backup -A maildir:%u/ > > > > Looks like `doveadm` of my Dovecot 2.2 (Debian 8.0 Jessie) does not > > support the "backup" subcommand. Is it a past-2.2 addition? > > We aren't past 2.2 yet. But it should work with dsync -A backup as > well I guess. 
Oh, that's a documentation problem: the manual page doveadm(1) does not mention the word "backup" at all while running the command actually tells it's supported: $ doveadm backup -A doveadm backup [-u |-A] [-S ] [-dfR] [-l ] [-r ] [-m ] [-n | -N] [-x ] [-s ] Good to know, thanks! From laska at kam.mff.cuni.cz Thu Oct 20 15:06:52 2016 From: laska at kam.mff.cuni.cz (Ladislav Laska) Date: Thu, 20 Oct 2016 17:06:52 +0200 Subject: Pigeonhole/sieve possibly corrupting mails In-Reply-To: <20161015185924.gt7i5jykuqu55pfc@wallaby> References: <20161015185924.gt7i5jykuqu55pfc@wallaby> Message-ID: ... Bump. Anything? On Sat, Oct 15, 2016 at 08:59:24PM +0200, Ladislav Laska wrote: > Hi! > > I'm here again with a problem. I'm using dovecot as an IMAP server and > LDA, filtering mail via sieve. However, few times a day I get the > following error on server and my client (mutt) gets disconnected. > > Oct 15 20:20:29 ibex dovecot: imap(krakonos): Error: Corrupted index cache file /home/krakonos/.mbox/.imap/INBOX/dovecot.index.cache: Broken physical s ize for mail UID 149418 in mailbox INBOX: read(/home/krakonos/.mbox/inbox) failed: Cached message size smaller than expected (3793 < 8065, box=INBOX, UID=149418, cached > Message-Id=<88deda0d-86f6-6115-af10-60ac06bb2d22 at rename-it.nl>) Oct 15 20:20:29 ibex dovecot: imap(krakonos): Error: read(/home/krakonos/.mbox/inbox) failed: Cached message size smaller than expected (3793 < 8065, box=INBOX, UID=149418, cached > Message-Id=<88deda0d-86f6-6115-af10-60ac06bb2d22 at rename-it.nl>) (FETCH BODY[] for mailbox INBOX UID 149418) > Oct 15 20:20:29 ibex dovecot: imap(krakonos): FETCH read() failed in=110326 out=5115197 > > This is on a new message (attached), and this error happens on some > messages when first opened. Once I reconnect, the message always opens > fine, and no old message ever causes problem. 
> > I also noticed this error, which is possibly connected: > > Oct 15 20:15:12 ibex dovecot: lda(krakonos): Error: Next message > unexpectedly corrupted in mbox file /home/krakonos/.mbox/inbox at > 546862809 > > The filesystem is ext4, and there are no errors in syslog or problems > with any other services. > > I also don't access the mbox locally, and only dovecot manipulates the > mbox (via imap and mailbox_command = /usr/libexec/dovecot/deliver) > > The postfix version is 2.2.25. I'm attaching dovecot -n and the > offending message (after it's been corrected). I'd rather not publish my > sieve file, but will send it privately. > > The offending message also contains another message I received at > approximately the same time. > > Any hints on what could be wrong? > > > > -- > S pozdravem Ladislav "Krakonoš" Láska http://www.krakonos.org/ > # 2.2.25 (7be1766): /etc/dovecot/dovecot.conf > # Pigeonhole version 0.4.15 (97b3da0) > # OS: Linux 4.0.4-gentoo x86_64 Gentoo Base System release 2.2 > auth_username_format = %n > hostname = ibex.krakonos.org > login_greeting = Dovecot at krakonos.org ready.
> mail_debug = yes > mail_location = mbox:~/.mbox > namespace inbox { > inbox = yes > location = > mailbox Drafts { > special_use = \Drafts > } > mailbox Junk { > special_use = \Junk > } > mailbox Sent { > special_use = \Sent > } > mailbox "Sent Messages" { > special_use = \Sent > } > mailbox Trash { > special_use = \Trash > } > prefix = > } > passdb { > args = * > driver = pam > } > passdb { > args = scheme=CRYPT username_format=%u /etc/dovecot/users > driver = passwd-file > } > plugin { > sieve = file:~/sieve;active=~/.dovecot.sieve > sieve_execute_bin_dir = /usr/lib/dovecot/sieve-execute > sieve_execute_socket_dir = sieve-execute > sieve_extensions = +vnd.dovecot.filter +editheader > sieve_filter_bin_dir = /usr/lib/dovecot/sieve-filter > sieve_filter_socket_dir = sieve-filter > sieve_pipe_bin_dir = /usr/lib/dovecot/sieve-pipe > sieve_pipe_socket_dir = sieve-pipe > sieve_plugins = sieve_extprograms > } > postmaster_address = postmaster at krakonos.org > protocols = imap > service auth { > unix_listener /var/spool/postfix/private/auth { > mode = 0666 > } > } > ssl_cert = ssl_key = userdb { > driver = passwd > } > protocol lda { > mail_plugins = sieve > } -- S pozdravem Ladislav "Krakonoš" Láska http://www.krakonos.org/ From tss at iki.fi Thu Oct 20 15:50:25 2016 From: tss at iki.fi (Timo Sirainen) Date: Thu, 20 Oct 2016 18:50:25 +0300 Subject: Pigeonhole/sieve possibly corrupting mails In-Reply-To: <20161015185924.gt7i5jykuqu55pfc@wallaby> References: <20161015185924.gt7i5jykuqu55pfc@wallaby> Message-ID: On 15 Oct 2016, at 21:59, Ladislav Laska wrote: > > Hi! > > I'm here again with a problem. I'm using dovecot as an IMAP server and > LDA, filtering mail via sieve. However, few times a day I get the > following error on server and my client (mutt) gets disconnected.
> > Oct 15 20:20:29 ibex dovecot: imap(krakonos): Error: Corrupted index cache file /home/krakonos/.mbox/.imap/INBOX/dovecot.index.cache: Broken physical size for mail UID 149418 in mailbox INBOX: read(/home/krakonos/.mbox/inbox) failed: Cached message size smaller than expected (3793 < 8065, box=INBOX, UID=149418, cached .. > Oct 15 20:15:12 ibex dovecot: lda(krakonos): Error: Next message > unexpectedly corrupted in mbox file /home/krakonos/.mbox/inbox at > 546862809 Somehow Dovecot thinks that the mbox file changed under it.. > The filesystem is ext4, and there are no errors in syslog or problems > with any other services. > > I also don't access the mbox locally, and only dovecot manipulates the > mbox (via imap and mailbox_command = /usr/libexec/dovecot/deliver) So it shouldn't have broken. > The postfix version is 2.2.25. I'm attaching dovecot -n and the > offending message (after it's been corrected). I'd rather not publish my > sieve file, but will send it privately. > > The offending message also contains another message I received at > approximately the same time. > > Any hints on what could be wrong? These mbox corruptions are usually pretty difficult to reproduce (= impossible to fix without the ability to reproduce). You could try to see if you can (reliably) reproduce it in some way, e.g.: 1. Create a test folder: doveadm mailbox create -u krakonos testbox 2. Use some combination of: * Save mail(s) to test folder: cat some-mails | doveadm save -u krakonos testbox * Try to read mails from test folder: doveadm fetch -u krakonos text mailbox testbox > /dev/null The fetch should print similar errors to stderr in some way. I attempted to reproduce this way with your msg-error.mbox, but it worked ok.
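Timo's reproduction steps can be scripted as a stress loop that interleaves saves and fetches on the throwaway folder. The sketch below is illustrative, not from the thread: it defaults to a dry run (DOVEADM is "echo doveadm", so it only prints the commands and can be run on any machine), and the test message it feeds in is made up. On the affected server one would run it with DOVEADM=doveadm.

```shell
#!/bin/sh
# Stress sketch of the reproduction recipe above.  Dry-run by default:
# DOVEADM defaults to "echo doveadm", so the commands are printed, not
# executed.  Set DOVEADM=doveadm (and MAILUSER) on the real server.
DOVEADM="${DOVEADM:-echo doveadm}"
MAILUSER="${MAILUSER:-krakonos}"
MSG=/tmp/repro-msg.mbox

# a minimal one-message mbox to feed into "doveadm save"
printf 'From test@example.org Thu Oct 20 00:00:00 2016\nSubject: repro\n\ntest body\n' > "$MSG"

$DOVEADM mailbox create -u "$MAILUSER" testbox
for i in 1 2 3 4 5; do
    # background the save so it races the fetch, mimicking delivery
    # happening while a reader has the mbox open
    $DOVEADM save -u "$MAILUSER" testbox < "$MSG" &
    $DOVEADM fetch -u "$MAILUSER" text mailbox testbox > /dev/null
done
wait
```

If the locking race is real, the fetch side should eventually print "Corrupted index cache file" errors like the ones quoted above.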
From laska at kam.mff.cuni.cz Thu Oct 20 17:03:33 2016 From: laska at kam.mff.cuni.cz (Ladislav Laska) Date: Thu, 20 Oct 2016 19:03:33 +0200 Subject: Pigeonhole/sieve possibly corrupting mails In-Reply-To: References: <20161015185924.gt7i5jykuqu55pfc@wallaby> Message-ID: Hi! > Somehow Dovecot thinks that the mbox file changed under it.. Yes. And it's probably right, but I wonder what could have changed it. I looked into inotify, and it seems there is no way to watch a file and get the names/PIDs of the processes accessing it. > These mbox corruptions are usually pretty difficult to reproduce (= impossible to fix without ability to reproduce). You could try if you can (reliably) reproduce it in some way, e.g.: I can reproduce them multiple times a day :-). But not on command, and probably not on another machine, I know... > 1. Create a test folder: doveadm mailbox create -u krakonos testbox > 2. Use some combination of: > * Save mail(s) to test folder: cat some-mails | doveadm save -u krakonos testbox > * Try to read mails from test folder: doveadm fetch -u krakonos text mailbox testbox > /dev/null Well, that's something. doveadm-save doesn't have a manpage, and there is nothing about it on the wiki. Is it something new? Also, it doesn't seem to work. > > The fetch should print similar errors to stderr in some way. I attempted to reproduce this way with your msg-error.mbox, but it worked ok. > Thinking about it, it might be that I'm fetching the message just as dovecot delivers another one. Is it possible that fcntl locking is just not working? I'm running a slightly older kernel, if that could play a role in it. I'll try to enable dotlock even on read and see if the problem persists. -- S pozdravem Ladislav "Krakono?" 
L?ska http://www.krakonos.org/ From aki.tuomi at dovecot.fi Thu Oct 20 17:20:51 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Thu, 20 Oct 2016 20:20:51 +0300 (EEST) Subject: Pigeonhole/sieve possibly corrupting mails In-Reply-To: References: <20161015185924.gt7i5jykuqu55pfc@wallaby> Message-ID: <1807652352.1279.1476984052294@appsuite-dev.open-xchange.com> > On October 20, 2016 at 8:03 PM Ladislav Laska wrote: > > > Hi! > > > Somehow Dovecot thinks that the mbox file changed under it.. > > Yes. And it's probably right, but I wonder what could have changed it. I > looked around inotify and it seems there is no way to let a file being > watched and get program names/pids of processes accessing it. > > > These mbox corruptions are usually pretty difficult to reproduce (= impossible to fix without ability to reproduce). You could try if you can (reliably) reproduce it in some way, e.g.: > > I can reproduce them multiple times a day :-). But not on command, and > probably not on another machine, I know... > > > 1. Create a test folder: doveadm mailbox create -u krakonos testbox > > 2. Use some combination of: > > * Save mail(s) to test folder: cat some-mails | doveadm save -u krakonos testbox > > * Try to read mails from test folder: doveadm fetch -u krakonos text mailbox testbox > /dev/null > > Well, that's something. doveadm-save doesn't have a manpage, and there > is nothing about it on wiki. Is it something new? Also, it doesn't seem > to work. > > > > > The fetch should print similar errors to stderr in some way. I attempted to reproduce this way with your msg-error.mbox, but it worked ok. > > > > Thinking about it, it might be that I'm fetching the message just as > dovecot delivers another one. > > Is it possible that fcntl locking is just not working? I'm running a bit > older kernel, if that could play a role in it. I'll try to enable > dotlock even on read and see if the problem persists. > > -- > S pozdravem Ladislav "Krakono?" 
L?ska http://www.krakonos.org/ You could try running lsof in hopes of catching it. Might be rather difficult though. Aki From laska at kam.mff.cuni.cz Thu Oct 20 17:31:20 2016 From: laska at kam.mff.cuni.cz (Ladislav Laska) Date: Thu, 20 Oct 2016 19:31:20 +0200 Subject: Pigeonhole/sieve possibly corrupting mails In-Reply-To: <1807652352.1279.1476984052294@appsuite-dev.open-xchange.com> References: <20161015185924.gt7i5jykuqu55pfc@wallaby> <1807652352.1279.1476984052294@appsuite-dev.open-xchange.com> Message-ID: Well, I tried. for i in {1..50}; do echo x | mail -s test krakonos+test at krakonos.org; done and running lsof. Didn't catch a single lockfile. lsof runs about 1s, so there is little chance of catching it. However, I was reading the mails while they were being delivered, and didn't trigger the problem. I'll let it happen once more, so I know it's still reproducible and add dotfile locks even for read, and see if it helps. Or is it possible to enable lock debugging, or perhaps run it completely synchronized (I don't have a lot of traffic, so a little slowdown isn't an issue). On Thu, Oct 20, 2016 at 08:20:51PM +0300, Aki Tuomi wrote: > > > On October 20, 2016 at 8:03 PM Ladislav Laska wrote: > > > > > > Hi! > > > > > Somehow Dovecot thinks that the mbox file changed under it.. > > > > Yes. And it's probably right, but I wonder what could have changed it. I > > looked around inotify and it seems there is no way to let a file being > > watched and get program names/pids of processes accessing it. > > > > > These mbox corruptions are usually pretty difficult to reproduce (= impossible to fix without ability to reproduce). You could try if you can (reliably) reproduce it in some way, e.g.: > > > > I can reproduce them multiple times a day :-). But not on command, and > > probably not on another machine, I know... > > > > > 1. Create a test folder: doveadm mailbox create -u krakonos testbox > > > 2. 
Use some combination of: > > > * Save mail(s) to test folder: cat some-mails | doveadm save -u krakonos testbox > > > * Try to read mails from test folder: doveadm fetch -u krakonos text mailbox testbox > /dev/null > > > > Well, that's something. doveadm-save doesn't have a manpage, and there > > is nothing about it on wiki. Is it something new? Also, it doesn't seem > > to work. > > > > > > > > The fetch should print similar errors to stderr in some way. I attempted to reproduce this way with your msg-error.mbox, but it worked ok. > > > > > > > Thinking about it, it might be that I'm fetching the message just as > > dovecot delivers another one. > > > > Is it possible that fcntl locking is just not working? I'm running a bit > > older kernel, if that could play a role in it. I'll try to enable > > dotlock even on read and see if the problem persists. > > > > -- > > S pozdravem Ladislav "Krakono?" L?ska http://www.krakonos.org/ > > You could try running lsof in hopes of catching it. Might be rather difficult though. > > Aki -- S pozdravem Ladislav "Krakono?" L?ska http://www.krakonos.org/ From flatworm at users.sourceforge.net Thu Oct 20 17:36:35 2016 From: flatworm at users.sourceforge.net (Konstantin Khomoutov) Date: Thu, 20 Oct 2016 20:36:35 +0300 Subject: Backing up and Importing IMAP folders In-Reply-To: <15355822.1033.1476971866443@appsuite-dev.open-xchange.com> References: <20161020091812.00006939@seibercom.net> <20161020164533.67e2d31bb0c7d641c943466d@domain007.com> <15355822.1033.1476971866443@appsuite-dev.open-xchange.com> Message-ID: <20161020203635.a894f7324bfc0354b581f87e@domain007.com> On Thu, 20 Oct 2016 16:57:45 +0300 (EEST) Aki Tuomi wrote: [...] > > Alternatively you can use `dsync` to perform backup with a native > > Dovecot tool. It's able to sync mailboxes of any Dovecot user -- > > including synchronizing a mailbox to an empty (yet) spool. 
> > You'll need to do a bit of shell scripting which would spin around > > calling `doveadm user *` and feeding its output to something like > > > > while read user; do \ > > dest="/var/backup/dovecot/$user"; > > mkdir -p "$dest" && chown vmail:vmail "$dest" \ > > && chmod 0755 "$dest" > > dsync -u "$user" backup "maildir:$dest" \ > > done > > > > Note that you will only need this if you don't want to shut down > > Dovecot to copy its mail spool out. > > You can also use doveadm backup -A maildir:%u/ Could you please elaborate? I have a typical "virtual users" setup where I do have mail_home = /var/local/mail/%Ln mail_location = maildir:~/mail and everything is stored with uid=vmail / gid=vmail (much like described in the wiki, that is). I'd like to use a single call to `doveadm backup -A ...` to back up the whole /var/local/mail/* to another location (say, /var/backups/dovecot/) so that it has the same structure, just synchronized with the spool. (The purpose is to then back up the replica off-site.) I tried to call doveadm backup -A maildir:/var/backups/dovecot/%u and it created a directory "/var/backups/dovecot/%u" (with literal "%u", that is), created what appeared to be a single mailbox structure under it and after a while scared the heck out of me with a series of error messages reading dsync(user1): Error: Mailbox INBOX sync: mailbox_delete failed: INBOX can't be deleted. dsync(user2): Error: Mailbox INBOX sync: mailbox_delete failed: INBOX can't be deleted. ... for each existing user. It appears that it luckily failed to delete anything in the source directory (though I have no idea what it actually tried to do). Reading doveadm-backup(1) multiple times still failed to shed light on how to actually back up the whole maildir hierarchy for all existing users. So, the question: how should I really go about backing up the whole mailbox hierarchy in the case of virtual users?
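For reference, the per-user loop quoted above can be written as below, with its shell syntax fixed (the quoted version has a stray line continuation before `done`, which would make `done` an argument to dsync). This is a sketch, not a tested backup procedure: it defaults to a dry run that prints the doveadm commands and uses a stub user list, so the user names and the backup root are illustrative assumptions; set DRY_RUN=0 on a real server.

```shell
#!/bin/sh
# Per-user backup sketch for a virtual-user setup.  Dry-run by default:
# commands are echoed instead of executed, and a stub user list replaces
# "doveadm user '*'".  Set DRY_RUN=0 to actually run doveadm.
DRY_RUN="${DRY_RUN:-1}"
BACKUP_ROOT="${BACKUP_ROOT:-/tmp/dovecot-backup-demo}"

run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

list_users() {
    if [ "$DRY_RUN" = 1 ]; then
        printf 'user1@example.org\nuser2@example.org\n'   # stub list
    else
        doveadm user '*'                                  # real enumeration
    fi
}

list_users | while read -r user; do
    dest="$BACKUP_ROOT/$user"
    mkdir -p "$dest"                  # harmless even in a dry run
    run chown vmail:vmail "$dest"
    run chmod 0700 "$dest"
    run doveadm backup -u "$user" "maildir:$dest"
done
```

Looping with `-u` per user also sidesteps the literal "%u" directory seen above, since the destination path is expanded by the shell for each user rather than handed to `doveadm backup -A` as a template.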
From matthew.broadhead at nbmlaw.co.uk Thu Oct 20 17:38:55 2016 From: matthew.broadhead at nbmlaw.co.uk (Matthew Broadhead) Date: Thu, 20 Oct 2016 19:38:55 +0200 Subject: sieve sending vacation message from vmail@ns1.domain.tld In-Reply-To: <344d3d36-b905-5a90-e0ea-17d556076838@nbmlaw.co.uk> References: <71b362e8-3a69-076d-6376-2f3bbd39d0eb@nbmlaw.co.uk> <94941225-09d0-1440-1733-3884cc6dcd67@rename-it.nl> <7cdadba3-fd03-7d8c-1235-b428018a081c@nbmlaw.co.uk> <55712b3a-4812-f0a6-c9f9-59efcdac79f7@rename-it.nl> <8260ce16-bc94-e3a9-13d1-f1204e6ae525@rename-it.nl> <344d3d36-b905-5a90-e0ea-17d556076838@nbmlaw.co.uk> Message-ID: <9b47cb74-0aa7-4851-11f0-5a367341a63b@nbmlaw.co.uk> do i need to provide more information? On 19/10/2016 14:49, Matthew Broadhead wrote: > /var/log/maillog showed this > Oct 19 13:25:41 ns1 postfix/smtpd[1298]: 7599A2C19C6: > client=unknown[127.0.0.1] > Oct 19 13:25:41 ns1 postfix/cleanup[1085]: 7599A2C19C6: > message-id= > Oct 19 13:25:41 ns1 postfix/qmgr[1059]: 7599A2C19C6: > from=, size=3190, nrcpt=1 (queue active) > Oct 19 13:25:41 ns1 amavis[32367]: (32367-17) Passed CLEAN > {RelayedInternal}, ORIGINATING LOCAL [80.30.255.180]:54566 > [80.30.255.180] -> > , Queue-ID: BFFA62C1965, Message-ID: > , mail_id: > TlJQ9xQhWjQk, Hits: -2.9, size: 2235, queued_as: 7599A2C19C6, > dkim_new=foo:nbmlaw.co.uk, 531 ms > Oct 19 13:25:41 ns1 postfix/smtp[1135]: BFFA62C1965: > to=, relay=127.0.0.1[127.0.0.1]:10026, > delay=0.76, delays=0.22/0/0/0.53, dsn=2.0.0, status=sent (250 2.0.0 > from MTA(smtp:[127.0.0.1]:10027): 250 2.0.0 Ok: queued as 7599A2C19C6) > Oct 19 13:25:41 ns1 postfix/qmgr[1059]: BFFA62C1965: removed > Oct 19 13:25:41 ns1 postfix/smtpd[1114]: connect from > ns1.nbmlaw.co.uk[217.174.253.19] > Oct 19 13:25:41 ns1 postfix/smtpd[1114]: NOQUEUE: filter: RCPT from > ns1.nbmlaw.co.uk[217.174.253.19]: : Sender > address triggers FILTER smtp-amavis:[127.0.0.1]:10026; > from= to= > proto=SMTP helo= > Oct 19 13:25:41 ns1 postfix/smtpd[1114]: 8A03F2C1965: > 
client=ns1.nbmlaw.co.uk[217.174.253.19] > Oct 19 13:25:41 ns1 postfix/cleanup[1085]: 8A03F2C1965: > message-id= > Oct 19 13:25:41 ns1 opendmarc[2430]: implicit authentication service: > ns1.nbmlaw.co.uk > Oct 19 13:25:41 ns1 opendmarc[2430]: 8A03F2C1965: ns1.nbmlaw.co.uk fail > Oct 19 13:25:41 ns1 postfix/qmgr[1059]: 8A03F2C1965: > from=, size=1077, nrcpt=1 (queue active) > Oct 19 13:25:41 ns1 postfix/smtpd[1114]: disconnect from > ns1.nbmlaw.co.uk[217.174.253.19] > Oct 19 13:25:41 ns1 sSMTP[1895]: Sent mail for vmail at ns1.nbmlaw.co.uk > (221 2.0.0 Bye) uid=996 username=vmail outbytes=971 > Oct 19 13:25:41 ns1 postfix/smtpd[1898]: connect from unknown[127.0.0.1] > Oct 19 13:25:41 ns1 postfix/pipe[1162]: 7599A2C19C6: > to=, relay=dovecot, delay=0.46, > delays=0/0/0/0.45, dsn=2.0.0, status=sent (delivered via dovecot service) > Oct 19 13:25:41 ns1 postfix/qmgr[1059]: 7599A2C19C6: removed > Oct 19 13:25:41 ns1 postfix/smtpd[1898]: E53472C19C6: > client=unknown[127.0.0.1] > Oct 19 13:25:41 ns1 postfix/cleanup[1085]: E53472C19C6: > message-id= > Oct 19 13:25:41 ns1 postfix/qmgr[1059]: E53472C19C6: > from=, size=1619, nrcpt=1 (queue active) > Oct 19 13:25:41 ns1 amavis[1885]: (01885-01) Passed CLEAN > {RelayedInternal}, ORIGINATING LOCAL [217.174.253.19]:40960 > [217.174.253.19] -> > , Queue-ID: 8A03F2C1965, Message-ID: > , mail_id: > mOMO97yjVqjM, Hits: -2.211, size: 1301, queued_as: E53472C19C6, 296 ms > Oct 19 13:25:41 ns1 postfix/smtp[1217]: 8A03F2C1965: > to=, relay=127.0.0.1[127.0.0.1]:10026, > delay=0.38, delays=0.08/0/0/0.29, dsn=2.0.0, status=sent (250 2.0.0 > from MTA(smtp:[127.0.0.1]:10027): 250 2.0.0 Ok: queued as E53472C19C6) > Oct 19 13:25:41 ns1 postfix/qmgr[1059]: 8A03F2C1965: removed > Oct 19 13:25:42 ns1 postfix/pipe[1303]: E53472C19C6: > to=, relay=dovecot, delay=0.14, > delays=0/0/0/0.14, dsn=2.0.0, status=sent (delivered via dovecot service) > Oct 19 13:25:42 ns1 postfix/qmgr[1059]: E53472C19C6: removed > > On 19/10/2016 13:54, Stephan Bosch wrote: 
>> >> >> Op 19-10-2016 om 13:47 schreef Matthew Broadhead: >>> i am not 100% sure how to give you the information you require. >>> >>> my current setup in /etc/postfix/master.cf is >>> flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d >>> ${recipient} >>> so recipient would presumably be user at domain.tld? or do you want >>> the real email address of one of our users? is there some way i can >>> output this information directly e.g. in logs? >> >> I am no Postfix expert. I just need to know which values are being >> passed to dovecot-lda with what options. I'd assume Postfix allows >> logging the command line or at least the values of these variables. >> >>> the incoming email message could be anything? again i can run an >>> example directly if you can advise the best way to do this >> >> As long as the problem occurs with this message. >> >> BTW, it would also be helpful to have the Dovecot logs from this >> delivery, with mail_debug configured to "yes". >> >> Regards, >> >> Stephan. >> >>> >>> On 19/10/2016 12:54, Stephan Bosch wrote: >>>> Also, please provide an example scenario; i.e., for one problematic >>>> delivery provide: >>>> >>>> - The values of the variables substituted in the dovecot-lda >>>> command line; i.e., provide that command line. >>>> - The incoming e-mail message. >>>> >>>> Regards, >>>> >>>> Stephan. >>>> >>>> Op 19-10-2016 om 12:43 schreef Matthew Broadhead: >>>>> dovecot is configured by sentora control panel to a certain >>>>> extent. 
if you want those configs i can send them as well >>>>> >>>>> dovecot -n >>>>> >>>>> debug_log_path = /var/log/dovecot-debug.log >>>>> dict { >>>>> quotadict = >>>>> mysql:/etc/sentora/configs/dovecot2/dovecot-dict-quota.conf >>>>> } >>>>> disable_plaintext_auth = no >>>>> first_valid_gid = 12 >>>>> first_valid_uid = 996 >>>>> info_log_path = /var/log/dovecot-info.log >>>>> lda_mailbox_autocreate = yes >>>>> lda_mailbox_autosubscribe = yes >>>>> listen = * >>>>> lmtp_save_to_detail_mailbox = yes >>>>> log_path = /var/log/dovecot.log >>>>> log_timestamp = %Y-%m-%d %H:%M:%S >>>>> mail_fsync = never >>>>> mail_location = maildir:/var/sentora/vmail/%d/%n >>>>> managesieve_notify_capability = mailto >>>>> managesieve_sieve_capability = fileinto reject envelope >>>>> encoded-character vacation subaddress comparator-i;ascii-numeric >>>>> relational regex imap4flags copy include variables body enotify >>>>> environment mailbox date ihave >>>>> passdb { >>>>> args = /etc/sentora/configs/dovecot2/dovecot-mysql.conf >>>>> driver = sql >>>>> } >>>>> plugin { >>>>> acl = vfile:/etc/dovecot/acls >>>>> quota = maildir:User quota >>>>> sieve = ~/dovecot.sieve >>>>> sieve_dir = ~/sieve >>>>> sieve_global_dir = /var/sentora/sieve/ >>>>> sieve_global_path = /var/sentora/sieve/globalfilter.sieve >>>>> sieve_max_script_size = 1M >>>>> sieve_vacation_send_from_recipient = yes >>>>> trash = /etc/sentora/configs/dovecot2/dovecot-trash.conf >>>>> } >>>>> protocols = imap pop3 lmtp sieve >>>>> service auth { >>>>> unix_listener /var/spool/postfix/private/auth { >>>>> group = postfix >>>>> mode = 0666 >>>>> user = postfix >>>>> } >>>>> unix_listener auth-userdb { >>>>> group = mail >>>>> mode = 0666 >>>>> user = vmail >>>>> } >>>>> } >>>>> service dict { >>>>> unix_listener dict { >>>>> group = mail >>>>> mode = 0666 >>>>> user = vmail >>>>> } >>>>> } >>>>> service imap-login { >>>>> inet_listener imap { >>>>> port = 143 >>>>> } >>>>> process_limit = 500 >>>>> process_min_avail = 2 >>>>> } 
>>>>> service imap { >>>>> vsz_limit = 256 M >>>>> } >>>>> service managesieve-login { >>>>> inet_listener sieve { >>>>> port = 4190 >>>>> } >>>>> process_min_avail = 0 >>>>> service_count = 1 >>>>> vsz_limit = 64 M >>>>> } >>>>> service pop3-login { >>>>> inet_listener pop3 { >>>>> port = 110 >>>>> } >>>>> } >>>>> ssl_cert = >>>> ssl_key = >>>> ssl_protocols = !SSLv2 !SSLv3 >>>>> userdb { >>>>> driver = prefetch >>>>> } >>>>> userdb { >>>>> args = /etc/sentora/configs/dovecot2/dovecot-mysql.conf >>>>> driver = sql >>>>> } >>>>> protocol lda { >>>>> mail_fsync = optimized >>>>> mail_plugins = quota sieve >>>>> postmaster_address = postmaster at ns1.nbmlaw.co.uk >>>>> } >>>>> protocol imap { >>>>> imap_client_workarounds = delay-newmail >>>>> mail_fsync = optimized >>>>> mail_max_userip_connections = 60 >>>>> mail_plugins = quota imap_quota trash >>>>> } >>>>> protocol lmtp { >>>>> mail_plugins = quota sieve >>>>> } >>>>> protocol pop3 { >>>>> mail_plugins = quota >>>>> pop3_client_workarounds = outlook-no-nuls oe-ns-eoh >>>>> pop3_uidl_format = %08Xu%08Xv >>>>> } >>>>> protocol sieve { >>>>> managesieve_implementation_string = Dovecot Pigeonhole >>>>> managesieve_max_compile_errors = 5 >>>>> managesieve_max_line_length = 65536 >>>>> } >>>>> >>>>> managesieve.sieve >>>>> >>>>> require ["fileinto","vacation"]; >>>>> # rule:[vacation] >>>>> if true >>>>> { >>>>> vacation :days 1 :subject "Vacation subject" text: >>>>> i am currently out of the office >>>>> >>>>> trying some line breaks >>>>> >>>>> ...zzz >>>>> . >>>>> ; >>>>> } >>>>> >>>>> On 19/10/2016 12:29, Stephan Bosch wrote: >>>>>> Could you send your configuration (output from `dovecot -n`)? >>>>>> >>>>>> Also, please provide an example scenario; i.e., for one >>>>>> problematic delivery provide: >>>>>> >>>>>> - The values of the variables substituted below. >>>>>> >>>>>> - The incoming e-mail message. >>>>>> >>>>>> - The Sieve script (or at least that vacation command). 
>>>>>> >>>>>> Regards, >>>>>> >>>>>> >>>>>> Stephan. >>>>>> >>>>>> Op 19-10-2016 om 11:42 schreef Matthew Broadhead: >>>>>>> hi, does anyone have any ideas about this issue? i have not had >>>>>>> any response yet >>>>>>> >>>>>>> i tried changing /etc/postfix/master.cf line: >>>>>>> dovecot unix - n n - - pipe >>>>>>> flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d >>>>>>> ${recipient} >>>>>>> >>>>>>> to >>>>>>> flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/dovecot-lda >>>>>>> -f ${sender} -d ${user}@${nexthop} -a ${original_recipient} >>>>>>> >>>>>>> and >>>>>>> -d ${user}@${domain} -a {recipient} -f ${sender} -m ${extension} >>>>>>> >>>>>>> but it didn't work >>>>>>> >>>>>>> On 12/10/2016 13:57, Matthew Broadhead wrote: >>>>>>>> I have a server running >>>>>>>> centos-release-7-2.1511.el7.centos.2.10.x86_64 with dovecot >>>>>>>> version 2.2.10. I am also using roundcube for webmail. when a >>>>>>>> vacation filter (reply with message) is created in roundcube it >>>>>>>> adds a rule to managesieve.sieve in the user's mailbox. >>>>>>>> everything works fine except the reply comes from >>>>>>>> vmail at ns1.domain.tld instead of user at domain.tld. ns1.domain.tld >>>>>>>> is the fully qualified name of the server. >>>>>>>> >>>>>>>> it used to work fine on my old CentOS 6 server so I am not sure >>>>>>>> what has changed. Can anyone point me in the direction of >>>>>>>> where I can configure this behaviour? >>>>> >>>> >> From gerben.wierda at rna.nl Thu Oct 20 19:55:28 2016 From: gerben.wierda at rna.nl (Gerben Wierda) Date: Thu, 20 Oct 2016 21:55:28 +0200 Subject: Migrating users from a 2.0.19 to a 2.2.24 installation Message-ID: <4826F752-1241-4255-A1FF-F7B4B1D1240F@rna.nl> Hello, I am currently still running an older dovecot (2.0.19apple1 on Mac OS X 10.8.5) and I want to migrate my users to a new server (macOS 10.12 with Server 5, which contains dovecot 2.2.24 (a82c823)). 
Basically, I want to create a new server installation on the new server so I don't bring any junk over (new user accounts, with the same uid/gid (still need to figure that one out)), but after I have done that I need to move the data over from the old installation to the new. Has anything changed in the formats between 2.0 and 2.2 that will stop me from doing this? Thanks, G From ebroch at whitehorsetc.com Thu Oct 20 20:19:30 2016 From: ebroch at whitehorsetc.com (Eric Broch) Date: Thu, 20 Oct 2016 14:19:30 -0600 Subject: v2.2.26 release candidate released In-Reply-To: References: Message-ID: <3ed8aa4e-dfd0-baa1-87a3-98c617add21c@whitehorsetc.com> Compiled on CentOS 6 and CentOS 7 successfully. On 10/19/2016 3:01 PM, Timo Sirainen wrote: > http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz > http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz.sig > > There are quite a lot of changes since v2.2.25. Please try out this RC so we can get a good and stable v2.2.26 out. > > * master: Removed hardcoded 511 backlog limit for listen(). The kernel > should limit this as needed. > * doveadm import: Source user is now initialized the same as target > user. Added -U parameter to override the source user. > * Mailbox names are no longer limited to 16 hierarchy levels. We'll > check another way to make sure mailbox names can't grow larger than > 4096 bytes. > > + Added a concept of "alternative usernames" by returning user_* extra > field(s) in passdb. doveadm proxy list shows these alt usernames in > "doveadm proxy list" output. "doveadm director&proxy kick" adds > -f parameter. The alt usernames don't have to be > unique, so this allows creation of user groups and kicking them in > one command. > + auth: passdb/userdb dict allows now %variables in key settings. > + auth: If passdb returns noauthenticate=yes extra field, assume that > it only set extra fields and authentication wasn't actually performed.
> + auth: passdb static now supports password={scheme} prefix. > + imapc: Added imapc_max_line_length to limit maximum memory usage. > + imap, pop3: Added rawlog_dir setting to store IMAP/POP3 traffic logs. > This replaces at least partially the rawlog plugin. > + dsync: Added dsync_features=empty-header-workaround setting. This > makes incremental dsyncs work better for servers that randomly return > empty headers for mails. When an empty header is seen for an existing > mail, dsync assumes that it matches the local mail. > + doveadm sync/backup: Added -I parameter to skip too > large mails. > + doveadm sync/backup: Fixed -t parameter and added -e for "end date". > + doveadm mailbox metadata: Added -s parameter to allow accessing > server metadata by using empty mailbox name. > > - master process's listener socket was leaked to all child processes. > This might have allowed untrusted processes to capture and prevent > "doveadm service stop" comands from working. > - auth: userdb fields weren't passed to auth-workers, so %{userdb:*} > from previous userdbs didn't work there. > - auth: Each userdb lookup from cache reset its TTL. > - auth: Fixed auth_bind=yes + sasl_bind=yes to work together > - auth: Blocking userdb lookups reset extra fields set by previous > userdbs. > - auth: Cache keys didn't include %{passdb:*} and %{userdb:*} > - auth-policy: Fixed crash due to using already-freed memory if policy > lookup takes longer than auth request exists. > - lib-auth: Unescape passdb/userdb extra fields. Mainly affected > returning extra fields with LFs or TABs. > - lmtp_user_concurrency_limit>0 setting was logging unnecessary > anvil errors. > - lmtp_user_concurrency_limit is now checked before quota check with > lmtp_rcpt_check_quota=yes to avoid unnecessary quota work. > - lmtp: %{userdb:*} variables didn't work in mail_log_prefix > - autoexpunge settings for mailboxes with wildcards didn't work when > namespace prefix was non-empty. 
> - Fixed writing >2GB to iostream-temp files (used by fs-compress, > fs-metawrap, doveadm-http) > - director: Ignore duplicates in director_servers setting. > - zlib, IMAP BINARY: Fixed internal caching when accessing multiple > newly created mails. They all had UID=0 and the next mail could have > wrongly used the previously cached mail. > - doveadm stats reset wasn't reseting all the stats. > - auth_stats=yes: Don't update num_logins, since it doubles them when > using with mail stats. > - quota count: Fixed deadlocks when updating vsize header. > - dict-quota: Fixed crashes happening due to memory corruption. > - dict proxy: Fixed various timeout-related bugs. > - doveadm proxying: Fixed -A and -u wildcard handling. > - doveadm proxying: Fixed hangs and bugs related to printing. > - imap: Fixed wrongly triggering assert-crash in > client_check_command_hangs. > - imap proxy: Don't send ID command pipelined with nopipelining=yes > - imap-hibernate: Don't execute quota_over_script or last_login after > un-hibernation. > - imap-hibernate: Don't un-hibernate if client sends DONE+IDLE in one > IP packet. > - imap-hibernate: Fixed various failures when un-hibernating. > - fts: fts_autoindex=yes was broken in 2.2.25 unless > fts_autoindex_exclude settings existed. > - fts-solr: Fixed searching multiple mailboxes (patch by x16a0) > - doveadm fetch body.snippet wasn't working in 2.2.25. Also fixed a > crash with certain emails. > - pop3-migration + dbox: Various fixes related to POP3 UIDL > optimization in 2.2.25. > - pop3-migration: Fixed "truncated email header" workaround. From larryrtx at gmail.com Thu Oct 20 20:28:54 2016 From: larryrtx at gmail.com (Larry Rosenman) Date: Thu, 20 Oct 2016 15:28:54 -0500 Subject: v2.2.26 release candidate released In-Reply-To: <3ed8aa4e-dfd0-baa1-87a3-98c617add21c@whitehorsetc.com> References: <3ed8aa4e-dfd0-baa1-87a3-98c617add21c@whitehorsetc.com> Message-ID: hacked on the FreeBSD port, and it works there as well. 
(FreeBSD 10.3-STABLE) On Thu, Oct 20, 2016 at 3:19 PM, Eric Broch wrote: > Compiled on CentOS 6 and CentOS 7 successfully. > > > > On 10/19/2016 3:01 PM, Timo Sirainen wrote: > >> http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz >> http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz.sig >> >> There are quite a lot of changes since v2.2.25. Please try out this RC so >> we can get a good and stable v2.2.26 out. >> >> * master: Removed hardcoded 511 backlog limit for listen(). The >> kernel >> should limit this as needed. >> * doveadm import: Source user is now initialized the same as >> target >> user. Added -U parameter to override the source user. >> * Mailbox names are no longer limited to 16 hierarchy levels. >> We'll >> check another way to make sure mailbox names can't grow larger >> than >> 4096 bytes. >> >> + Added a concept of "alternative usernames" by returning user_* >> extra >> field(s) in passdb. doveadm proxy list shows these alt >> usernames in >> "doveadm proxy list" output. "doveadm director&proxy kick" adds >> -f parameter. The alt usernames don't have to be >> unique, so this allows creation of user groups and kicking them >> in >> one command. >> + auth: passdb/userdb dict allows now %variables in key settings. >> + auth: If passdb returns noauthenticate=yes extra field, assume >> that >> it only set extra fields and authentication wasn't actually >> performed. >> + auth: passdb static now supports password={scheme} prefix. >> + imapc: Added imapc_max_line_length to limit maximum memory >> usage. >> + imap, pop3: Added rawlog_dir setting to store IMAP/POP3 traffic >> logs. >> This replaces at least partially the rawlog plugin. >> + dsync: Added dsync_features=empty-header-workaround setting. >> This >> makes incremental dsyncs work better for servers that randomly >> return >> empty headers for mails. When an empty header is seen for an >> existing >> mail, dsync assumes that it matches the local mail. 
>> + doveadm sync/backup: Added -I parameter to skip too >> large mails. >> + doveadm sync/backup: Fixed -t parameter and added -e for "end >> date". >> + doveadm mailbox metadata: Added -s parameter to allow accessing >> server metadata by using empty mailbox name. >> >> - master process's listener socket was leaked to all child >> processes. >> This might have allowed untrusted processes to capture and >> prevent >> "doveadm service stop" comands from working. >> - auth: userdb fields weren't passed to auth-workers, so >> %{userdb:*} >> from previous userdbs didn't work there. >> - auth: Each userdb lookup from cache reset its TTL. >> - auth: Fixed auth_bind=yes + sasl_bind=yes to work together >> - auth: Blocking userdb lookups reset extra fields set by previous >> userdbs. >> - auth: Cache keys didn't include %{passdb:*} and %{userdb:*} >> - auth-policy: Fixed crash due to using already-freed memory if >> policy >> lookup takes longer than auth request exists. >> - lib-auth: Unescape passdb/userdb extra fields. Mainly affected >> returning extra fields with LFs or TABs. >> - lmtp_user_concurrency_limit>0 setting was logging unnecessary >> anvil errors. >> - lmtp_user_concurrency_limit is now checked before quota check >> with >> lmtp_rcpt_check_quota=yes to avoid unnecessary quota work. >> - lmtp: %{userdb:*} variables didn't work in mail_log_prefix >> - autoexpunge settings for mailboxes with wildcards didn't work >> when >> namespace prefix was non-empty. >> - Fixed writing >2GB to iostream-temp files (used by fs-compress, >> fs-metawrap, doveadm-http) >> - director: Ignore duplicates in director_servers setting. >> - zlib, IMAP BINARY: Fixed internal caching when accessing >> multiple >> newly created mails. They all had UID=0 and the next mail could >> have >> wrongly used the previously cached mail. >> - doveadm stats reset wasn't reseting all the stats. >> - auth_stats=yes: Don't update num_logins, since it doubles them >> when >> using with mail stats. 
>> - quota count: Fixed deadlocks when updating vsize header. >> - dict-quota: Fixed crashes happening due to memory corruption. >> - dict proxy: Fixed various timeout-related bugs. >> - doveadm proxying: Fixed -A and -u wildcard handling. >> - doveadm proxying: Fixed hangs and bugs related to printing. >> - imap: Fixed wrongly triggering assert-crash in >> client_check_command_hangs. >> - imap proxy: Don't send ID command pipelined with >> nopipelining=yes >> - imap-hibernate: Don't execute quota_over_script or last_login >> after >> un-hibernation. >> - imap-hibernate: Don't un-hibernate if client sends DONE+IDLE in >> one >> IP packet. >> - imap-hibernate: Fixed various failures when un-hibernating. >> - fts: fts_autoindex=yes was broken in 2.2.25 unless >> fts_autoindex_exclude settings existed. >> - fts-solr: Fixed searching multiple mailboxes (patch by x16a0) >> - doveadm fetch body.snippet wasn't working in 2.2.25. Also fixed >> a >> crash with certain emails. >> - pop3-migration + dbox: Various fixes related to POP3 UIDL >> optimization in 2.2.25. >> - pop3-migration: Fixed "truncated email header" workaround. 
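For reference, the new rawlog_dir setting announced above can be enabled per protocol. A minimal sketch follows; the path and the %u expansion here are illustrative assumptions, not taken from the release notes, and should be checked against the documentation:

```
protocol imap {
  # Write per-session IMAP traffic logs (replaces most uses of the
  # rawlog plugin). The directory must exist and be writable by the
  # mail user; "/tmp/rawlog/%u" is only an example path.
  rawlog_dir = /tmp/rawlog/%u
}
```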
>> > -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 From stephan at rename-it.nl Thu Oct 20 23:07:46 2016 From: stephan at rename-it.nl (Stephan Bosch) Date: Fri, 21 Oct 2016 01:07:46 +0200 Subject: sieve sending vacation message from vmail@ns1.domain.tld In-Reply-To: <9b47cb74-0aa7-4851-11f0-5a367341a63b@nbmlaw.co.uk> References: <71b362e8-3a69-076d-6376-2f3bbd39d0eb@nbmlaw.co.uk> <94941225-09d0-1440-1733-3884cc6dcd67@rename-it.nl> <7cdadba3-fd03-7d8c-1235-b428018a081c@nbmlaw.co.uk> <55712b3a-4812-f0a6-c9f9-59efcdac79f7@rename-it.nl> <8260ce16-bc94-e3a9-13d1-f1204e6ae525@rename-it.nl> <344d3d36-b905-5a90-e0ea-17d556076838@nbmlaw.co.uk> <9b47cb74-0aa7-4851-11f0-5a367341a63b@nbmlaw.co.uk> Message-ID: <4aa89a3c-937f-a1e6-3871-1df196ac7af2@rename-it.nl> On 10/20/2016 at 7:38 PM, Matthew Broadhead wrote: > do i need to provide more information? > It still doesn't make sense to me. I do notice that the version you're using is ancient (dated 26-09-2013), which may well be the problem. Do you have the ability to upgrade? Regards, Stephan. 
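For anyone hitting the same wrong-envelope-sender problem with vacation replies, two approaches are worth trying: the Pigeonhole setting sieve_vacation_send_from_recipient = yes (which appears later in this thread's configuration dump), or an explicit :from tag in the script itself, which the Sieve vacation extension (RFC 5230) supports. A minimal sketch, where the address is a placeholder:

```
require ["vacation"];
# ":from" overrides the default sender of the auto-reply;
# "user@domain.tld" is a placeholder address.
vacation :days 1
         :subject "Out of office"
         :from "user@domain.tld"
         "I am currently out of the office.";
```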
> On 19/10/2016 14:49, Matthew Broadhead wrote: >> /var/log/maillog showed this >> Oct 19 13:25:41 ns1 postfix/smtpd[1298]: 7599A2C19C6: >> client=unknown[127.0.0.1] >> Oct 19 13:25:41 ns1 postfix/cleanup[1085]: 7599A2C19C6: >> message-id= >> Oct 19 13:25:41 ns1 postfix/qmgr[1059]: 7599A2C19C6: >> from=, size=3190, nrcpt=1 (queue active) >> Oct 19 13:25:41 ns1 amavis[32367]: (32367-17) Passed CLEAN >> {RelayedInternal}, ORIGINATING LOCAL [80.30.255.180]:54566 >> [80.30.255.180] -> >> , Queue-ID: BFFA62C1965, Message-ID: >> , mail_id: >> TlJQ9xQhWjQk, Hits: -2.9, size: 2235, queued_as: 7599A2C19C6, >> dkim_new=foo:nbmlaw.co.uk, 531 ms >> Oct 19 13:25:41 ns1 postfix/smtp[1135]: BFFA62C1965: >> to=, relay=127.0.0.1[127.0.0.1]:10026, >> delay=0.76, delays=0.22/0/0/0.53, dsn=2.0.0, status=sent (250 2.0.0 >> from MTA(smtp:[127.0.0.1]:10027): 250 2.0.0 Ok: queued as 7599A2C19C6) >> Oct 19 13:25:41 ns1 postfix/qmgr[1059]: BFFA62C1965: removed >> Oct 19 13:25:41 ns1 postfix/smtpd[1114]: connect from >> ns1.nbmlaw.co.uk[217.174.253.19] >> Oct 19 13:25:41 ns1 postfix/smtpd[1114]: NOQUEUE: filter: RCPT from >> ns1.nbmlaw.co.uk[217.174.253.19]: : Sender >> address triggers FILTER smtp-amavis:[127.0.0.1]:10026; >> from= to= >> proto=SMTP helo= >> Oct 19 13:25:41 ns1 postfix/smtpd[1114]: 8A03F2C1965: >> client=ns1.nbmlaw.co.uk[217.174.253.19] >> Oct 19 13:25:41 ns1 postfix/cleanup[1085]: 8A03F2C1965: >> message-id= >> Oct 19 13:25:41 ns1 opendmarc[2430]: implicit authentication service: >> ns1.nbmlaw.co.uk >> Oct 19 13:25:41 ns1 opendmarc[2430]: 8A03F2C1965: ns1.nbmlaw.co.uk fail >> Oct 19 13:25:41 ns1 postfix/qmgr[1059]: 8A03F2C1965: >> from=, size=1077, nrcpt=1 (queue active) >> Oct 19 13:25:41 ns1 postfix/smtpd[1114]: disconnect from >> ns1.nbmlaw.co.uk[217.174.253.19] >> Oct 19 13:25:41 ns1 sSMTP[1895]: Sent mail for vmail at ns1.nbmlaw.co.uk >> (221 2.0.0 Bye) uid=996 username=vmail outbytes=971 >> Oct 19 13:25:41 ns1 postfix/smtpd[1898]: connect from unknown[127.0.0.1] >> 
Oct 19 13:25:41 ns1 postfix/pipe[1162]: 7599A2C19C6: >> to=, relay=dovecot, delay=0.46, >> delays=0/0/0/0.45, dsn=2.0.0, status=sent (delivered via dovecot >> service) >> Oct 19 13:25:41 ns1 postfix/qmgr[1059]: 7599A2C19C6: removed >> Oct 19 13:25:41 ns1 postfix/smtpd[1898]: E53472C19C6: >> client=unknown[127.0.0.1] >> Oct 19 13:25:41 ns1 postfix/cleanup[1085]: E53472C19C6: >> message-id= >> Oct 19 13:25:41 ns1 postfix/qmgr[1059]: E53472C19C6: >> from=, size=1619, nrcpt=1 (queue active) >> Oct 19 13:25:41 ns1 amavis[1885]: (01885-01) Passed CLEAN >> {RelayedInternal}, ORIGINATING LOCAL [217.174.253.19]:40960 >> [217.174.253.19] -> >> , Queue-ID: 8A03F2C1965, Message-ID: >> , mail_id: >> mOMO97yjVqjM, Hits: -2.211, size: 1301, queued_as: E53472C19C6, 296 ms >> Oct 19 13:25:41 ns1 postfix/smtp[1217]: 8A03F2C1965: >> to=, >> relay=127.0.0.1[127.0.0.1]:10026, delay=0.38, delays=0.08/0/0/0.29, >> dsn=2.0.0, status=sent (250 2.0.0 from MTA(smtp:[127.0.0.1]:10027): >> 250 2.0.0 Ok: queued as E53472C19C6) >> Oct 19 13:25:41 ns1 postfix/qmgr[1059]: 8A03F2C1965: removed >> Oct 19 13:25:42 ns1 postfix/pipe[1303]: E53472C19C6: >> to=, relay=dovecot, delay=0.14, >> delays=0/0/0/0.14, dsn=2.0.0, status=sent (delivered via dovecot >> service) >> Oct 19 13:25:42 ns1 postfix/qmgr[1059]: E53472C19C6: removed >> >> On 19/10/2016 13:54, Stephan Bosch wrote: >>> >>> >>> Op 19-10-2016 om 13:47 schreef Matthew Broadhead: >>>> i am not 100% sure how to give you the information you require. >>>> >>>> my current setup in /etc/postfix/master.cf is >>>> flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d >>>> ${recipient} >>>> so recipient would presumably be user at domain.tld? or do you want >>>> the real email address of one of our users? is there some way i >>>> can output this information directly e.g. in logs? >>> >>> I am no Postfix expert. I just need to know which values are being >>> passed to dovecot-lda with what options. 
I'd assume Postfix allows >>> logging the command line or at least the values of these variables. >>> >>>> the incoming email message could be anything? again i can run an >>>> example directly if you can advise the best way to do this >>> >>> As long as the problem occurs with this message. >>> >>> BTW, it would also be helpful to have the Dovecot logs from this >>> delivery, with mail_debug configured to "yes". >>> >>> Regards, >>> >>> Stephan. >>> >>>> >>>> On 19/10/2016 12:54, Stephan Bosch wrote: >>>>> Also, please provide an example scenario; i.e., for one >>>>> problematic delivery provide: >>>>> >>>>> - The values of the variables substituted in the dovecot-lda >>>>> command line; i.e., provide that command line. >>>>> - The incoming e-mail message. >>>>> >>>>> Regards, >>>>> >>>>> Stephan. >>>>> >>>>> Op 19-10-2016 om 12:43 schreef Matthew Broadhead: >>>>>> dovecot is configured by sentora control panel to a certain >>>>>> extent. if you want those configs i can send them as well >>>>>> >>>>>> dovecot -n >>>>>> >>>>>> debug_log_path = /var/log/dovecot-debug.log >>>>>> dict { >>>>>> quotadict = >>>>>> mysql:/etc/sentora/configs/dovecot2/dovecot-dict-quota.conf >>>>>> } >>>>>> disable_plaintext_auth = no >>>>>> first_valid_gid = 12 >>>>>> first_valid_uid = 996 >>>>>> info_log_path = /var/log/dovecot-info.log >>>>>> lda_mailbox_autocreate = yes >>>>>> lda_mailbox_autosubscribe = yes >>>>>> listen = * >>>>>> lmtp_save_to_detail_mailbox = yes >>>>>> log_path = /var/log/dovecot.log >>>>>> log_timestamp = %Y-%m-%d %H:%M:%S >>>>>> mail_fsync = never >>>>>> mail_location = maildir:/var/sentora/vmail/%d/%n >>>>>> managesieve_notify_capability = mailto >>>>>> managesieve_sieve_capability = fileinto reject envelope >>>>>> encoded-character vacation subaddress comparator-i;ascii-numeric >>>>>> relational regex imap4flags copy include variables body enotify >>>>>> environment mailbox date ihave >>>>>> passdb { >>>>>> args = 
/etc/sentora/configs/dovecot2/dovecot-mysql.conf >>>>>> driver = sql >>>>>> } >>>>>> plugin { >>>>>> acl = vfile:/etc/dovecot/acls >>>>>> quota = maildir:User quota >>>>>> sieve = ~/dovecot.sieve >>>>>> sieve_dir = ~/sieve >>>>>> sieve_global_dir = /var/sentora/sieve/ >>>>>> sieve_global_path = /var/sentora/sieve/globalfilter.sieve >>>>>> sieve_max_script_size = 1M >>>>>> sieve_vacation_send_from_recipient = yes >>>>>> trash = /etc/sentora/configs/dovecot2/dovecot-trash.conf >>>>>> } >>>>>> protocols = imap pop3 lmtp sieve >>>>>> service auth { >>>>>> unix_listener /var/spool/postfix/private/auth { >>>>>> group = postfix >>>>>> mode = 0666 >>>>>> user = postfix >>>>>> } >>>>>> unix_listener auth-userdb { >>>>>> group = mail >>>>>> mode = 0666 >>>>>> user = vmail >>>>>> } >>>>>> } >>>>>> service dict { >>>>>> unix_listener dict { >>>>>> group = mail >>>>>> mode = 0666 >>>>>> user = vmail >>>>>> } >>>>>> } >>>>>> service imap-login { >>>>>> inet_listener imap { >>>>>> port = 143 >>>>>> } >>>>>> process_limit = 500 >>>>>> process_min_avail = 2 >>>>>> } >>>>>> service imap { >>>>>> vsz_limit = 256 M >>>>>> } >>>>>> service managesieve-login { >>>>>> inet_listener sieve { >>>>>> port = 4190 >>>>>> } >>>>>> process_min_avail = 0 >>>>>> service_count = 1 >>>>>> vsz_limit = 64 M >>>>>> } >>>>>> service pop3-login { >>>>>> inet_listener pop3 { >>>>>> port = 110 >>>>>> } >>>>>> } >>>>>> ssl_cert = >>>>> ssl_key = >>>>> ssl_protocols = !SSLv2 !SSLv3 >>>>>> userdb { >>>>>> driver = prefetch >>>>>> } >>>>>> userdb { >>>>>> args = /etc/sentora/configs/dovecot2/dovecot-mysql.conf >>>>>> driver = sql >>>>>> } >>>>>> protocol lda { >>>>>> mail_fsync = optimized >>>>>> mail_plugins = quota sieve >>>>>> postmaster_address = postmaster at ns1.nbmlaw.co.uk >>>>>> } >>>>>> protocol imap { >>>>>> imap_client_workarounds = delay-newmail >>>>>> mail_fsync = optimized >>>>>> mail_max_userip_connections = 60 >>>>>> mail_plugins = quota imap_quota trash >>>>>> } >>>>>> protocol lmtp { >>>>>> 
mail_plugins = quota sieve >>>>>> } >>>>>> protocol pop3 { >>>>>> mail_plugins = quota >>>>>> pop3_client_workarounds = outlook-no-nuls oe-ns-eoh >>>>>> pop3_uidl_format = %08Xu%08Xv >>>>>> } >>>>>> protocol sieve { >>>>>> managesieve_implementation_string = Dovecot Pigeonhole >>>>>> managesieve_max_compile_errors = 5 >>>>>> managesieve_max_line_length = 65536 >>>>>> } >>>>>> >>>>>> managesieve.sieve >>>>>> >>>>>> require ["fileinto","vacation"]; >>>>>> # rule:[vacation] >>>>>> if true >>>>>> { >>>>>> vacation :days 1 :subject "Vacation subject" text: >>>>>> i am currently out of the office >>>>>> >>>>>> trying some line breaks >>>>>> >>>>>> ...zzz >>>>>> . >>>>>> ; >>>>>> } >>>>>> >>>>>> On 19/10/2016 12:29, Stephan Bosch wrote: >>>>>>> Could you send your configuration (output from `dovecot -n`)? >>>>>>> >>>>>>> Also, please provide an example scenario; i.e., for one >>>>>>> problematic delivery provide: >>>>>>> >>>>>>> - The values of the variables substituted below. >>>>>>> >>>>>>> - The incoming e-mail message. >>>>>>> >>>>>>> - The Sieve script (or at least that vacation command). >>>>>>> >>>>>>> Regards, >>>>>>> >>>>>>> >>>>>>> Stephan. >>>>>>> >>>>>>> Op 19-10-2016 om 11:42 schreef Matthew Broadhead: >>>>>>>> hi, does anyone have any ideas about this issue? 
i have not >>>>>>>> had any response yet >>>>>>>> >>>>>>>> i tried changing /etc/postfix/master.cf line: >>>>>>>> dovecot unix - n n - - pipe >>>>>>>> flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d >>>>>>>> ${recipient} >>>>>>>> >>>>>>>> to >>>>>>>> flags=DRhu user=vmail:mail >>>>>>>> argv=/usr/libexec/dovecot/dovecot-lda -f ${sender} -d >>>>>>>> ${user}@${nexthop} -a ${original_recipient} >>>>>>>> >>>>>>>> and >>>>>>>> -d ${user}@${domain} -a {recipient} -f ${sender} -m ${extension} >>>>>>>> >>>>>>>> but it didn't work >>>>>>>> >>>>>>>> On 12/10/2016 13:57, Matthew Broadhead wrote: >>>>>>>>> I have a server running >>>>>>>>> centos-release-7-2.1511.el7.centos.2.10.x86_64 with dovecot >>>>>>>>> version 2.2.10. I am also using roundcube for webmail. when a >>>>>>>>> vacation filter (reply with message) is created in roundcube >>>>>>>>> it adds a rule to managesieve.sieve in the user's mailbox. >>>>>>>>> everything works fine except the reply comes from >>>>>>>>> vmail at ns1.domain.tld instead of user at domain.tld. >>>>>>>>> ns1.domain.tld is the fully qualified name of the server. >>>>>>>>> >>>>>>>>> it used to work fine on my old CentOS 6 server so I am not >>>>>>>>> sure what has changed. Can anyone point me in the direction >>>>>>>>> of where I can configure this behaviour? >>>>>> >>>>> >>> From dovecot-list at mohtex.net Fri Oct 21 03:27:08 2016 From: dovecot-list at mohtex.net (Tamsy) Date: Fri, 21 Oct 2016 10:27:08 +0700 Subject: v2.2.26 release candidate released In-Reply-To: References: Message-ID: <7910a495-dfe0-aba1-9282-40070013a5bb@mohtex.net> Timo Sirainen wrote on 20.10.2016 04:01: > http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz > http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz.sig > > There are quite a lot of changes since v2.2.25. Please try out this RC so we can get a good and stable v2.2.26 out. > > * master: Removed hardcoded 511 backlog limit for listen(). The kernel > should limit this as needed. 
> [...]
> - imap-hibernate: Fixed various failures when un-hibernating. > - fts: fts_autoindex=yes was broken in 2.2.25 unless > fts_autoindex_exclude settings existed. > - fts-solr: Fixed searching multiple mailboxes (patch by x16a0) > - doveadm fetch body.snippet wasn't working in 2.2.25. Also fixed a > crash with certain emails. > - pop3-migration + dbox: Various fixes related to POP3 UIDL > optimization in 2.2.25. > - pop3-migration: Fixed "truncated email header" workaround. Since v2.2.25 up to v2.2.26.rc1 on Ubuntu 16.04.1 LTS Dovecot is compiling successfully but "make check" is throwing out the following: make[2]: Leaving directory '/usr/local/src/dovecot-2.2.26.rc1/src/lib-charset' Making check in lib-ssl-iostream make[2]: Entering directory '/usr/local/src/dovecot-2.2.26.rc1/src/lib-ssl-iostream' make[2]: Nothing to be done for 'check'. make[2]: Leaving directory '/usr/local/src/dovecot-2.2.26.rc1/src/lib-ssl-iostream' Making check in lib-dcrypt make[2]: Entering directory '/usr/local/src/dovecot-2.2.26.rc1/src/lib-dcrypt' for bin in test-crypto test-stream; do \ if ! /bin/sh ../../run-test.sh ../.. ./$bin; then exit 1; fi; \ done test_cipher_test_vectors ............................................. : ok test_cipher_aead_test_vectors ........................................ : ok test_hmac_test_vectors ............................................... : ok test_load_v1_keys .................................................... : ok test_load_v1_key ..................................................... : ok test_load_v1_public_key .............................................. : ok vex: the `impossible' happened: isZeroU vex storage: T total 586404328 bytes allocated vex storage: P total 640 bytes allocated valgrind: the 'impossible' happened: LibVEX called failure_exit(). host stacktrace: ==20179== at 0x38083F48: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) ==20179== by 0x38084064: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) ==20179== by 0x380842A1: ??? 
(in /usr/lib/valgrind/memcheck-amd64-linux) ==20179== by 0x380842CA: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) ==20179== by 0x3809F682: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) ==20179== by 0x38148008: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) ==20179== by 0x3815514D: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) ==20179== by 0x38159272: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) ==20179== by 0x38159EA6: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) ==20179== by 0x3815BD68: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) ==20179== by 0x3815CDB6: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) ==20179== by 0x38145DEC: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) ==20179== by 0x380A1C0B: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) ==20179== by 0x380D296B: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) ==20179== by 0x380D45CF: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) ==20179== by 0x380E3946: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) sched status: running_tid=1 Thread 1: status = VgTs_Runnable (lwpid 20179) ==20179== at 0x5DE7E00: ??? (in /lib/x86_64-linux-gnu/libcrypto.so.1.0.0) ==20179== by 0x5DC70BF: EC_POINT_mul (in /lib/x86_64-linux-gnu/libcrypto.so.1.0.0) ==20179== by 0x5DC5F06: EC_POINT_new (in /lib/x86_64-linux-gnu/libcrypto.so.1.0.0) ==20179== by 0x5823876: dcrypt_openssl_load_private_key_dovecot_v2 (dcrypt-openssl.c:1196) ==20179== by 0x5823876: dcrypt_openssl_load_private_key_dovecot (dcrypt-openssl.c:1244) ==20179== by 0x5823876: dcrypt_openssl_load_private_key (dcrypt-openssl.c:1587) ==20179== by 0x40FC08: test_load_v2_key (test-crypto.c:393) ==20179== by 0x41120C: test_run_funcs (test-common.c:236) ==20179== by 0x411B60: test_run (test-common.c:306) ==20179== by 0x40A99E: main (test-crypto.c:779) Note: see also the FAQ in the source distribution. It contains workarounds to several common problems. 
In particular, if Valgrind aborted or crashed after identifying problems in your program, there's a good chance that fixing those problems will prevent Valgrind aborting or crashing, especially if it happened in m_mallocfree.c. If that doesn't help, please report this bug to: www.valgrind.org In the bug report, send all the above text, the valgrind version, and what OS and version you are using. Thanks. Failed to run: ./test-crypto Makefile:1019: recipe for target 'check-test' failed make[2]: *** [check-test] Error 1 make[2]: Leaving directory '/usr/local/src/dovecot-2.2.26.rc1/src/lib-dcrypt' Makefile:494: recipe for target 'check-recursive' failed make[1]: *** [check-recursive] Error 1 make[1]: Leaving directory '/usr/local/src/dovecot-2.2.26.rc1/src' Makefile:620: recipe for target 'check-recursive' failed make: *** [check-recursive] Error 1 From thorsten.hater at gmail.com Fri Oct 21 04:25:19 2016 From: thorsten.hater at gmail.com (Thorsten Hater) Date: Fri, 21 Oct 2016 06:25:19 +0200 Subject: Extending IMAP daemon to comply with legacy Proxy Message-ID: Hello, I am currently investigating if and how to migrate a legacy setup based on a modified Courier installation to Dovecot. So far I am really liking Dovecot; however, I will have to work with an existing IMAP proxy for the time being. In particular, the proxy authenticates the user, looks up the location of the mail directory, and transfers that data to the actual server via an extension of the IMAP protocol. This extension will have to stay, and the information transferred is not easily recoverable. Further database lookups on the real server would be possible to solve the issue, but I would prefer to keep the design intact at least for now. Is it possible to handle IMAP commands after LOGIN/AUTHENTICATE but before opening any mailboxes? Basically these commands aim to replace the userdb lookup. Any pointers on how to implement this, perhaps as a plugin? 
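The flow described here, a custom command accepted after authentication but before any mailbox is opened, standing in for the userdb lookup, can be modeled outside Dovecot. The sketch below is a toy dispatcher in Python; the command name XPROXYINFO and all other names are hypothetical, and a real implementation would be a Dovecot imap plugin registering a command handler (check the Dovecot source for the exact plugin API).

```python
# Toy model of an IMAP session that accepts a proxy-supplied
# "XPROXYINFO" command after LOGIN but before any mailbox is opened.
# All names here are hypothetical; this only illustrates the flow.

class Session:
    def __init__(self):
        self.authenticated = False
        self.mail_location = None   # normally filled by a userdb lookup

    def handle(self, line):
        # Split "tag COMMAND args" as a real IMAP parser would.
        tag, _, rest = line.partition(" ")
        cmd, _, args = rest.partition(" ")
        cmd = cmd.upper()
        if cmd == "LOGIN":
            self.authenticated = True
            return f"{tag} OK LOGIN completed"
        if cmd == "XPROXYINFO":
            # The proxy tells the backend where the user's mail lives,
            # replacing the userdb lookup on this server.
            if not self.authenticated:
                return f"{tag} NO Login first"
            if self.mail_location is not None:
                return f"{tag} NO XPROXYINFO already given"
            self.mail_location = args.strip()
            return f"{tag} OK XPROXYINFO accepted"
        if cmd == "SELECT":
            if self.mail_location is None:
                return f"{tag} NO No mail location set"
            return f"{tag} OK [READ-WRITE] SELECT completed"
        return f"{tag} BAD Unknown command"
```

In this model the proxy would send the extra command immediately after LOGIN, before the client's first SELECT reaches the backend.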
Best regards, Thorsten From aki.tuomi at dovecot.fi Fri Oct 21 04:34:56 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Fri, 21 Oct 2016 07:34:56 +0300 (EEST) Subject: v2.2.26 release candidate released In-Reply-To: <7910a495-dfe0-aba1-9282-40070013a5bb@mohtex.net> References: <7910a495-dfe0-aba1-9282-40070013a5bb@mohtex.net> Message-ID: <362930011.1420.1477024497359@appsuite-dev.open-xchange.com> > On October 21, 2016 at 6:27 AM Tamsy wrote: > > > Timo Sirainen wrote on 20.10.2016 04:01: > > http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz > > http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz.sig > > > > There are quite a lot of changes since v2.2.25. Please try out this RC so we can get a good and stable v2.2.26 out. > > > > * master: Removed hardcoded 511 backlog limit for listen(). The kernel > > should limit this as needed. > > * doveadm import: Source user is now initialized the same as target > > user. Added -U parameter to override the source user. > > * Mailbox names are no longer limited to 16 hierarchy levels. We'll > > check another way to make sure mailbox names can't grow larger than > > 4096 bytes. > > > > + Added a concept of "alternative usernames" by returning user_* extra > > field(s) in passdb. doveadm proxy list shows these alt usernames in > > "doveadm proxy list" output. "doveadm director&proxy kick" adds > > -f parameter. The alt usernames don't have to be > > unique, so this allows creation of user groups and kicking them in > > one command. > > + auth: passdb/userdb dict allows now %variables in key settings. > > + auth: If passdb returns noauthenticate=yes extra field, assume that > > it only set extra fields and authentication wasn't actually performed. > > + auth: passdb static now supports password={scheme} prefix. > > + imapc: Added imapc_max_line_length to limit maximum memory usage. > > + imap, pop3: Added rawlog_dir setting to store IMAP/POP3 traffic logs. 
> [...]
> > Failed to run: ./test-crypto
> Makefile:1019: recipe for target 'check-test' failed
> make[2]: *** [check-test] Error 1
> make[2]: Leaving directory '/usr/local/src/dovecot-2.2.26.rc1/src/lib-dcrypt'
> Makefile:494: recipe for target 'check-recursive' failed
> make[1]: *** [check-recursive] Error 1
> make[1]: Leaving directory '/usr/local/src/dovecot-2.2.26.rc1/src'
> Makefile:620: recipe for target 'check-recursive' failed
> make: *** [check-recursive] Error 1

This is an issue with either valgrind or openssl. See
https://bugs.launchpad.net/ubuntu/+source/valgrind/+bug/1574437

Aki

From dovecot-list at mohtex.net  Fri Oct 21 05:22:17 2016
From: dovecot-list at mohtex.net (Tamsy)
Date: Fri, 21 Oct 2016 12:22:17 +0700
Subject: v2.2.26 release candidate released
In-Reply-To: <362930011.1420.1477024497359@appsuite-dev.open-xchange.com>
References: <7910a495-dfe0-aba1-9282-40070013a5bb@mohtex.net>
 <362930011.1420.1477024497359@appsuite-dev.open-xchange.com>
Message-ID: 

Aki Tuomi wrote on 21.10.2016 11:34:
>> On October 21, 2016 at 6:27 AM Tamsy wrote:
>>
>> Timo Sirainen wrote on 20.10.2016 04:01:
>>> http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz
>>> http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz.sig
>>>
>>> There are quite a lot of changes since v2.2.25. Please try out this RC
>>> so we can get a good and stable v2.2.26 out.
>>>
>>> * master: Removed hardcoded 511 backlog limit for listen(). The kernel
>>>   should limit this as needed.
>>> * doveadm import: Source user is now initialized the same as target
>>>   user. Added -U parameter to override the source user.
>>> * Mailbox names are no longer limited to 16 hierarchy levels. We'll
>>>   check another way to make sure mailbox names can't grow larger than
>>>   4096 bytes.
>>>
>>> + Added a concept of "alternative usernames" by returning user_* extra
>>>   field(s) in passdb. doveadm proxy list shows these alt usernames in
>>>   "doveadm proxy list" output. "doveadm director&proxy kick" adds
>>>   -f parameter. The alt usernames don't have to be unique, so this
>>>   allows creation of user groups and kicking them in one command.
>>> + auth: passdb/userdb dict now allows %variables in key settings.
>>> + auth: If passdb returns noauthenticate=yes extra field, assume that
>>>   it only set extra fields and authentication wasn't actually performed.
>>> + auth: passdb static now supports password={scheme} prefix.
>>> + imapc: Added imapc_max_line_length to limit maximum memory usage.
>>> + imap, pop3: Added rawlog_dir setting to store IMAP/POP3 traffic logs.
>>>   This replaces, at least partially, the rawlog plugin.
>>> + dsync: Added dsync_features=empty-header-workaround setting. This
>>>   makes incremental dsyncs work better for servers that randomly return
>>>   empty headers for mails. When an empty header is seen for an existing
>>>   mail, dsync assumes that it matches the local mail.
>>> + doveadm sync/backup: Added -I parameter to skip too large mails.
>>> + doveadm sync/backup: Fixed -t parameter and added -e for "end date".
>>> + doveadm mailbox metadata: Added -s parameter to allow accessing
>>>   server metadata by using an empty mailbox name.
>>>
>>> - master process's listener socket was leaked to all child processes.
>>>   This might have allowed untrusted processes to capture and prevent
>>>   "doveadm service stop" commands from working.
>>> - auth: userdb fields weren't passed to auth-workers, so %{userdb:*}
>>>   from previous userdbs didn't work there.
>>> - auth: Each userdb lookup from cache reset its TTL.
>>> - auth: Fixed auth_bind=yes + sasl_bind=yes to work together.
>>> - auth: Blocking userdb lookups reset extra fields set by previous
>>>   userdbs.
>>> - auth: Cache keys didn't include %{passdb:*} and %{userdb:*}.
>>> - auth-policy: Fixed crash due to using already-freed memory if the
>>>   policy lookup takes longer than the auth request exists.
>>> - lib-auth: Unescape passdb/userdb extra fields. Mainly affected
>>>   returning extra fields with LFs or TABs.
>>> - lmtp_user_concurrency_limit>0 setting was logging unnecessary
>>>   anvil errors.
>>> - lmtp_user_concurrency_limit is now checked before the quota check with
>>>   lmtp_rcpt_check_quota=yes to avoid unnecessary quota work.
>>> - lmtp: %{userdb:*} variables didn't work in mail_log_prefix.
>>> - autoexpunge settings for mailboxes with wildcards didn't work when
>>>   the namespace prefix was non-empty.
>>> - Fixed writing >2GB to iostream-temp files (used by fs-compress,
>>>   fs-metawrap, doveadm-http).
>>> - director: Ignore duplicates in director_servers setting.
>>> - zlib, IMAP BINARY: Fixed internal caching when accessing multiple
>>>   newly created mails. They all had UID=0 and the next mail could have
>>>   wrongly used the previously cached mail.
>>> - doveadm stats reset wasn't resetting all the stats.
>>> - auth_stats=yes: Don't update num_logins, since it doubles them when
>>>   used with mail stats.
>>> - quota count: Fixed deadlocks when updating vsize header.
>>> - dict-quota: Fixed crashes happening due to memory corruption.
>>> - dict proxy: Fixed various timeout-related bugs.
>>> - doveadm proxying: Fixed -A and -u wildcard handling.
>>> - doveadm proxying: Fixed hangs and bugs related to printing.
>>> - imap: Fixed wrongly triggering assert-crash in
>>>   client_check_command_hangs.
>>> - imap proxy: Don't send ID command pipelined with nopipelining=yes.
>>> - imap-hibernate: Don't execute quota_over_script or last_login after
>>>   un-hibernation.
>>> - imap-hibernate: Don't un-hibernate if the client sends DONE+IDLE in one
>>>   IP packet.
>>> - imap-hibernate: Fixed various failures when un-hibernating.
>>> - fts: fts_autoindex=yes was broken in 2.2.25 unless
>>>   fts_autoindex_exclude settings existed.
>>> - fts-solr: Fixed searching multiple mailboxes (patch by x16a0).
>>> - doveadm fetch body.snippet wasn't working in 2.2.25. Also fixed a
>>>   crash with certain emails.
>>> - pop3-migration + dbox: Various fixes related to POP3 UIDL
>>>   optimization in 2.2.25.
>>> - pop3-migration: Fixed "truncated email header" workaround.
>> Since v2.2.25 up to v2.2.26.rc1 on Ubuntu 16.04.1 LTS Dovecot is
>> compiling successfully but "make check" is throwing out the following:
>>
>> [...]
>
> This is an issue with either valgrind or openssl. See
> https://bugs.launchpad.net/ubuntu/+source/valgrind/+bug/1574437
>
> Aki

Thank you for the hint on this, Aki.
I can confirm that Dovecot's test suite finishes fine after building and
installing the fixed version of valgrind, following robgolebiowski's
programming blog here:
https://robgolebiowski.wordpress.com/2016/08/19/valgrind-failing-on-ubuntu/

From ebroch at whitehorsetc.com  Fri Oct 21 05:52:09 2016
From: ebroch at whitehorsetc.com (Eric Broch)
Date: Thu, 20 Oct 2016 23:52:09 -0600
Subject: v2.2.26 release candidate released
In-Reply-To: <362930011.1420.1477024497359@appsuite-dev.open-xchange.com>
References: <7910a495-dfe0-aba1-9282-40070013a5bb@mohtex.net>
 <362930011.1420.1477024497359@appsuite-dev.open-xchange.com>
Message-ID: 

On 10/20/2016 10:34 PM, Aki Tuomi wrote:
>> On October 21, 2016 at 6:27 AM Tamsy wrote:
>>
>> Timo Sirainen wrote on 20.10.2016 04:01:
>>> http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz
>>> http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz.sig
>>>
>>> [...]
>> Since v2.2.25 up to v2.2.26.rc1 on Ubuntu 16.04.1 LTS Dovecot is
>> compiling successfully but "make check" is throwing out the following:
>>
>> [...]
>
> This is an issue with either valgrind or openssl. See
> https://bugs.launchpad.net/ubuntu/+source/valgrind/+bug/1574437
>
> Aki

I had an error (below) during make check as well on CentOS 5 (no issues
with 6 and 7) and commented out the following 'sed' commands (Removing
Rpath, from this page: https://fedoraproject.org/wiki/Packaging:Guidelines )
in the spec file, resulting in a successful build:

#sed -i 's|^hardcode_libdir_flag_spec=.*|hardcode_libdir_flag_spec=""|g' libtool
#sed -i 's|^runpath_var=LD_RUN_PATH|runpath_var=DIE_RPATH_DIE|g' libtool

make[3]: Leaving directory `/usr/src/redhat/BUILD/dovecot-2.2.26.rc1/src/lib-storage'
for bin in test-mail-search-args-imap test-mail-search-args-simplify test-mailbox-get; do \
  if ! /bin/sh ../../run-test.sh ../.. ./$bin; then exit 1; fi; \
done
/usr/src/redhat/BUILD/dovecot-2.2.26.rc1/src/lib-storage/.libs/lt-test-mail-search-args-imap:
symbol lookup error: /usr/src/redhat/BUILD/dovecot-2.2.26.rc1/src/lib-storage/.libs/lt-test-mail-search-args-imap:
undefined symbol: message_search_more_get_decoded
==12372== Invalid read of size 8
==12372==    at 0x3EABB0B925: ??? (in /lib64/libc-2.5.so)
==12372==    by 0x3EABB0B79A: ??? (in /lib64/libc-2.5.so)
==12372==    by 0x3EABB0BDF1: ??? (in /lib64/libc-2.5.so)
==12372==    by 0x48024E8: _vgnU_freeres (vg_preloaded.c:62)
==12372==    by 0x3EAB60D33B: _dl_signal_error (in /lib64/ld-2.5.so)
==12372==    by 0x3EAB60D3D3: _dl_signal_cerror (in /lib64/ld-2.5.so)
==12372==    by 0x3EAB609CEF: _dl_lookup_symbol_x (in /lib64/ld-2.5.so)
==12372==    by 0x3EAB60A9D4: _dl_relocate_object (in /lib64/ld-2.5.so)
==12372==    by 0x3EAB603629: dl_main (in /lib64/ld-2.5.so)
==12372==    by 0x3EAB6134EA: _dl_sysdep_start (in /lib64/ld-2.5.so)
==12372==    by 0x3EAB601389: _dl_start (in /lib64/ld-2.5.so)
==12372==    by 0x3EAB600A77: ??? (in /lib64/ld-2.5.so)
==12372==  Address 0x0 is not stack'd, malloc'd or (recently) free'd
==12372==
==12372==
==12372== Process terminating with default action of signal 11 (SIGSEGV)
==12372==  Access not within mapped region at address 0x0
==12372==    at 0x3EABB0B925: ??? (in /lib64/libc-2.5.so)
==12372==    by 0x3EABB0B79A: ??? (in /lib64/libc-2.5.so)
==12372==    by 0x3EABB0BDF1: ??? (in /lib64/libc-2.5.so)
==12372==    by 0x48024E8: _vgnU_freeres (vg_preloaded.c:62)
==12372==    by 0x3EAB60D33B: _dl_signal_error (in /lib64/ld-2.5.so)
==12372==    by 0x3EAB60D3D3: _dl_signal_cerror (in /lib64/ld-2.5.so)
==12372==    by 0x3EAB609CEF: _dl_lookup_symbol_x (in /lib64/ld-2.5.so)
==12372==    by 0x3EAB60A9D4: _dl_relocate_object (in /lib64/ld-2.5.so)
==12372==    by 0x3EAB603629: dl_main (in /lib64/ld-2.5.so)
==12372==    by 0x3EAB6134EA: _dl_sysdep_start (in /lib64/ld-2.5.so)
==12372==    by 0x3EAB601389: _dl_start (in /lib64/ld-2.5.so)
==12372==    by 0x3EAB600A77: ??? (in /lib64/ld-2.5.so)
==12372==  If you believe this happened as a result of a stack
==12372==  overflow in your program's main thread (unlikely but
==12372==  possible), you can try to increase the size of the
==12372==  main thread stack using the --main-stacksize= flag.
==12372==  The main thread stack size used in this run was 10485760.
Failed to run: ./test-mail-search-args-imap make[2]: *** [check-test] Error 1 make[2]: Leaving directory `/usr/src/redhat/BUILD/dovecot-2.2.26.rc1/src/lib-storage' make[1]: *** [check-recursive] Error 1 make[1]: Leaving directory `/usr/src/redhat/BUILD/dovecot-2.2.26.rc1/src' make: *** [check-recursive] Error 1 error: Bad exit status from /var/tmp/rpm-tmp.69652 (%check) RPM build errors: Bad exit status from /var/tmp/rpm-tmp.69652 (%check) From aki.tuomi at dovecot.fi Fri Oct 21 06:41:05 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Fri, 21 Oct 2016 09:41:05 +0300 (EEST) Subject: v2.2.26 release candidate released In-Reply-To: References: <7910a495-dfe0-aba1-9282-40070013a5bb@mohtex.net> <362930011.1420.1477024497359@appsuite-dev.open-xchange.com> Message-ID: <2106955365.1504.1477032065634@appsuite-dev.open-xchange.com> > On October 21, 2016 at 8:52 AM Eric Broch wrote: > > > On 10/20/2016 10:34 PM, Aki Tuomi wrote: > > >> On October 21, 2016 at 6:27 AM Tamsy wrote: > >> > >> > >> Timo Sirainen wrote on 20.10.2016 04:01: > >>> http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz > >>> http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz.sig > >>> > >>> There are quite a lot of changes since v2.2.25. Please try out this RC so we can get a good and stable v2.2.26 out. > >>> > >>> * master: Removed hardcoded 511 backlog limit for listen(). The kernel > >>> should limit this as needed. > >>> * doveadm import: Source user is now initialized the same as target > >>> user. Added -U parameter to override the source user. > >>> * Mailbox names are no longer limited to 16 hierarchy levels. We'll > >>> check another way to make sure mailbox names can't grow larger than > >>> 4096 bytes. > >>> > >>> + Added a concept of "alternative usernames" by returning user_* extra > >>> field(s) in passdb. doveadm proxy list shows these alt usernames in > >>> "doveadm proxy list" output. "doveadm director&proxy kick" adds > >>> -f parameter. 
The alt usernames don't have to be > >>> unique, so this allows creation of user groups and kicking them in > >>> one command. > >>> + auth: passdb/userdb dict allows now %variables in key settings. > >>> + auth: If passdb returns noauthenticate=yes extra field, assume that > >>> it only set extra fields and authentication wasn't actually performed. > >>> + auth: passdb static now supports password={scheme} prefix. > >>> + imapc: Added imapc_max_line_length to limit maximum memory usage. > >>> + imap, pop3: Added rawlog_dir setting to store IMAP/POP3 traffic logs. > >>> This replaces at least partially the rawlog plugin. > >>> + dsync: Added dsync_features=empty-header-workaround setting. This > >>> makes incremental dsyncs work better for servers that randomly return > >>> empty headers for mails. When an empty header is seen for an existing > >>> mail, dsync assumes that it matches the local mail. > >>> + doveadm sync/backup: Added -I parameter to skip too > >>> large mails. > >>> + doveadm sync/backup: Fixed -t parameter and added -e for "end date". > >>> + doveadm mailbox metadata: Added -s parameter to allow accessing > >>> server metadata by using empty mailbox name. > >>> > >>> - master process's listener socket was leaked to all child processes. > >>> This might have allowed untrusted processes to capture and prevent > >>> "doveadm service stop" comands from working. > >>> - auth: userdb fields weren't passed to auth-workers, so %{userdb:*} > >>> from previous userdbs didn't work there. > >>> - auth: Each userdb lookup from cache reset its TTL. > >>> - auth: Fixed auth_bind=yes + sasl_bind=yes to work together > >>> - auth: Blocking userdb lookups reset extra fields set by previous > >>> userdbs. > >>> - auth: Cache keys didn't include %{passdb:*} and %{userdb:*} > >>> - auth-policy: Fixed crash due to using already-freed memory if policy > >>> lookup takes longer than auth request exists. > >>> - lib-auth: Unescape passdb/userdb extra fields. 
Mainly affected > >>> returning extra fields with LFs or TABs. > >>> - lmtp_user_concurrency_limit>0 setting was logging unnecessary > >>> anvil errors. > >>> - lmtp_user_concurrency_limit is now checked before quota check with > >>> lmtp_rcpt_check_quota=yes to avoid unnecessary quota work. > >>> - lmtp: %{userdb:*} variables didn't work in mail_log_prefix > >>> - autoexpunge settings for mailboxes with wildcards didn't work when > >>> namespace prefix was non-empty. > >>> - Fixed writing >2GB to iostream-temp files (used by fs-compress, > >>> fs-metawrap, doveadm-http) > >>> - director: Ignore duplicates in director_servers setting. > >>> - zlib, IMAP BINARY: Fixed internal caching when accessing multiple > >>> newly created mails. They all had UID=0 and the next mail could have > >>> wrongly used the previously cached mail. > >>> - doveadm stats reset wasn't reseting all the stats. > >>> - auth_stats=yes: Don't update num_logins, since it doubles them when > >>> using with mail stats. > >>> - quota count: Fixed deadlocks when updating vsize header. > >>> - dict-quota: Fixed crashes happening due to memory corruption. > >>> - dict proxy: Fixed various timeout-related bugs. > >>> - doveadm proxying: Fixed -A and -u wildcard handling. > >>> - doveadm proxying: Fixed hangs and bugs related to printing. > >>> - imap: Fixed wrongly triggering assert-crash in > >>> client_check_command_hangs. > >>> - imap proxy: Don't send ID command pipelined with nopipelining=yes > >>> - imap-hibernate: Don't execute quota_over_script or last_login after > >>> un-hibernation. > >>> - imap-hibernate: Don't un-hibernate if client sends DONE+IDLE in one > >>> IP packet. > >>> - imap-hibernate: Fixed various failures when un-hibernating. > >>> - fts: fts_autoindex=yes was broken in 2.2.25 unless > >>> fts_autoindex_exclude settings existed. > >>> - fts-solr: Fixed searching multiple mailboxes (patch by x16a0) > >>> - doveadm fetch body.snippet wasn't working in 2.2.25. 
Also fixed a > >>> crash with certain emails. > >>> - pop3-migration + dbox: Various fixes related to POP3 UIDL > >>> optimization in 2.2.25. > >>> - pop3-migration: Fixed "truncated email header" workaround. > >> Since v2.2.25 up to v2.2.26.rc1 on Ubuntu 16.04.1 LTS Dovecot is > >> compiling successfully but "make check" is throwing out the following: > >> > >> make[2]: Leaving directory > >> '/usr/local/src/dovecot-2.2.26.rc1/src/lib-charset' > >> Making check in lib-ssl-iostream > >> make[2]: Entering directory > >> '/usr/local/src/dovecot-2.2.26.rc1/src/lib-ssl-iostream' > >> make[2]: Nothing to be done for 'check'. > >> make[2]: Leaving directory > >> '/usr/local/src/dovecot-2.2.26.rc1/src/lib-ssl-iostream' > >> Making check in lib-dcrypt > >> make[2]: Entering directory > >> '/usr/local/src/dovecot-2.2.26.rc1/src/lib-dcrypt' > >> for bin in test-crypto test-stream; do \ > >> if ! /bin/sh ../../run-test.sh ../.. ./$bin; then exit 1; fi; \ > >> done > >> test_cipher_test_vectors ............................................. : ok > >> test_cipher_aead_test_vectors ........................................ : ok > >> test_hmac_test_vectors ............................................... : ok > >> test_load_v1_keys .................................................... : ok > >> test_load_v1_key ..................................................... : ok > >> test_load_v1_public_key .............................................. : ok > >> > >> vex: the `impossible' happened: > >> isZeroU > >> vex storage: T total 586404328 bytes allocated > >> vex storage: P total 640 bytes allocated > >> > >> valgrind: the 'impossible' happened: > >> LibVEX called failure_exit(). > >> > >> host stacktrace: > >> ==20179== at 0x38083F48: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) > >> ==20179== by 0x38084064: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) > >> ==20179== by 0x380842A1: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) > >> ==20179== by 0x380842CA: ??? 
(in /usr/lib/valgrind/memcheck-amd64-linux) > >> ==20179== by 0x3809F682: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) > >> ==20179== by 0x38148008: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) > >> ==20179== by 0x3815514D: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) > >> ==20179== by 0x38159272: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) > >> ==20179== by 0x38159EA6: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) > >> ==20179== by 0x3815BD68: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) > >> ==20179== by 0x3815CDB6: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) > >> ==20179== by 0x38145DEC: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) > >> ==20179== by 0x380A1C0B: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) > >> ==20179== by 0x380D296B: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) > >> ==20179== by 0x380D45CF: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) > >> ==20179== by 0x380E3946: ??? (in /usr/lib/valgrind/memcheck-amd64-linux) > >> > >> sched status: > >> running_tid=1 > >> > >> Thread 1: status = VgTs_Runnable (lwpid 20179) > >> ==20179== at 0x5DE7E00: ??? (in /lib/x86_64-linux-gnu/libcrypto.so.1.0.0) > >> ==20179== by 0x5DC70BF: EC_POINT_mul (in > >> /lib/x86_64-linux-gnu/libcrypto.so.1.0.0) > >> ==20179== by 0x5DC5F06: EC_POINT_new (in > >> /lib/x86_64-linux-gnu/libcrypto.so.1.0.0) > >> ==20179== by 0x5823876: dcrypt_openssl_load_private_key_dovecot_v2 > >> (dcrypt-openssl.c:1196) > >> ==20179== by 0x5823876: dcrypt_openssl_load_private_key_dovecot > >> (dcrypt-openssl.c:1244) > >> ==20179== by 0x5823876: dcrypt_openssl_load_private_key > >> (dcrypt-openssl.c:1587) > >> ==20179== by 0x40FC08: test_load_v2_key (test-crypto.c:393) > >> ==20179== by 0x41120C: test_run_funcs (test-common.c:236) > >> ==20179== by 0x411B60: test_run (test-common.c:306) > >> ==20179== by 0x40A99E: main (test-crypto.c:779) > >> > >> > >> Note: see also the FAQ in the source distribution. > >> It contains workarounds to several common problems. 
> >> In particular, if Valgrind aborted or crashed after > >> identifying problems in your program, there's a good chance > >> that fixing those problems will prevent Valgrind aborting or > >> crashing, especially if it happened in m_mallocfree.c. > >> > >> If that doesn't help, please report this bug to: www.valgrind.org > >> > >> In the bug report, send all the above text, the valgrind > >> version, and what OS and version you are using. Thanks. > >> > >> Failed to run: ./test-crypto > >> Makefile:1019: recipe for target 'check-test' failed > >> make[2]: *** [check-test] Error 1 > >> make[2]: Leaving directory > >> '/usr/local/src/dovecot-2.2.26.rc1/src/lib-dcrypt' > >> Makefile:494: recipe for target 'check-recursive' failed > >> make[1]: *** [check-recursive] Error 1 > >> make[1]: Leaving directory '/usr/local/src/dovecot-2.2.26.rc1/src' > >> Makefile:620: recipe for target 'check-recursive' failed > >> make: *** [check-recursive] Error 1 > > > > This is an issue with either valgrind or openssl. See https://bugs.launchpad.net/ubuntu/+source/valgrind/+bug/1574437 > > > > Aki > > I had an error (below) during make check as well on CentOS 5 (no issues > with 6 and 7) and commented out the following 'sed' commands (the rpath removal > from this page: https://fedoraproject.org/wiki/Packaging:Guidelines ) in > the spec file, resulting in a successful build: > #sed -i 's|^hardcode_libdir_flag_spec=.*|hardcode_libdir_flag_spec=""|g' > libtool > #sed -i 's|^runpath_var=LD_RUN_PATH|runpath_var=DIE_RPATH_DIE|g' libtool > > > > make[3]: Leaving directory > `/usr/src/redhat/BUILD/dovecot-2.2.26.rc1/src/lib-storage' > for bin in test-mail-search-args-imap test-mail-search-args-simplify > test-mailbox-get; do \ > if ! /bin/sh ../../run-test.sh ../..
./$bin; then exit 1; fi; \ > done > /usr/src/redhat/BUILD/dovecot-2.2.26.rc1/src/lib-storage/.libs/lt-test-mail-search-args-imap: > symbol lookup error: > /usr/src/redhat/BUILD/dovecot-2.2.26.rc1/src/lib-storage/.libs/lt-test-mail-search-args-imap: > undefined symbol: message_search_more_get_decoded > ==12372== Invalid read of size 8 > ==12372== at 0x3EABB0B925: ??? (in /lib64/libc-2.5.so) > ==12372== by 0x3EABB0B79A: ??? (in /lib64/libc-2.5.so) > ==12372== by 0x3EABB0BDF1: ??? (in /lib64/libc-2.5.so) > ==12372== by 0x48024E8: _vgnU_freeres (vg_preloaded.c:62) > ==12372== by 0x3EAB60D33B: _dl_signal_error (in /lib64/ld-2.5.so) > ==12372== by 0x3EAB60D3D3: _dl_signal_cerror (in /lib64/ld-2.5.so) > ==12372== by 0x3EAB609CEF: _dl_lookup_symbol_x (in /lib64/ld-2.5.so) > ==12372== by 0x3EAB60A9D4: _dl_relocate_object (in /lib64/ld-2.5.so) > ==12372== by 0x3EAB603629: dl_main (in /lib64/ld-2.5.so) > ==12372== by 0x3EAB6134EA: _dl_sysdep_start (in /lib64/ld-2.5.so) > ==12372== by 0x3EAB601389: _dl_start (in /lib64/ld-2.5.so) > ==12372== by 0x3EAB600A77: ??? (in /lib64/ld-2.5.so) > ==12372== Address 0x0 is not stack'd, malloc'd or (recently) free'd > ==12372== > ==12372== > ==12372== Process terminating with default action of signal 11 (SIGSEGV) > ==12372== Access not within mapped region at address 0x0 > ==12372== at 0x3EABB0B925: ??? (in /lib64/libc-2.5.so) > ==12372== by 0x3EABB0B79A: ??? (in /lib64/libc-2.5.so) > ==12372== by 0x3EABB0BDF1: ??? 
(in /lib64/libc-2.5.so) > ==12372== by 0x48024E8: _vgnU_freeres (vg_preloaded.c:62) > ==12372== by 0x3EAB60D33B: _dl_signal_error (in /lib64/ld-2.5.so) > ==12372== by 0x3EAB60D3D3: _dl_signal_cerror (in /lib64/ld-2.5.so) > ==12372== by 0x3EAB609CEF: _dl_lookup_symbol_x (in /lib64/ld-2.5.so) > ==12372== by 0x3EAB60A9D4: _dl_relocate_object (in /lib64/ld-2.5.so) > ==12372== by 0x3EAB603629: dl_main (in /lib64/ld-2.5.so) > ==12372== by 0x3EAB6134EA: _dl_sysdep_start (in /lib64/ld-2.5.so) > ==12372== by 0x3EAB601389: _dl_start (in /lib64/ld-2.5.so) > ==12372== by 0x3EAB600A77: ??? (in /lib64/ld-2.5.so) > ==12372== If you believe this happened as a result of a stack > ==12372== overflow in your program's main thread (unlikely but > ==12372== possible), you can try to increase the size of the > ==12372== main thread stack using the --main-stacksize= flag. > ==12372== The main thread stack size used in this run was 10485760. > Failed to run: ./test-mail-search-args-imap > make[2]: *** [check-test] Error 1 > make[2]: Leaving directory > `/usr/src/redhat/BUILD/dovecot-2.2.26.rc1/src/lib-storage' > make[1]: *** [check-recursive] Error 1 > make[1]: Leaving directory `/usr/src/redhat/BUILD/dovecot-2.2.26.rc1/src' > make: *** [check-recursive] Error 1 > error: Bad exit status from /var/tmp/rpm-tmp.69652 (%check) > > > RPM build errors: > Bad exit status from /var/tmp/rpm-tmp.69652 (%check) > > We have dropped support for CentOS5. 
Aki Tuomi Dovecot oy From matthew.broadhead at nbmlaw.co.uk Fri Oct 21 08:22:20 2016 From: matthew.broadhead at nbmlaw.co.uk (Matthew Broadhead) Date: Fri, 21 Oct 2016 10:22:20 +0200 Subject: sieve sending vacation message from vmail@ns1.domain.tld In-Reply-To: <4aa89a3c-937f-a1e6-3871-1df196ac7af2@rename-it.nl> References: <71b362e8-3a69-076d-6376-2f3bbd39d0eb@nbmlaw.co.uk> <94941225-09d0-1440-1733-3884cc6dcd67@rename-it.nl> <7cdadba3-fd03-7d8c-1235-b428018a081c@nbmlaw.co.uk> <55712b3a-4812-f0a6-c9f9-59efcdac79f7@rename-it.nl> <8260ce16-bc94-e3a9-13d1-f1204e6ae525@rename-it.nl> <344d3d36-b905-5a90-e0ea-17d556076838@nbmlaw.co.uk> <9b47cb74-0aa7-4851-11f0-5a367341a63b@nbmlaw.co.uk> <4aa89a3c-937f-a1e6-3871-1df196ac7af2@rename-it.nl> Message-ID: the server is using CentOS 7 and that is the package that comes through yum. everything is up to date. i am hesitant to install a new package manually as that could cause other compatibility issues. is there another way to test the configuration on the server? On 21/10/2016 01:07, Stephan Bosch wrote: > On 10/20/2016 at 7:38 PM, Matthew Broadhead wrote: >> do i need to provide more information? >> > It still doesn't make sense to me. I do notice that the version you're > using is ancient (dated 26-09-2013), which may well be the problem. > > Do you have the ability to upgrade? > > Regards, > > Stephan.
> >> On 19/10/2016 14:49, Matthew Broadhead wrote: >>> /var/log/maillog showed this >>> Oct 19 13:25:41 ns1 postfix/smtpd[1298]: 7599A2C19C6: >>> client=unknown[127.0.0.1] >>> Oct 19 13:25:41 ns1 postfix/cleanup[1085]: 7599A2C19C6: >>> message-id= >>> Oct 19 13:25:41 ns1 postfix/qmgr[1059]: 7599A2C19C6: >>> from=, size=3190, nrcpt=1 (queue active) >>> Oct 19 13:25:41 ns1 amavis[32367]: (32367-17) Passed CLEAN >>> {RelayedInternal}, ORIGINATING LOCAL [80.30.255.180]:54566 >>> [80.30.255.180] -> >>> , Queue-ID: BFFA62C1965, Message-ID: >>> , mail_id: >>> TlJQ9xQhWjQk, Hits: -2.9, size: 2235, queued_as: 7599A2C19C6, >>> dkim_new=foo:nbmlaw.co.uk, 531 ms >>> Oct 19 13:25:41 ns1 postfix/smtp[1135]: BFFA62C1965: >>> to=, relay=127.0.0.1[127.0.0.1]:10026, >>> delay=0.76, delays=0.22/0/0/0.53, dsn=2.0.0, status=sent (250 2.0.0 >>> from MTA(smtp:[127.0.0.1]:10027): 250 2.0.0 Ok: queued as 7599A2C19C6) >>> Oct 19 13:25:41 ns1 postfix/qmgr[1059]: BFFA62C1965: removed >>> Oct 19 13:25:41 ns1 postfix/smtpd[1114]: connect from >>> ns1.nbmlaw.co.uk[217.174.253.19] >>> Oct 19 13:25:41 ns1 postfix/smtpd[1114]: NOQUEUE: filter: RCPT from >>> ns1.nbmlaw.co.uk[217.174.253.19]: : Sender >>> address triggers FILTER smtp-amavis:[127.0.0.1]:10026; >>> from= to= >>> proto=SMTP helo= >>> Oct 19 13:25:41 ns1 postfix/smtpd[1114]: 8A03F2C1965: >>> client=ns1.nbmlaw.co.uk[217.174.253.19] >>> Oct 19 13:25:41 ns1 postfix/cleanup[1085]: 8A03F2C1965: >>> message-id= >>> Oct 19 13:25:41 ns1 opendmarc[2430]: implicit authentication service: >>> ns1.nbmlaw.co.uk >>> Oct 19 13:25:41 ns1 opendmarc[2430]: 8A03F2C1965: ns1.nbmlaw.co.uk fail >>> Oct 19 13:25:41 ns1 postfix/qmgr[1059]: 8A03F2C1965: >>> from=, size=1077, nrcpt=1 (queue active) >>> Oct 19 13:25:41 ns1 postfix/smtpd[1114]: disconnect from >>> ns1.nbmlaw.co.uk[217.174.253.19] >>> Oct 19 13:25:41 ns1 sSMTP[1895]: Sent mail for vmail at ns1.nbmlaw.co.uk >>> (221 2.0.0 Bye) uid=996 username=vmail outbytes=971 >>> Oct 19 13:25:41 ns1 
postfix/smtpd[1898]: connect from unknown[127.0.0.1] >>> Oct 19 13:25:41 ns1 postfix/pipe[1162]: 7599A2C19C6: >>> to=, relay=dovecot, delay=0.46, >>> delays=0/0/0/0.45, dsn=2.0.0, status=sent (delivered via dovecot >>> service) >>> Oct 19 13:25:41 ns1 postfix/qmgr[1059]: 7599A2C19C6: removed >>> Oct 19 13:25:41 ns1 postfix/smtpd[1898]: E53472C19C6: >>> client=unknown[127.0.0.1] >>> Oct 19 13:25:41 ns1 postfix/cleanup[1085]: E53472C19C6: >>> message-id= >>> Oct 19 13:25:41 ns1 postfix/qmgr[1059]: E53472C19C6: >>> from=, size=1619, nrcpt=1 (queue active) >>> Oct 19 13:25:41 ns1 amavis[1885]: (01885-01) Passed CLEAN >>> {RelayedInternal}, ORIGINATING LOCAL [217.174.253.19]:40960 >>> [217.174.253.19] -> >>> , Queue-ID: 8A03F2C1965, Message-ID: >>> , mail_id: >>> mOMO97yjVqjM, Hits: -2.211, size: 1301, queued_as: E53472C19C6, 296 ms >>> Oct 19 13:25:41 ns1 postfix/smtp[1217]: 8A03F2C1965: >>> to=, >>> relay=127.0.0.1[127.0.0.1]:10026, delay=0.38, delays=0.08/0/0/0.29, >>> dsn=2.0.0, status=sent (250 2.0.0 from MTA(smtp:[127.0.0.1]:10027): >>> 250 2.0.0 Ok: queued as E53472C19C6) >>> Oct 19 13:25:41 ns1 postfix/qmgr[1059]: 8A03F2C1965: removed >>> Oct 19 13:25:42 ns1 postfix/pipe[1303]: E53472C19C6: >>> to=, relay=dovecot, delay=0.14, >>> delays=0/0/0/0.14, dsn=2.0.0, status=sent (delivered via dovecot >>> service) >>> Oct 19 13:25:42 ns1 postfix/qmgr[1059]: E53472C19C6: removed >>> >>> On 19/10/2016 13:54, Stephan Bosch wrote: >>>> >>>> Op 19-10-2016 om 13:47 schreef Matthew Broadhead: >>>>> i am not 100% sure how to give you the information you require. >>>>> >>>>> my current setup in /etc/postfix/master.cf is >>>>> flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d >>>>> ${recipient} >>>>> so recipient would presumably be user at domain.tld? or do you want >>>>> the real email address of one of our users? is there some way i >>>>> can output this information directly e.g. in logs? >>>> I am no Postfix expert. 
I just need to know which values are being >>>> passed to dovecot-lda with what options. I'd assume Postfix allows >>>> logging the command line or at least the values of these variables. >>>> >>>>> the incoming email message could be anything? again i can run an >>>>> example directly if you can advise the best way to do this >>>> As long as the problem occurs with this message. >>>> >>>> BTW, it would also be helpful to have the Dovecot logs from this >>>> delivery, with mail_debug configured to "yes". >>>> >>>> Regards, >>>> >>>> Stephan. >>>> >>>>> On 19/10/2016 12:54, Stephan Bosch wrote: >>>>>> Also, please provide an example scenario; i.e., for one >>>>>> problematic delivery provide: >>>>>> >>>>>> - The values of the variables substituted in the dovecot-lda >>>>>> command line; i.e., provide that command line. >>>>>> - The incoming e-mail message. >>>>>> >>>>>> Regards, >>>>>> >>>>>> Stephan. >>>>>> >>>>>> Op 19-10-2016 om 12:43 schreef Matthew Broadhead: >>>>>>> dovecot is configured by sentora control panel to a certain >>>>>>> extent. 
if you want those configs i can send them as well >>>>>>> >>>>>>> dovecot -n >>>>>>> >>>>>>> debug_log_path = /var/log/dovecot-debug.log >>>>>>> dict { >>>>>>> quotadict = >>>>>>> mysql:/etc/sentora/configs/dovecot2/dovecot-dict-quota.conf >>>>>>> } >>>>>>> disable_plaintext_auth = no >>>>>>> first_valid_gid = 12 >>>>>>> first_valid_uid = 996 >>>>>>> info_log_path = /var/log/dovecot-info.log >>>>>>> lda_mailbox_autocreate = yes >>>>>>> lda_mailbox_autosubscribe = yes >>>>>>> listen = * >>>>>>> lmtp_save_to_detail_mailbox = yes >>>>>>> log_path = /var/log/dovecot.log >>>>>>> log_timestamp = %Y-%m-%d %H:%M:%S >>>>>>> mail_fsync = never >>>>>>> mail_location = maildir:/var/sentora/vmail/%d/%n >>>>>>> managesieve_notify_capability = mailto >>>>>>> managesieve_sieve_capability = fileinto reject envelope >>>>>>> encoded-character vacation subaddress comparator-i;ascii-numeric >>>>>>> relational regex imap4flags copy include variables body enotify >>>>>>> environment mailbox date ihave >>>>>>> passdb { >>>>>>> args = /etc/sentora/configs/dovecot2/dovecot-mysql.conf >>>>>>> driver = sql >>>>>>> } >>>>>>> plugin { >>>>>>> acl = vfile:/etc/dovecot/acls >>>>>>> quota = maildir:User quota >>>>>>> sieve = ~/dovecot.sieve >>>>>>> sieve_dir = ~/sieve >>>>>>> sieve_global_dir = /var/sentora/sieve/ >>>>>>> sieve_global_path = /var/sentora/sieve/globalfilter.sieve >>>>>>> sieve_max_script_size = 1M >>>>>>> sieve_vacation_send_from_recipient = yes >>>>>>> trash = /etc/sentora/configs/dovecot2/dovecot-trash.conf >>>>>>> } >>>>>>> protocols = imap pop3 lmtp sieve >>>>>>> service auth { >>>>>>> unix_listener /var/spool/postfix/private/auth { >>>>>>> group = postfix >>>>>>> mode = 0666 >>>>>>> user = postfix >>>>>>> } >>>>>>> unix_listener auth-userdb { >>>>>>> group = mail >>>>>>> mode = 0666 >>>>>>> user = vmail >>>>>>> } >>>>>>> } >>>>>>> service dict { >>>>>>> unix_listener dict { >>>>>>> group = mail >>>>>>> mode = 0666 >>>>>>> user = vmail >>>>>>> } >>>>>>> } >>>>>>> service 
imap-login { >>>>>>> inet_listener imap { >>>>>>> port = 143 >>>>>>> } >>>>>>> process_limit = 500 >>>>>>> process_min_avail = 2 >>>>>>> } >>>>>>> service imap { >>>>>>> vsz_limit = 256 M >>>>>>> } >>>>>>> service managesieve-login { >>>>>>> inet_listener sieve { >>>>>>> port = 4190 >>>>>>> } >>>>>>> process_min_avail = 0 >>>>>>> service_count = 1 >>>>>>> vsz_limit = 64 M >>>>>>> } >>>>>>> service pop3-login { >>>>>>> inet_listener pop3 { >>>>>>> port = 110 >>>>>>> } >>>>>>> } >>>>>>> ssl_cert = >>>>>> ssl_key = >>>>>> ssl_protocols = !SSLv2 !SSLv3 >>>>>>> userdb { >>>>>>> driver = prefetch >>>>>>> } >>>>>>> userdb { >>>>>>> args = /etc/sentora/configs/dovecot2/dovecot-mysql.conf >>>>>>> driver = sql >>>>>>> } >>>>>>> protocol lda { >>>>>>> mail_fsync = optimized >>>>>>> mail_plugins = quota sieve >>>>>>> postmaster_address = postmaster at ns1.nbmlaw.co.uk >>>>>>> } >>>>>>> protocol imap { >>>>>>> imap_client_workarounds = delay-newmail >>>>>>> mail_fsync = optimized >>>>>>> mail_max_userip_connections = 60 >>>>>>> mail_plugins = quota imap_quota trash >>>>>>> } >>>>>>> protocol lmtp { >>>>>>> mail_plugins = quota sieve >>>>>>> } >>>>>>> protocol pop3 { >>>>>>> mail_plugins = quota >>>>>>> pop3_client_workarounds = outlook-no-nuls oe-ns-eoh >>>>>>> pop3_uidl_format = %08Xu%08Xv >>>>>>> } >>>>>>> protocol sieve { >>>>>>> managesieve_implementation_string = Dovecot Pigeonhole >>>>>>> managesieve_max_compile_errors = 5 >>>>>>> managesieve_max_line_length = 65536 >>>>>>> } >>>>>>> >>>>>>> managesieve.sieve >>>>>>> >>>>>>> require ["fileinto","vacation"]; >>>>>>> # rule:[vacation] >>>>>>> if true >>>>>>> { >>>>>>> vacation :days 1 :subject "Vacation subject" text: >>>>>>> i am currently out of the office >>>>>>> >>>>>>> trying some line breaks >>>>>>> >>>>>>> ...zzz >>>>>>> . >>>>>>> ; >>>>>>> } >>>>>>> >>>>>>> On 19/10/2016 12:29, Stephan Bosch wrote: >>>>>>>> Could you send your configuration (output from `dovecot -n`)? 
>>>>>>>> >>>>>>>> Also, please provide an example scenario; i.e., for one >>>>>>>> problematic delivery provide: >>>>>>>> >>>>>>>> - The values of the variables substituted below. >>>>>>>> >>>>>>>> - The incoming e-mail message. >>>>>>>> >>>>>>>> - The Sieve script (or at least that vacation command). >>>>>>>> >>>>>>>> Regards, >>>>>>>> >>>>>>>> >>>>>>>> Stephan. >>>>>>>> >>>>>>>> Op 19-10-2016 om 11:42 schreef Matthew Broadhead: >>>>>>>>> hi, does anyone have any ideas about this issue? i have not >>>>>>>>> had any response yet >>>>>>>>> >>>>>>>>> i tried changing /etc/postfix/master.cf line: >>>>>>>>> dovecot unix - n n - - pipe >>>>>>>>> flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d >>>>>>>>> ${recipient} >>>>>>>>> >>>>>>>>> to >>>>>>>>> flags=DRhu user=vmail:mail >>>>>>>>> argv=/usr/libexec/dovecot/dovecot-lda -f ${sender} -d >>>>>>>>> ${user}@${nexthop} -a ${original_recipient} >>>>>>>>> >>>>>>>>> and >>>>>>>>> -d ${user}@${domain} -a {recipient} -f ${sender} -m ${extension} >>>>>>>>> >>>>>>>>> but it didn't work >>>>>>>>> >>>>>>>>> On 12/10/2016 13:57, Matthew Broadhead wrote: >>>>>>>>>> I have a server running >>>>>>>>>> centos-release-7-2.1511.el7.centos.2.10.x86_64 with dovecot >>>>>>>>>> version 2.2.10. I am also using roundcube for webmail. when a >>>>>>>>>> vacation filter (reply with message) is created in roundcube >>>>>>>>>> it adds a rule to managesieve.sieve in the user's mailbox. >>>>>>>>>> everything works fine except the reply comes from >>>>>>>>>> vmail at ns1.domain.tld instead of user at domain.tld. >>>>>>>>>> ns1.domain.tld is the fully qualified name of the server. >>>>>>>>>> >>>>>>>>>> it used to work fine on my old CentOS 6 server so I am not >>>>>>>>>> sure what has changed. Can anyone point me in the direction >>>>>>>>>> of where I can configure this behaviour? From mpeters at domblogger.net Fri Oct 21 09:25:31 2016 From: mpeters at domblogger.net (Michael A. 
Peters) Date: Fri, 21 Oct 2016 02:25:31 -0700 Subject: v2.2.26 release candidate released In-Reply-To: References: Message-ID: <6880d0e5-4293-dac4-fe43-2be7d723d61c@domblogger.net> On 10/19/2016 02:01 PM, Timo Sirainen wrote: > http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz > http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz.sig > > There are quite a lot of changes since v2.2.25. Please try out this RC so we can get a good and stable v2.2.26 out. I am not able to test it but I can verify that it compiles on CentOS against LibreSSL 2.4.3 From mpeters at domblogger.net Fri Oct 21 09:26:41 2016 From: mpeters at domblogger.net (Michael A. Peters) Date: Fri, 21 Oct 2016 02:26:41 -0700 Subject: v2.2.26 release candidate released In-Reply-To: <6880d0e5-4293-dac4-fe43-2be7d723d61c@domblogger.net> References: <6880d0e5-4293-dac4-fe43-2be7d723d61c@domblogger.net> Message-ID: <902d6ce9-d5d6-7113-e186-5f328998f388@domblogger.net> On 10/21/2016 02:25 AM, Michael A. Peters wrote: > On 10/19/2016 02:01 PM, Timo Sirainen wrote: >> http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz >> http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz.sig >> >> There are quite a lot of changes since v2.2.25. Please try out this RC >> so we can get a good and stable v2.2.26 out. 
> > I am not able to test it but I can verify that it compiles on CentOS > against LibreSSL 2.4.3 On CentOS 7 From skdovecot at smail.inf.fh-brs.de Fri Oct 21 10:08:57 2016 From: skdovecot at smail.inf.fh-brs.de (Steffen Kaiser) Date: Fri, 21 Oct 2016 12:08:57 +0200 (CEST) Subject: Migrating users from a 2.0.19 to a 2.2.24 installation In-Reply-To: <4826F752-1241-4255-A1FF-F7B4B1D1240F@rna.nl> References: <4826F752-1241-4255-A1FF-F7B4B1D1240F@rna.nl> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Thu, 20 Oct 2016, Gerben Wierda wrote: > I am currently still running an older dovecot (2.0.19apple1 on Mac OS X > 10.8.5) and I want to migrate my users to a new server (macOS 10.12 with > Server 5, which contains dovecot 2.2.24 (a82c823)). > > Basically, I want to create a new server installation on the new server > so I don't bring any junk over (new user accounts, with the same uid/gid; > still need to figure that one out), but after I have done that I need > to move the data over from the old installation to the new. > > Has anything changed in the formats between 2.0 and 2.2 that will stop me from doing this? The index files etc. will be updated on the fly. You should check that your mailbox storage format is still supported. Check out http://wiki2.dovecot.org/Upgrading I have moved Maildir with Sieve with no trouble.
- -- Steffen Kaiser -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEVAwUBWAnpOXz1H7kL/d9rAQJf1Qf/coQ9550WukxX/bAivbdW129vDk5DfvRv /JvOequE9R4Vc8ylxA0WFVnQ1cc2hPHNw4ZDiYerypoj9DOA78HKa/xpHPADuSSh U8yEkaVR1bszrheR1CzbN2e3ghfR+dJQ0PTfJzoH8jNvaDWESS5CYAQksNyxEuEB iQZXzCBJmPlTFySxCeVyIiot65a6qyR/S6otF80xqDDrexXOMo7KKwyXtM/UtNZA aUZHS8YbNyta4fnQW73Mg7R36K9enDAaP5xFpSNJ4b8E64xdH2PQ51FG8ZsyUV5s Yp1d7owBjULj/QWyPSX3T9Yy4UkFaMCSBXgHYribVdZAP/jvGzBJbg== =q079 -----END PGP SIGNATURE----- From pierre at jaury.eu Fri Oct 21 11:16:20 2016 From: pierre at jaury.eu (Pierre Jaury) Date: Fri, 21 Oct 2016 13:16:20 +0200 Subject: v2.2.26 release candidate released In-Reply-To: References: Message-ID: <5304703a-d112-01ec-ec6d-9a9ef3d714ea@jaury.eu> Hi, Do you plan on including the following patch in the final release? http://www.dovecot.org/list/dovecot/2016-October/105734.html I have some tests in place and can backport the patch if necessary. Regards, On 10/19/2016 11:01 PM, Timo Sirainen wrote: > http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz > http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz.sig > > There are quite a lot of changes since v2.2.25. Please try out this RC so we can get a good and stable v2.2.26 out. > > * master: Removed hardcoded 511 backlog limit for listen(). The kernel > should limit this as needed. > * doveadm import: Source user is now initialized the same as target > user. Added -U parameter to override the source user. > * Mailbox names are no longer limited to 16 hierarchy levels. We'll > check another way to make sure mailbox names can't grow larger than > 4096 bytes. > > + Added a concept of "alternative usernames" by returning user_* extra > field(s) in passdb. doveadm proxy list shows these alt usernames in > "doveadm proxy list" output. "doveadm director&proxy kick" adds > -f parameter. The alt usernames don't have to be > unique, so this allows creation of user groups and kicking them in > one command.
> + auth: passdb/userdb dict allows now %variables in key settings. > + auth: If passdb returns noauthenticate=yes extra field, assume that > it only set extra fields and authentication wasn't actually performed. > + auth: passdb static now supports password={scheme} prefix. > + imapc: Added imapc_max_line_length to limit maximum memory usage. > + imap, pop3: Added rawlog_dir setting to store IMAP/POP3 traffic logs. > This replaces at least partially the rawlog plugin. > + dsync: Added dsync_features=empty-header-workaround setting. This > makes incremental dsyncs work better for servers that randomly return > empty headers for mails. When an empty header is seen for an existing > mail, dsync assumes that it matches the local mail. > + doveadm sync/backup: Added -I parameter to skip too > large mails. > + doveadm sync/backup: Fixed -t parameter and added -e for "end date". > + doveadm mailbox metadata: Added -s parameter to allow accessing > server metadata by using empty mailbox name. > > - master process's listener socket was leaked to all child processes. > This might have allowed untrusted processes to capture and prevent > "doveadm service stop" commands from working. > - auth: userdb fields weren't passed to auth-workers, so %{userdb:*} > from previous userdbs didn't work there. > - auth: Each userdb lookup from cache reset its TTL. > - auth: Fixed auth_bind=yes + sasl_bind=yes to work together > - auth: Blocking userdb lookups reset extra fields set by previous > userdbs. > - auth: Cache keys didn't include %{passdb:*} and %{userdb:*} > - auth-policy: Fixed crash due to using already-freed memory if policy > lookup takes longer than auth request exists. > - lib-auth: Unescape passdb/userdb extra fields. Mainly affected > returning extra fields with LFs or TABs. > - lmtp_user_concurrency_limit>0 setting was logging unnecessary > anvil errors.
> - lmtp_user_concurrency_limit is now checked before quota check with > lmtp_rcpt_check_quota=yes to avoid unnecessary quota work. > - lmtp: %{userdb:*} variables didn't work in mail_log_prefix > - autoexpunge settings for mailboxes with wildcards didn't work when > namespace prefix was non-empty. > - Fixed writing >2GB to iostream-temp files (used by fs-compress, > fs-metawrap, doveadm-http) > - director: Ignore duplicates in director_servers setting. > - zlib, IMAP BINARY: Fixed internal caching when accessing multiple > newly created mails. They all had UID=0 and the next mail could have > wrongly used the previously cached mail. > - doveadm stats reset wasn't resetting all the stats. > - auth_stats=yes: Don't update num_logins, since it doubles them when > using with mail stats. > - quota count: Fixed deadlocks when updating vsize header. > - dict-quota: Fixed crashes happening due to memory corruption. > - dict proxy: Fixed various timeout-related bugs. > - doveadm proxying: Fixed -A and -u wildcard handling. > - doveadm proxying: Fixed hangs and bugs related to printing. > - imap: Fixed wrongly triggering assert-crash in > client_check_command_hangs. > - imap proxy: Don't send ID command pipelined with nopipelining=yes > - imap-hibernate: Don't execute quota_over_script or last_login after > un-hibernation. > - imap-hibernate: Don't un-hibernate if client sends DONE+IDLE in one > IP packet. > - imap-hibernate: Fixed various failures when un-hibernating. > - fts: fts_autoindex=yes was broken in 2.2.25 unless > fts_autoindex_exclude settings existed. > - fts-solr: Fixed searching multiple mailboxes (patch by x16a0) > - doveadm fetch body.snippet wasn't working in 2.2.25. Also fixed a > crash with certain emails. > - pop3-migration + dbox: Various fixes related to POP3 UIDL > optimization in 2.2.25. > - pop3-migration: Fixed "truncated email header" workaround.
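[Editor's note: to illustrate the "passdb static now supports password={scheme} prefix" item in the changelog above, a minimal configuration sketch might look like the following. The password value and the choice of PLAIN scheme are hypothetical, not taken from the release notes.]

```conf
# Sketch only: a static passdb whose stored password declares its scheme
# explicitly via the {scheme} prefix introduced in v2.2.26.
passdb {
  driver = static
  # {PLAIN} names the scheme; other supported schemes (e.g. {SHA512-CRYPT})
  # should work the same way, with the hash following the prefix.
  args = password={PLAIN}secretpass
}
```

Presumably this lets a static passdb carry a hashed value rather than being limited to an implied plaintext password.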
> From aki.tuomi at dovecot.fi Fri Oct 21 11:18:14 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Fri, 21 Oct 2016 14:18:14 +0300 Subject: v2.2.26 release candidate released In-Reply-To: <5304703a-d112-01ec-ec6d-9a9ef3d714ea@jaury.eu> References: <5304703a-d112-01ec-ec6d-9a9ef3d714ea@jaury.eu> Message-ID: <869a3920-fb7f-1659-194f-d5afcb92a259@dovecot.fi> Hi! This is included in the rc already, though it is not mentioned in the changelog. Aki On 21.10.2016 14:16, Pierre Jaury wrote: > Hi, > > Do you plan on including the following patch in the final release? > > http://www.dovecot.org/list/dovecot/2016-October/105734.html > > I have some tests in place and can backport the patch if necessary. > > Regards, > > On 10/19/2016 11:01 PM, Timo Sirainen wrote: >> http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz >> http://dovecot.org/releases/2.2/rc/dovecot-2.2.26.rc1.tar.gz.sig >> >> There are quite a lot of changes since v2.2.25. Please try out this RC so we can get a good and stable v2.2.26 out. >> >> * master: Removed hardcoded 511 backlog limit for listen(). The kernel >> should limit this as needed. >> * doveadm import: Source user is now initialized the same as target >> user. Added -U parameter to override the source user. >> * Mailbox names are no longer limited to 16 hierarchy levels. We'll >> check another way to make sure mailbox names can't grow larger than >> 4096 bytes. >> >> + Added a concept of "alternative usernames" by returning user_* extra >> field(s) in passdb. doveadm proxy list shows these alt usernames in >> "doveadm proxy list" output. "doveadm director&proxy kick" adds >> -f parameter. The alt usernames don't have to be >> unique, so this allows creation of user groups and kicking them in >> one command. >> + auth: passdb/userdb dict allows now %variables in key settings. >> + auth: If passdb returns noauthenticate=yes extra field, assume that >> it only set extra fields and authentication wasn't actually performed.
>> + auth: passdb static now supports password={scheme} prefix. >> + imapc: Added imapc_max_line_length to limit maximum memory usage. >> + imap, pop3: Added rawlog_dir setting to store IMAP/POP3 traffic logs. >> This replaces at least partially the rawlog plugin. >> + dsync: Added dsync_features=empty-header-workaround setting. This >> makes incremental dsyncs work better for servers that randomly return >> empty headers for mails. When an empty header is seen for an existing >> mail, dsync assumes that it matches the local mail. >> + doveadm sync/backup: Added -I parameter to skip too >> large mails. >> + doveadm sync/backup: Fixed -t parameter and added -e for "end date". >> + doveadm mailbox metadata: Added -s parameter to allow accessing >> server metadata by using empty mailbox name. >> >> - master process's listener socket was leaked to all child processes. >> This might have allowed untrusted processes to capture and prevent >> "doveadm service stop" commands from working. >> - auth: userdb fields weren't passed to auth-workers, so %{userdb:*} >> from previous userdbs didn't work there. >> - auth: Each userdb lookup from cache reset its TTL. >> - auth: Fixed auth_bind=yes + sasl_bind=yes to work together >> - auth: Blocking userdb lookups reset extra fields set by previous >> userdbs. >> - auth: Cache keys didn't include %{passdb:*} and %{userdb:*} >> - auth-policy: Fixed crash due to using already-freed memory if policy >> lookup takes longer than auth request exists. >> - lib-auth: Unescape passdb/userdb extra fields. Mainly affected >> returning extra fields with LFs or TABs. >> - lmtp_user_concurrency_limit>0 setting was logging unnecessary >> anvil errors. >> - lmtp_user_concurrency_limit is now checked before quota check with >> lmtp_rcpt_check_quota=yes to avoid unnecessary quota work.
>> - lmtp: %{userdb:*} variables didn't work in mail_log_prefix >> - autoexpunge settings for mailboxes with wildcards didn't work when >> namespace prefix was non-empty. >> - Fixed writing >2GB to iostream-temp files (used by fs-compress, >> fs-metawrap, doveadm-http) >> - director: Ignore duplicates in director_servers setting. >> - zlib, IMAP BINARY: Fixed internal caching when accessing multiple >> newly created mails. They all had UID=0 and the next mail could have >> wrongly used the previously cached mail. >> - doveadm stats reset wasn't resetting all the stats. >> - auth_stats=yes: Don't update num_logins, since it doubles them when >> using with mail stats. >> - quota count: Fixed deadlocks when updating vsize header. >> - dict-quota: Fixed crashes happening due to memory corruption. >> - dict proxy: Fixed various timeout-related bugs. >> - doveadm proxying: Fixed -A and -u wildcard handling. >> - doveadm proxying: Fixed hangs and bugs related to printing. >> - imap: Fixed wrongly triggering assert-crash in >> client_check_command_hangs. >> - imap proxy: Don't send ID command pipelined with nopipelining=yes >> - imap-hibernate: Don't execute quota_over_script or last_login after >> un-hibernation. >> - imap-hibernate: Don't un-hibernate if client sends DONE+IDLE in one >> IP packet. >> - imap-hibernate: Fixed various failures when un-hibernating. >> - fts: fts_autoindex=yes was broken in 2.2.25 unless >> fts_autoindex_exclude settings existed. >> - fts-solr: Fixed searching multiple mailboxes (patch by x16a0) >> - doveadm fetch body.snippet wasn't working in 2.2.25. Also fixed a >> crash with certain emails. >> - pop3-migration + dbox: Various fixes related to POP3 UIDL >> optimization in 2.2.25. >> - pop3-migration: Fixed "truncated email header" workaround.
>> From stephan at rename-it.nl Fri Oct 21 17:05:32 2016 From: stephan at rename-it.nl (Stephan Bosch) Date: Fri, 21 Oct 2016 19:05:32 +0200 Subject: Released Pigeonhole v0.4.16.rc1 for Dovecot v2.2.26.rc1. Message-ID: <765ece6b-a8d2-731f-466b-23b4cac0d81c@rename-it.nl> Hello Dovecot users, The upcoming release includes quite a few new features for a change. The most important ones are the Sieve discard-script feature and the new vnd.dovecot.config environment items. Documentation for these features can be found in the package itself. Please test these thoroughly. Changelog v0.4.16: * Part of the Sieve extprograms implementation was moved to Dovecot, which means that this release depends on Dovecot v2.2.26+. * ManageSieve: The PUTSCRIPT command now allows uploading empty Sieve scripts. There was really no good reason to disallow doing that. + Sieve vnd.dovecot.report extension: + Added a Dovecot-Reporting-User field to the report body, which contains the e-mail address of the user sending the report. + Added support for configuring the "From:" address used in the report. + LDA sieve plugin: Implemented support for a "discard script" that is run when the message is going to be discarded. This allows doing something other than throwing the message away for good. + Sieve vnd.dovecot.environment extension: Added vnd.dovecot.config.* environment items. These environment items map to sieve_env_* settings from the plugin {} section in the configuration. Such values can of course also be returned from userdb. + Sieve vacation extension: Use the Microsoft X-Auto-Response-Suppress header to prevent unwanted responses from and to (older) Microsoft products. + ManageSieve: Added rawlog_dir setting to store ManageSieve traffic logs. This replaces at least partially the rawlog plugin (mimics similar IMAP/POP3 change). - doveadm sieve plugin: synchronization: Prevent setting file timestamps to unix epoch time. 
This occurred when Dovecot passed the timestamp as 'unknown' during synchronization. - Sieve extprograms plugin: Fixed spurious '+' sometimes returned at the end of socket-based program output. - imapsieve plugin: Fixed crash occurring in specific situations. - Performed various fixes based on static analysis and Clang warnings. The release is available as follows: http://pigeonhole.dovecot.org/releases/2.2/rc/dovecot-2.2-pigeonhole-0.4.16.rc1.tar.gz http://pigeonhole.dovecot.org/releases/2.2/rc/dovecot-2.2-pigeonhole-0.4.16.rc1.tar.gz.sig Refer to http://pigeonhole.dovecot.org and the Dovecot v2.x wiki for more information. Have fun testing this release candidate and don't hesitate to notify me when there are any problems. Regards, -- Stephan Bosch stephan at rename-it.nl From larryrtx at gmail.com Fri Oct 21 17:06:35 2016 From: larryrtx at gmail.com (Larry Rosenman) Date: Fri, 21 Oct 2016 12:06:35 -0500 Subject: keent() from Tika - with doveadm Message-ID: getting the following: Oct 21, 2016 12:04:25 PM org.apache.tika.server.resource.TikaResource logRequest INFO: tika/ (application/vnd.openxmlformats-officedocument.wordprocessingml.document) doveadm(ctr): Debug: http-client: conn 127.0.0.1:9998 [1]: Got 200 response for request [Req69: PUT http://localhost:9998/tika/] (took 91 ms + 210 ms in queue) doveadm(ctr): Panic: kevent(): Invalid argument Abort trap (core dumped) if I turn off tika, I do NOT get it. 2.2.26-RC1 what else do you need? 
-- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 From aki.tuomi at dovecot.fi Fri Oct 21 17:17:37 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Fri, 21 Oct 2016 20:17:37 +0300 (EEST) Subject: keent() from Tika - with doveadm In-Reply-To: References: Message-ID: <228601064.3453.1477070258416@appsuite-dev.open-xchange.com> > On October 21, 2016 at 8:06 PM Larry Rosenman wrote: > > > getting the following: > > Oct 21, 2016 12:04:25 PM org.apache.tika.server.resource.TikaResource > logRequest > INFO: tika/ > (application/vnd.openxmlformats-officedocument.wordprocessingml.document) > doveadm(ctr): Debug: http-client: conn 127.0.0.1:9998 [1]: Got 200 response > for request [Req69: PUT http://localhost:9998/tika/] (took 91 ms + 210 ms > in queue) > doveadm(ctr): Panic: kevent(): Invalid argument > Abort trap (core dumped) > > if I turn off tika, I do NOT get it. > > 2.2.26-RC1 > > what else do you need? > > > -- > Larry Rosenman http://www.lerctr.org/~ler > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 Any hope for the exact request? Aki Tuomi Dovecot Oy From larryrtx at gmail.com Fri Oct 21 17:27:39 2016 From: larryrtx at gmail.com (Larry Rosenman) Date: Fri, 21 Oct 2016 12:27:39 -0500 Subject: keent() from Tika - with doveadm In-Reply-To: <228601064.3453.1477070258416@appsuite-dev.open-xchange.com> References: <228601064.3453.1477070258416@appsuite-dev.open-xchange.com> Message-ID: Unfortunately it doesn't seem to log that, and it's not 100% consistent. 
I did catch one, but the log file is huge so it's at: http://www.lerctr.org/~ler/Dovecot/doveadm0-tika On Fri, Oct 21, 2016 at 12:17 PM, Aki Tuomi wrote: > > > On October 21, 2016 at 8:06 PM Larry Rosenman > wrote: > > > > > > getting the following: > > > > Oct 21, 2016 12:04:25 PM org.apache.tika.server.resource.TikaResource > > logRequest > > INFO: tika/ > > (application/vnd.openxmlformats-officedocument. > wordprocessingml.document) > > doveadm(ctr): Debug: http-client: conn 127.0.0.1:9998 [1]: Got 200 > response > > for request [Req69: PUT http://localhost:9998/tika/] (took 91 ms + 210 > ms > > in queue) > > doveadm(ctr): Panic: kevent(): Invalid argument > > Abort trap (core dumped) > > > > if I turn off tika, I do NOt get it. > > > > 2.2.26-RC1 > > > > what else do you need? > > > > > > -- > > Larry Rosenman http://www.lerctr.org/~ler > > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > > Any hope for exact request? > > Aki Tuomi > Dovecot oy > -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 From p.heinlein at heinlein-support.de Sat Oct 22 06:16:05 2016 From: p.heinlein at heinlein-support.de (Peer Heinlein) Date: Sat, 22 Oct 2016 08:16:05 +0200 Subject: More informations in doveadm proxy ring status Message-ID: I would love to have a) the number of active users b) the number of active TCP sessions (pop3, imap, lmtp, sieve, doveadm) included in the output of "doveadm director ring status". This would be helpful to get a good overview over load and usage of the whole director ring and would help to plan downtimes and maintenance work. Peer -- Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-42 Fax: 030 / 405051-19 Zwangsangaben lt. 
§35a GmbHG: HRB 93818 B / Amtsgericht Berlin-Charlottenburg, Geschäftsführer: Peer Heinlein -- Sitz: Berlin From jerry at seibercom.net Sat Oct 22 09:16:07 2016 From: jerry at seibercom.net (Jerry) Date: Sat, 22 Oct 2016 05:16:07 -0400 Subject: Backing up and Importing IMAP folders In-Reply-To: <20161020203635.a894f7324bfc0354b581f87e@domain007.com> References: <20161020091812.00006939@seibercom.net> <20161020164533.67e2d31bb0c7d641c943466d@domain007.com> <15355822.1033.1476971866443@appsuite-dev.open-xchange.com> <20161020203635.a894f7324bfc0354b581f87e@domain007.com> Message-ID: <20161022051607.000027fb@seibercom.net> On Thu, 20 Oct 2016 20:36:35 +0300, Konstantin Khomoutov stated: >On Thu, 20 Oct 2016 16:57:45 +0300 (EEST) >Aki Tuomi wrote: > >[...] >> > Alternatively you can use `dsync` to perform backup with a native >> > Dovecot tool. It's able to sync mailboxes of any Dovecot user -- >> > including synchronizing a mailbox to an empty (yet) spool. >> > You'll need to do a bit of shell scripting which would spin around >> > calling `doveadm user *` and feeding its output to something like >> > >> > while read user; do \ >> > dest="/var/backup/dovecot/$user"; >> > mkdir -p "$dest" && chown vmail:vmail "$dest" \ >> > && chmod 0755 "$dest" >> > dsync -u "$user" backup "maildir:$dest" >> > done >> > >> > Note that you will only need this if you don't want to shut down >> > Dovecot to copy its mail spool out. >> >> You can also use doveadm backup -A maildir:%u/ > >Could you please elaborate? > >I have a typical "virtual users" setup where I do have > > mail_home = /var/local/mail/%Ln > mail_location = maildir:~/mail > >and everything is stored with uid=vmail / gid=vmail (much like >described in the wiki, that is). > >I'd like to use a single call to `doveadm backup -A ...` to back up the >whole /var/local/mail/* to another location >(say, /var/backups/dovecot/) so that it has the same structure, just >synchronized with the spool. 
(The purpose is to then backup the >replica off-site). > >I tried to call > > doveadm backup -A maildir:/var/backups/dovecot/%u > >and it created a directory "/var/backups/dovecot/%u" (with literal >"%u", that is), created what appeared to be a single mailbox structure >under it and after a while scared the heck out of me with a series of >error messages reading > >dsync(user1): Error: Mailbox INBOX sync: mailbox_delete failed: INBOX >can't be deleted. >dsync(user2): Error: Mailbox INBOX sync: mailbox_delete failed: INBOX >can't be deleted. >... > >for each existing user. > >It appears that it luckily failed to delete anything in the source >directory (though I have no idea what it actually tried to do). > >Reading the doveadm-backup(1) multiple times still failed to shed >light for me on how to actually back up the whole maildir hierarchy for >all existing users. > >So, the question: how should I really go about backing up the whole >mailbox hierarchy in the case of virtual users? I am experiencing the same problem as Konstantin. Is this a bug or expected behavior? -- Jerry From aki.tuomi at dovecot.fi Sat Oct 22 09:41:50 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Sat, 22 Oct 2016 12:41:50 +0300 (EEST) Subject: Backing up and Importing IMAP folders In-Reply-To: <20161022051607.000027fb@seibercom.net> References: <20161020091812.00006939@seibercom.net> <20161020164533.67e2d31bb0c7d641c943466d@domain007.com> <15355822.1033.1476971866443@appsuite-dev.open-xchange.com> <20161020203635.a894f7324bfc0354b581f87e@domain007.com> <20161022051607.000027fb@seibercom.net> Message-ID: <1442980371.1.1477129311397@appsuite-dev.open-xchange.com> > On October 22, 2016 at 12:16 PM Jerry wrote: > > > On Thu, 20 Oct 2016 20:36:35 +0300, Konstantin Khomoutov stated: > > >On Thu, 20 Oct 2016 16:57:45 +0300 (EEST) > >Aki Tuomi wrote: > > > >[...] > >> > Alternatively you can use `dsync` to perform backup with a native > >> > Dovecot tool. 
It's able to sync mailboxes of any Dovecot user -- > >> > including synchronizing a mailbox to an empty (yet) spool. > >> > You'll need to do a bit of shell scripting which would spin around > >> > calling `doveadm user *` and feeding its output to something like > >> > > >> > while read user; do \ > >> > dest="/var/backup/dovecot/$user"; > >> > mkdir -p "$dest" && chown vmail:vmail "$dest" \ > >> > && chmod 0755 "$dest" > >> > dsync -u "$user" backup "maildir:$dest" \ > >> > done > >> > > >> > Note that you will only need this if you don't want to shut down > >> > Dovecot to copy its mail spool out. > >> > >> You can also use doveadm backup -A maildir:%u/ > > > >Could you please elaborate? > > > >I have a typical "virtual users" setup where I do have > > > > mail_home = /var/local/mail/%Ln > > mail_location = maildir:~/mail > > > >and everything is stored with uid=vmail / gid=vmail (much like > >described in the wiki, that is). > > > >I'd like to use a single call to `doveadm backup -A ...` to back up the > >whole /var/local/mail/* to another location > >(say, /var/backups/dovecot/) so that is has the same structure, just > >synchronized with the spool. (The purpose is to then backup the > >replica off-site). > > > >I tried to call > > > > doveadm backup -A maildir:/var/backups/dovecot/%u > > > >and it created a directory "/var/backups/dovecot/%u" (with literal > >"%u", that is), created what appeared to be a single mailbox structure > >under it and after a while scared a heck out of me with a series of > >error messages reading > > > >dsync(user1): Error: Mailbox INBOX sync: mailbox_delete failed: INBOX > >can't be deleted. > >dsync(user2): Error: Mailbox INBOX sync: mailbox_delete failed: INBOX > >can't be deleted. > >... > > > >for each existing user. > > > >It appears that it luckily failed to delete anything in the source > >directory (though I have no idea what it actually tried to do). 
> > > >Reading the doveadm-backup(1) multiple times still failed to shed a > >light for me on how to actually backup the whole maildir hierarchy for > >all existing users. > > > >So, the question: how do I really should go about backing up the whole > >mailbox hierarchy in the case of virtual users? > > I am experiencing the same problem as Konstantin. Is this s bug or > expected behavior. > > -- > Jerry I think it's not exactly a bug. But I think it should be expanded. I'll see if we can get that sorted. Aki From carda at two-wings.net Sat Oct 22 10:45:08 2016 From: carda at two-wings.net (Benedikt Carda) Date: Sat, 22 Oct 2016 12:45:08 +0200 Subject: Dovecot does not close connections In-Reply-To: References: <72649af7-5007-8b11-d739-97de24d6adbe@two-wings.net> Message-ID: <3ab5ff72-1035-50b4-fb6b-229870366c87@two-wings.net> How do I check the state of the connection? -- Benedikt Carda Am 14.10.2016 um 14:08 schrieb Steffen Kaiser: > On Fri, 14 Oct 2016, Benedikt Carda wrote: > > > I am running into this error: > > /Maximum number of connections from user+IP exceeded > > (mail_max_userip_connections=10)/ > > > The suggested solution in hundreds of support requests on this mailing > > list and throughout the internet is to increase the number of maximum > > userip connections. But this is not curing the problem, it is just > > postponing it to the moment when the new limit is reached. > > > When i type: > > /doveadm who// > > / > > > I can see that some accounts have several pids running: > > /someaccount 10 imap (25396 25391 25386 25381 25374 7822 7817 > > 5559 5543 5531) (xxx.xxx.xxx.xxx)/ > > > Now when I check these pids with > > /ps aux/ > > > I find out that the oldest pid (5531) has a lifetime of already over 12 > > hours. Anyway I know that the clients that initiated the connections are > > not connected anymore, so there is no way that there is a valid reason > > why this connection should still be open. > > What's the state of the connection ? 
> > > -- Steffen Kaiser -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From bill-dovecot at carpenter.org Sat Oct 22 16:32:47 2016 From: bill-dovecot at carpenter.org (WJCarpenter) Date: Sat, 22 Oct 2016 09:32:47 -0700 Subject: MFA 2FA TOTP razz-ma-tazz! Message-ID: <580B94AF.6070709@carpenter.org> I'd like to start offering my server's users multi-factor authentication. Right now, I funnel all authentication through dovecot. Before I get too far down the fantasy design path, I'm wondering if anyone else has already done this and could share some details or code. (I loaded up the subject line with acronyms to show how serious I am. :-)) I am specifically thinking of two-factor authentication using TOTP (time-based one-time passwords) as described in RFC-6238. Those are the ones compatible with Google Authenticator and compatible apps. I already am a user of those at several sites. Some of them don't have a separate opportunity to enter the 6-digit code. Instead, you append the 6-digit code to your normal password. If your config on the site shows you as a user of TOTP, they peel those trailing 6 digits off your password and then validate the rest of the password in the normal way. That is what I think I would do for dovecot authentication. So, who's already done this or something like it? From gerben.wierda at rna.nl Sat Oct 22 16:51:57 2016 From: gerben.wierda at rna.nl (Gerben Wierda) Date: Sat, 22 Oct 2016 18:51:57 +0200 Subject: Messed up dovecot mail store, need some repair advice Message-ID: <097CD3C1-C1C8-4DCD-A256-9B295BA47D0E@rna.nl> Hello folks, I have an older dovecot 2.0 (which I will migrate to a 2.2 asap, but at this point in time I need a fix). This is a dovecot 2.0 that came with Mac OS X 10.8.5 Server 2.2.5. Today, my spam/virus filtering (clamav) on the Server broke down. 
As a result, all my messages got the ***UNCHECKED*** tag added to each subject. That was clearly unacceptable. So, for the time being I have set that tag to undef so the tag is no longer added. But I also wanted to repair the messages that already ended up in dovecot 2.0 So, I did something simple: stopped all mail services on the server, went into the dovecot mail store and edited the messages. I first tried with one small account and it seemed OK. But now my mail client is experiencing problems with the messages (cannot display) and I think I've been too simplistic. I have for instance noted that the size of the message is part of the filename. So, I can change these of course, but probably I need to change more. Can someone enlighten me how I can repair the broken data store? Thanks, (Foolish) Gerben From aki.tuomi at dovecot.fi Sat Oct 22 17:04:52 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Sat, 22 Oct 2016 20:04:52 +0300 (EEST) Subject: Messed up dovecot mail store, need some repair advice In-Reply-To: <097CD3C1-C1C8-4DCD-A256-9B295BA47D0E@rna.nl> References: <097CD3C1-C1C8-4DCD-A256-9B295BA47D0E@rna.nl> Message-ID: <1016681850.50.1477155894109@appsuite-dev.open-xchange.com> > On October 22, 2016 at 7:51 PM Gerben Wierda wrote: > > > Hello folks, > > I have an older dovecot 2.0 (which I will migrate to a 2.2 asap, but at this point in time I need a fix). This is a dovecot 2.0 that came with Mac OS X 10.8.5 Server 2.2.5. > > Today, my spam/virus filtering (clamav) on the Server broke down. As a result, all my messages got the ***UNCHECKED*** tag added to each subject. That was clearly unacceptable. So, for the time being I have set that tag to undef so the tag is no longer added. But I also wanted to repair the messages that already ended up in dovecot 2.0 > > So, I did something simple: stopped all mail services on the server, went into the dovecot mail store and edited the messages. I first tried with one small account and it seemed OK. 
But now my mail client is experiencing problems with the messages (cannot display) and I think I've been too simplistic. I have for instance noted that the size of the message is part of the filename. So, I can change these of course, but probably I need to change more. > > Can someone enlighten me how I can repair the broken data store? > > Thanks, > > (Foolish) Gerben I think your best bet is to delete dovecot.index* and run doveadm index -u username. Or revert all your changes. Depending on what mail store you are using, this might lose any flags on your mails, such as \Read. Aki From tss at iki.fi Sat Oct 22 19:50:25 2016 From: tss at iki.fi (Timo Sirainen) Date: Sat, 22 Oct 2016 22:50:25 +0300 Subject: More informations in doveadm proxy ring status In-Reply-To: References: Message-ID: <424EE300-7B85-48C7-A0B6-DA112521A5F6@iki.fi> On 22 Oct 2016, at 09:16, Peer Heinlein wrote: > > > > I would love to have > > a) the number of active users > b) the number of active TCP sessions (pop3, imap, lmtp, sieve, doveadm) > > included in the output of "doveadm director ring status". > > This would be helpful to get a good overview over load and usage of the > whole director ring and would help to plan downtimes and maintenance work. You could get the same by running "doveadm proxy list" in all directors and merging the output. Having director do this would introduce extra traffic between the directors and annoyingly complicate things. You should be able to get the proxy list output via doveadm HTTP protocol, so it shouldn't be difficult to write a script to do it. 
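The merging script Timo suggests can be sketched as below. The sample rows standing in for real "doveadm proxy list" output are hypothetical (the real column layout may differ; the username is assumed to be the first column), and the ssh loop shown in the comment uses placeholder director hostnames:

```shell
#!/bin/sh
# Sketch: collect "doveadm proxy list" output from every director and
# derive (a) the number of active users and (b) the number of active
# sessions, as asked for above. In production the here-document would be
# replaced by something like:
#   for d in director1 director2; do ssh "$d" doveadm proxy list | tail -n +2; done
cat > /tmp/proxy-list.txt <<'EOF'
alice@example.com imap 10.0.0.5 192.168.1.10
bob@example.com pop3 10.0.0.6 192.168.1.11
alice@example.com imap 10.0.0.7 192.168.1.10
EOF

# One line per proxied session; unique first-column values per user.
sessions=$(wc -l < /tmp/proxy-list.txt | tr -d ' ')
users=$(awk '{print $1}' /tmp/proxy-list.txt | sort -u | wc -l | tr -d ' ')
echo "active sessions: $sessions"
echo "active users: $users"
```

With the three hypothetical rows above this prints "active sessions: 3" and "active users: 2"; duplicate usernames collapse into one user, which is exactly the merge step Timo describes.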
From gerben.wierda at rna.nl Sat Oct 22 20:09:17 2016 From: gerben.wierda at rna.nl (Gerben Wierda) Date: Sat, 22 Oct 2016 22:09:17 +0200 Subject: Messed up dovecot mail store, need some repair advice In-Reply-To: <1016681850.50.1477155894109@appsuite-dev.open-xchange.com> References: <097CD3C1-C1C8-4DCD-A256-9B295BA47D0E@rna.nl> <1016681850.50.1477155894109@appsuite-dev.open-xchange.com> Message-ID: > On 22 Oct 2016, at 19:04, Aki Tuomi wrote: > > >> On October 22, 2016 at 7:51 PM Gerben Wierda wrote: >> >> >> Hello folks, >> >> I have an older dovecot 2.0 (which I will migrate to a 2.2 asap, but at this point in time I need a fix). This is a dovecot 2.0 that came with Mac OS X 10.8.5 Server 2.2.5. >> >> Today, my spam/virus filtering (clamav) on the Server broke down. As a result, all my meesages got the ***UNCHECKED*** tag added to each subject. That was clearly unacceptable. SO, for the tim ebeing I have set that tag to undef so the tag is no longer added. But I also wanted to repair the messages that already ended up in dovecot 2.0 >> >> So, I did something simple: stopped all mail services on the server, went into the dovecot mail store and edited the messages. I first tried with one small ccount and it seemed OK. But now my mail client is experiencing problems with the messages (cannot display) and I think I've been to simplistic. I have for instance notedthat th esizeof the message is part of the filename. So, I can change these of course, but probably I need to change more. >> >> Can someone enlighten me how I can repair the broken data store? >> >> Thanks, >> >> (Foolish) Gerben > > I think your best bet is to delete dovecot.index* and run dovecot index -u username. Or revert all your changes. Depending what mail store you are using, this might lose any flags on your mails, such as \Read. Thanks. Losing 300 flags and unread on thousands of emails was not a preferred scenario. 
I was able to repair by - turning dovecot (and other mail services) off - find all message files that were changed in a certain period - check all their names against their file sizes (this found the edited ones) - returning the ***UNCHECKED*** string to the Subject lines making the file sizes equal to the size as reported in the name of the file Which leaves me with something I would really like: change the subject line of 5-10 messages in dovecot, without destroying everything. I was thinking about the following scenario: - create a separate mailbox REPAIR within user X's mail store (the INBOX, btw, is named 'cur') - move all to-be-changed messages there using the mail client - kill the mail client - stop dovecot - edit the messages and change the names of the files so the S= W= parts are in line with the new content. (I understand S, but what is W?) - run 'doveadm index -u user REPAIR' - start dovecot - start email client (potentially, reload the entire mail store for that user) Would that work? G From gerben.wierda at rna.nl Sat Oct 22 23:05:35 2016 From: gerben.wierda at rna.nl (Gerben Wierda) Date: Sun, 23 Oct 2016 01:05:35 +0200 Subject: Messed up dovecot mail store, need some repair advice In-Reply-To: References: <097CD3C1-C1C8-4DCD-A256-9B295BA47D0E@rna.nl> <1016681850.50.1477155894109@appsuite-dev.open-xchange.com> Message-ID: <16FB8A62-A30F-4C52-9EE6-1317D62BC875@rna.nl> > On 22 Oct 2016, at 22:09, Gerben Wierda wrote: > >> >> On 22 Oct 2016, at 19:04, Aki Tuomi wrote: >> >> >>> On October 22, 2016 at 7:51 PM Gerben Wierda wrote: >>> >>> >>> Hello folks, >>> >>> I have an older dovecot 2.0 (which I will migrate to a 2.2 asap, but at this point in time I need a fix). This is a dovecot 2.0 that came with Mac OS X 10.8.5 Server 2.2.5. >>> >>> Today, my spam/virus filtering (clamav) on the Server broke down. As a result, all my messages got the ***UNCHECKED*** tag added to each subject. That was clearly unacceptable. 
SO, for the tim ebeing I have set that tag to undef so the tag is no longer added. But I also wanted to repair the messages that already ended up in dovecot 2.0 >>> >>> So, I did something simple: stopped all mail services on the server, went into the dovecot mail store and edited the messages. I first tried with one small ccount and it seemed OK. But now my mail client is experiencing problems with the messages (cannot display) and I think I've been to simplistic. I have for instance notedthat th esizeof the message is part of the filename. So, I can change these of course, but probably I need to change more. >>> >>> Can someone enlighten me how I can repair the broken data store? >>> >>> Thanks, >>> >>> (Foolish) Gerben >> >> I think your best bet is to delete dovecot.index* and run dovecot index -u username. Or revert all your changes. Depending what mail store you are using, this might lose any flags on your mails, such as \Read. > > Thanks. Losing 300 flags and unread on thousands of emails was not a preferred scenario. > > I was able to repair by > - turning dovecote (and other mail services) off > - find all message files that were changed in a certain period > - check all their names against their file sizes (this found me the edited ones) > - returning the ***UNCHECKED*** string to the Subject lines making the file sizes equal to the size as reported in the name of the file > > Which leaves me with something I would really like: change the subject line of 5-10 messages in dovecot, without destroying everything. > > I was thinking about the following scenario: > - create a separate mailbox REPAIR within user X?s mail store (the INBOX, btw, is named ?cur?) > - move all to be changed messages there using the mail client > - kill the mail client > - stop dovecot > - edit the messages and change the names of the files so the S= W= parts are in line with the new content. (I understand S, but what is W?) > - run ?devoid index -u user REPAIR? 
> - start dovecot > - start email client (potentially, reload the entire mail store for that user) > > Would that work? There was an easier solution. In my mail program I created a local mailbox, copied the messages there, edited them on disk, rebuilt the local mailbox and then moved them back to IMAP. G From larryrtx at gmail.com Sat Oct 22 23:32:31 2016 From: larryrtx at gmail.com (larryrtx) Date: Sat, 22 Oct 2016 18:32:31 -0500 Subject: keent() from Tika - with doveadm Message-ID: Any news, Aki? Sent from my Sprint Samsung Galaxy S7. -------- Original message --------From: Larry Rosenman Date: 10/21/16 12:27 PM (GMT-06:00) To: Aki Tuomi Cc: Dovecot Mailing List Subject: Re: keent() from Tika - with doveadm Unfortunately it doesn't seem to log that, and it's not 100% consistent. I did catch one, but the log file is huge so it's at: http://www.lerctr.org/~ler/Dovecot/doveadm0-tika On Fri, Oct 21, 2016 at 12:17 PM, Aki Tuomi wrote: > On October 21, 2016 at 8:06 PM Larry Rosenman wrote: > > > getting the following: > > Oct 21, 2016 12:04:25 PM org.apache.tika.server.resource.TikaResource > logRequest > INFO: tika/ > (application/vnd.openxmlformats-officedocument.wordprocessingml.document) > doveadm(ctr): Debug: http-client: conn 127.0.0.1:9998 [1]: Got 200 response > for request [Req69: PUT http://localhost:9998/tika/] (took 91 ms + 210 ms > in queue) > doveadm(ctr): Panic: kevent(): Invalid argument > Abort trap (core dumped) > > if I turn off tika, I do NOT get it. > > 2.2.26-RC1 > > what else do you need? > > > -- > Larry Rosenman http://www.lerctr.org/~ler > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 Any hope for the exact request? Aki Tuomi Dovecot Oy -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 214-642-9640 (c) 
E-Mail: larryrtx at gmail.com US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 From aki.tuomi at dovecot.fi Sun Oct 23 08:19:55 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Sun, 23 Oct 2016 11:19:55 +0300 (EEST) Subject: keent() from Tika - with doveadm In-Reply-To: References: Message-ID: <1219160790.717.1477210796307@appsuite-dev.open-xchange.com> Please see http://dovecot.org/bugreport.html Aki > On October 23, 2016 at 2:32 AM larryrtx wrote: > > > Any news Ali? > > > Sent from my Sprint Samsung Galaxy S7. > -------- Original message --------From: Larry Rosenman Date: 10/21/16 12:27 PM (GMT-06:00) To: Aki Tuomi Cc: Dovecot Mailing List Subject: Re: keent() from Tika - with doveadm > Unfortuantely it doesn't seem to log that, and it's not 100% consistent. > I did catch one, but the log file is huge so it's at: > http://www.lerctr.org/~ler/Dovecot/doveadm0-tika > On Fri, Oct 21, 2016 at 12:17 PM, Aki Tuomi wrote: > > > > On October 21, 2016 at 8:06 PM Larry Rosenman wrote: > > > > > > > > > getting the following: > > > > > > Oct 21, 2016 12:04:25 PM org.apache.tika.server.resource.TikaResource > > > logRequest > > > INFO: tika/ > > > (application/vnd.openxmlformats-officedocument.wordprocessingml.document) > > > doveadm(ctr): Debug: http-client: conn 127.0.0.1:9998 [1]: Got 200 response > > > for request [Req69: PUT http://localhost:9998/tika/] (took 91 ms + 210 ms > > > in queue) > > > doveadm(ctr): Panic: kevent(): Invalid argument > > > Abort trap (core dumped) > > > > > > if I turn off tika, I do NOt get it. > > > > > > 2.2.26-RC1 > > > > > > what else do you need? > > > > > > > > > -- > > > Larry Rosenman http://www.lerctr.org/~ler > > > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > > > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > > > > Any hope for exact request? 
> > > > Aki Tuomi > > Dovecot oy > > > > > -- > Larry Rosenman http://www.lerctr.org/~ler > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > From larryrtx at gmail.com Sun Oct 23 14:39:07 2016 From: larryrtx at gmail.com (Larry Rosenman) Date: Sun, 23 Oct 2016 09:39:07 -0500 Subject: keent() from Tika - with doveadm In-Reply-To: <1219160790.717.1477210796307@appsuite-dev.open-xchange.com> References: <1219160790.717.1477210796307@appsuite-dev.open-xchange.com> Message-ID: doveconf -n attached, what else do you need? On Sun, Oct 23, 2016 at 3:19 AM, Aki Tuomi wrote: > Please see http://dovecot.org/bugreport.html > > Aki > > > On October 23, 2016 at 2:32 AM larryrtx wrote: > > > > > > Any news Ali? > > > > > > Sent from my Sprint Samsung Galaxy S7. > > -------- Original message --------From: Larry Rosenman < > larryrtx at gmail.com> Date: 10/21/16 12:27 PM (GMT-06:00) To: Aki Tuomi < > aki.tuomi at dovecot.fi> Cc: Dovecot Mailing List > Subject: Re: keent() from Tika - with doveadm > > Unfortuantely it doesn't seem to log that, and it's not 100% consistent. > > I did catch one, but the log file is huge so it's at: > > http://www.lerctr.org/~ler/Dovecot/doveadm0-tika > > On Fri, Oct 21, 2016 at 12:17 PM, Aki Tuomi > wrote: > > > > > > > On October 21, 2016 at 8:06 PM Larry Rosenman > wrote: > > > > > > > > > > > > > > > getting the following: > > > > > > > > > > Oct 21, 2016 12:04:25 PM org.apache.tika.server.resource.TikaResource > > > > > logRequest > > > > > INFO: tika/ > > > > > (application/vnd.openxmlformats-officedocument. > wordprocessingml.document) > > > > > doveadm(ctr): Debug: http-client: conn 127.0.0.1:9998 [1]: Got 200 > response > > > > > for request [Req69: PUT http://localhost:9998/tika/] (took 91 ms + > 210 ms > > > > > in queue) > > > > > doveadm(ctr): Panic: kevent(): Invalid argument > > > > > Abort trap (core dumped) > > > > > > > > > > if I turn off tika, I do NOt get it. 
> > > > > > > > > > 2.2.26-RC1 > > > > > > > > > > what else do you need? > > > > > > > > > > > > > > > -- > > > > > Larry Rosenman http://www.lerctr.org/~ler > > > > > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > > > > > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > > > > > > > > Any hope for exact request? > > > > > > > > Aki Tuomi > > > > Dovecot oy > > > > > > > > > > -- > > Larry Rosenman http://www.lerctr.org/~ler > > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > > > -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 -------------- next part -------------- A non-text attachment was scrubbed... Name: tbh-doveconf Type: application/octet-stream Size: 4145 bytes Desc: not available URL: From aki.tuomi at dovecot.fi Sun Oct 23 15:27:14 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Sun, 23 Oct 2016 18:27:14 +0300 (EEST) Subject: keent() from Tika - with doveadm In-Reply-To: References: <1219160790.717.1477210796307@appsuite-dev.open-xchange.com> Message-ID: <6177676.109.1477236435200@appsuite-dev.open-xchange.com> gdb full backtrace would be nice... gdb /path/to/bin /path/to/core bt full Aki > On October 23, 2016 at 5:39 PM Larry Rosenman wrote: > > > doveconf -n attached, what else do you need? > > > > On Sun, Oct 23, 2016 at 3:19 AM, Aki Tuomi wrote: > > > Please see http://dovecot.org/bugreport.html > > > > Aki > > > > > On October 23, 2016 at 2:32 AM larryrtx wrote: > > > > > > > > > Any news Ali? > > > > > > > > > Sent from my Sprint Samsung Galaxy S7. 
> > > -------- Original message --------From: Larry Rosenman < > > larryrtx at gmail.com> Date: 10/21/16 12:27 PM (GMT-06:00) To: Aki Tuomi < > > aki.tuomi at dovecot.fi> Cc: Dovecot Mailing List > > Subject: Re: keent() from Tika - with doveadm > > > Unfortuantely it doesn't seem to log that, and it's not 100% consistent. > > > I did catch one, but the log file is huge so it's at: > > > http://www.lerctr.org/~ler/Dovecot/doveadm0-tika > > > On Fri, Oct 21, 2016 at 12:17 PM, Aki Tuomi > > wrote: > > > > > > > > > > On October 21, 2016 at 8:06 PM Larry Rosenman > > wrote: > > > > > > > > > > > > > > > > > > > > > getting the following: > > > > > > > > > > > > > > Oct 21, 2016 12:04:25 PM org.apache.tika.server.resource.TikaResource > > > > > > > logRequest > > > > > > > INFO: tika/ > > > > > > > (application/vnd.openxmlformats-officedocument. > > wordprocessingml.document) > > > > > > > doveadm(ctr): Debug: http-client: conn 127.0.0.1:9998 [1]: Got 200 > > response > > > > > > > for request [Req69: PUT http://localhost:9998/tika/] (took 91 ms + > > 210 ms > > > > > > > in queue) > > > > > > > doveadm(ctr): Panic: kevent(): Invalid argument > > > > > > > Abort trap (core dumped) > > > > > > > > > > > > > > if I turn off tika, I do NOt get it. > > > > > > > > > > > > > > 2.2.26-RC1 > > > > > > > > > > > > > > what else do you need? > > > > > > > > > > > > > > > > > > > > > -- > > > > > > > Larry Rosenman http://www.lerctr.org/~ler > > > > > > > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > > > > > > > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > > > > > > > > > > > > Any hope for exact request? 
> > > > > > > > > > > > Aki Tuomi > > > > > > Dovecot oy > > > > > > > > > > > > > > > -- > > > Larry Rosenman http://www.lerctr.org/~ler > > > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > > > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > > > > > > > > > -- > Larry Rosenman http://www.lerctr.org/~ler > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 From larryrtx at gmail.com Sun Oct 23 15:29:51 2016 From: larryrtx at gmail.com (Larry Rosenman) Date: Sun, 23 Oct 2016 10:29:51 -0500 Subject: keent() from Tika - with doveadm In-Reply-To: <6177676.109.1477236435200@appsuite-dev.open-xchange.com> References: <1219160790.717.1477210796307@appsuite-dev.open-xchange.com> <6177676.109.1477236435200@appsuite-dev.open-xchange.com> Message-ID: $ gdb /usr/local/bin/doveadm `pwd`/doveadm.core GNU gdb 6.1.1 [FreeBSD] Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "amd64-marcel-freebsd"...(no debugging symbols found)... Core was generated by `doveadm'. Program terminated with signal 6, Aborted. Reading symbols from /lib/libz.so.6...(no debugging symbols found)...done. Loaded symbols for /lib/libz.so.6 Reading symbols from /lib/libcrypt.so.5...(no debugging symbols found)...done. Loaded symbols for /lib/libcrypt.so.5 Reading symbols from /usr/local/lib/dovecot/libdovecot-storage.so.0...(no debugging symbols found)...done. Loaded symbols for /usr/local/lib/dovecot/libdovecot-storage.so.0 Reading symbols from /usr/local/lib/dovecot/libdovecot.so.0...(no debugging symbols found)...done. 
Loaded symbols for /usr/local/lib/dovecot/libdovecot.so.0 Reading symbols from /lib/libc.so.7...(no debugging symbols found)...done. Loaded symbols for /lib/libc.so.7 Reading symbols from /usr/local/lib/dovecot/lib15_notify_plugin.so...(no debugging symbols found)...done. Loaded symbols for /usr/local/lib/dovecot/lib15_notify_plugin.so Reading symbols from /usr/local/lib/dovecot/lib20_fts_plugin.so...(no debugging symbols found)...done. Loaded symbols for /usr/local/lib/dovecot/lib20_fts_plugin.so Reading symbols from /usr/local/lib/libicui18n.so.57...(no debugging symbols found)...done. Loaded symbols for /usr/local/lib/libicui18n.so.57 Reading symbols from /usr/local/lib/libicuuc.so.57...(no debugging symbols found)...done. Loaded symbols for /usr/local/lib/libicuuc.so.57 Reading symbols from /usr/local/lib/libicudata.so.57... warning: Lowest section in /usr/local/lib/libicudata.so.57 is .hash at 0000000000000120 (no debugging symbols found)...done. Loaded symbols for /usr/local/lib/libicudata.so.57 Reading symbols from /lib/libthr.so.3...(no debugging symbols found)...done. Loaded symbols for /lib/libthr.so.3 Reading symbols from /lib/libm.so.5...(no debugging symbols found)...done. Loaded symbols for /lib/libm.so.5 Reading symbols from /usr/lib/libc++.so.1...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libc++.so.1 Reading symbols from /lib/libcxxrt.so.1...(no debugging symbols found)...done. Loaded symbols for /lib/libcxxrt.so.1 Reading symbols from /lib/libgcc_s.so.1...(no debugging symbols found)...done. Loaded symbols for /lib/libgcc_s.so.1 Reading symbols from /usr/local/lib/dovecot/lib21_fts_lucene_plugin.so...(no debugging symbols found)...done. Loaded symbols for /usr/local/lib/dovecot/lib21_fts_lucene_plugin.so Reading symbols from /usr/local/lib/libclucene-core.so.1...(no debugging symbols found)...done. 
Loaded symbols for /usr/local/lib/libclucene-core.so.1 Reading symbols from /usr/local/lib/libclucene-shared.so.1...(no debugging symbols found)...done. Loaded symbols for /usr/local/lib/libclucene-shared.so.1 Reading symbols from /usr/local/lib/dovecot/lib90_stats_plugin.so...(no debugging symbols found)...done. Loaded symbols for /usr/local/lib/dovecot/lib90_stats_plugin.so Reading symbols from /usr/local/lib/dovecot/doveadm/lib10_doveadm_sieve_plugin.so...(no debugging symbols found)...done. Loaded symbols for /usr/local/lib/dovecot/doveadm/lib10_doveadm_sieve_plugin.so Reading symbols from /usr/local/lib/dovecot-2.2-pigeonhole/libdovecot-sieve.so.0...(no debugging symbols found)...done. Loaded symbols for /usr/local/lib/dovecot-2.2-pigeonhole/libdovecot-sieve.so.0 Reading symbols from /usr/local/lib/dovecot/libdovecot-lda.so.0...(no debugging symbols found)...done. Loaded symbols for /usr/local/lib/dovecot/libdovecot-lda.so.0 Reading symbols from /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_lucene_plugin.so...(no debugging symbols found)...done. Loaded symbols for /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_lucene_plugin.so Reading symbols from /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_plugin.so...(no debugging symbols found)...done. Loaded symbols for /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_plugin.so Reading symbols from /usr/lib/i18n/libiconv_std.so.4...(no debugging symbols found)...done. Loaded symbols for /usr/lib/i18n/libiconv_std.so.4 Reading symbols from /usr/lib/i18n/libUTF8.so.4...(no debugging symbols found)...done. Loaded symbols for /usr/lib/i18n/libUTF8.so.4 Reading symbols from /usr/lib/i18n/libmapper_none.so.4...(no debugging symbols found)...done. Loaded symbols for /usr/lib/i18n/libmapper_none.so.4 Reading symbols from /usr/lib/i18n/libmapper_std.so.4...(no debugging symbols found)...done. 
Loaded symbols for /usr/lib/i18n/libmapper_std.so.4 Reading symbols from /libexec/ld-elf.so.1...(no debugging symbols found)...done. Loaded symbols for /libexec/ld-elf.so.1 #0 0x00000008013c4e2a in __cxa_thread_call_dtors () from /lib/libc.so.7 [New Thread 801c2a800 (LWP 100849/)] (gdb) bt full #0 0x00000008013c4e2a in __cxa_thread_call_dtors () from /lib/libc.so.7 No symbol table info available. #1 0x00000008013c4d99 in __cxa_thread_call_dtors () from /lib/libc.so.7 No symbol table info available. #2 0x0000000801087de4 in default_fatal_handler () from /usr/local/lib/dovecot/libdovecot.so.0 No symbol table info available. #3 0x0000000801087b73 in default_fatal_handler () from /usr/local/lib/dovecot/libdovecot.so.0 No symbol table info available. #4 0x0000000801088089 in i_panic () from /usr/local/lib/dovecot/libdovecot.so.0 No symbol table info available. #5 0x000000080109e164 in io_loop_handler_run_internal () from /usr/local/lib/dovecot/libdovecot.so.0 No symbol table info available. #6 0x000000080109ca74 in io_loop_handler_run () from /usr/local/lib/dovecot/libdovecot.so.0 No symbol table info available. #7 0x000000080109c858 in io_loop_run () from /usr/local/lib/dovecot/libdovecot.so.0 No symbol table info available. #8 0x00000008022119ed in fts_parsers_unload () from /usr/local/lib/dovecot/lib20_fts_plugin.so ---Type <return> to continue, or q <return> to quit--- No symbol table info available. #9 0x0000000802210bc2 in fts_parser_more () from /usr/local/lib/dovecot/lib20_fts_plugin.so No symbol table info available. #10 0x000000080220ec6f in fts_build_mail () from /usr/local/lib/dovecot/lib20_fts_plugin.so No symbol table info available. #11 0x0000000802213a2d in fts_mail_allocated () from /usr/local/lib/dovecot/lib20_fts_plugin.so No symbol table info available. #12 0x0000000800d0f2c9 in mail_precache () from /usr/local/lib/dovecot/libdovecot-storage.so.0 No symbol table info available. #13 0x0000000000429f0f in expunge_search_args_check () No symbol table info available. 
#14 0x0000000000424c2f in doveadm_mail_single_user () No symbol table info available. #15 0x0000000000425ed4 in doveadm_cmd_ver2_to_mail_cmd_wrapper () No symbol table info available. #16 0x0000000000425cf7 in doveadm_cmd_ver2_to_mail_cmd_wrapper () No symbol table info available. #17 0x0000000000433b29 in doveadm_cmd_run_ver2 () No symbol table info available. #18 0x00000000004336b7 in doveadm_cmd_try_run_ver2 () ---Type to continue, or q to quit--- No symbol table info available. #19 0x00000000004362b2 in main () No symbol table info available. (gdb) $ On Sun, Oct 23, 2016 at 10:27 AM, Aki Tuomi wrote: > gdb full backtrace would be nice... > > gdb /path/to/bin /path/to/core > bt full > > Aki > > > On October 23, 2016 at 5:39 PM Larry Rosenman > wrote: > > > > > > doveconf -n attached, what else do you need? > > > > > > > > On Sun, Oct 23, 2016 at 3:19 AM, Aki Tuomi wrote: > > > > > Please see http://dovecot.org/bugreport.html > > > > > > Aki > > > > > > > On October 23, 2016 at 2:32 AM larryrtx wrote: > > > > > > > > > > > > Any news Ali? > > > > > > > > > > > > Sent from my Sprint Samsung Galaxy S7. > > > > -------- Original message --------From: Larry Rosenman < > > > larryrtx at gmail.com> Date: 10/21/16 12:27 PM (GMT-06:00) To: Aki > Tuomi < > > > aki.tuomi at dovecot.fi> Cc: Dovecot Mailing List > > > Subject: Re: keent() from Tika - with doveadm > > > > Unfortuantely it doesn't seem to log that, and it's not 100% > consistent. > > > > I did catch one, but the log file is huge so it's at: > > > > http://www.lerctr.org/~ler/Dovecot/doveadm0-tika > > > > On Fri, Oct 21, 2016 at 12:17 PM, Aki Tuomi > > > wrote: > > > > > > > > > > > > > On October 21, 2016 at 8:06 PM Larry Rosenman > > > wrote: > > > > > > > > > > > > > > > > > > > > > > > > > > > getting the following: > > > > > > > > > > > > > > > > > > Oct 21, 2016 12:04:25 PM org.apache.tika.server. 
> resource.TikaResource > > > > > > > > > logRequest > > > > > > > > > INFO: tika/ > > > > > > > > > (application/vnd.openxmlformats-officedocument. > > > wordprocessingml.document) > > > > > > > > > doveadm(ctr): Debug: http-client: conn 127.0.0.1:9998 [1]: Got 200 > > > response > > > > > > > > > for request [Req69: PUT http://localhost:9998/tika/] (took 91 ms + > > > 210 ms > > > > > > > > > in queue) > > > > > > > > > doveadm(ctr): Panic: kevent(): Invalid argument > > > > > > > > > Abort trap (core dumped) > > > > > > > > > > > > > > > > > > if I turn off tika, I do NOt get it. > > > > > > > > > > > > > > > > > > 2.2.26-RC1 > > > > > > > > > > > > > > > > > > what else do you need? > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > > > > Larry Rosenman http://www.lerctr.org/~ler > > > > > > > > > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > > > > > > > > > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > > > > > > > > > > > > > > > > Any hope for exact request? 
> > > > > > > > > > > > > > > > Aki Tuomi > > > > > > > > Dovecot oy > > > > > > > > > > > > > > > > > > > > -- > > > > Larry Rosenman http://www.lerctr.org/~ler > > > > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > > > > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > > > > > > > > > > > > > > > -- > > Larry Rosenman http://www.lerctr.org/~ler > > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 From aki.tuomi at dovecot.fi Sun Oct 23 15:36:43 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Sun, 23 Oct 2016 18:36:43 +0300 (EEST) Subject: keent() from Tika - with doveadm In-Reply-To: References: <1219160790.717.1477210796307@appsuite-dev.open-xchange.com> <6177676.109.1477236435200@appsuite-dev.open-xchange.com> Message-ID: <191657457.111.1477237004555@appsuite-dev.open-xchange.com> Can you install debug symbols in FreeBSD? Aki > On October 23, 2016 at 6:29 PM Larry Rosenman wrote: > > > $ gdb /usr/local/bin/doveadm `pwd`/doveadm.core > GNU gdb 6.1.1 [FreeBSD] > Copyright 2004 Free Software Foundation, Inc. > GDB is free software, covered by the GNU General Public License, and you are > welcome to change it and/or distribute copies of it under certain > conditions. > Type "show copying" to see the conditions. > There is absolutely no warranty for GDB. Type "show warranty" for details. > This GDB was configured as "amd64-marcel-freebsd"...(no debugging symbols > found)... > Core was generated by `doveadm'. > Program terminated with signal 6, Aborted. > Reading symbols from /lib/libz.so.6...(no debugging symbols found)...done. > Loaded symbols for /lib/libz.so.6 > Reading symbols from /lib/libcrypt.so.5...(no debugging symbols > found)...done. 
> Loaded symbols for /lib/libcrypt.so.5 > Reading symbols from /usr/local/lib/dovecot/libdovecot-storage.so.0...(no > debugging symbols found)...done. > Loaded symbols for /usr/local/lib/dovecot/libdovecot-storage.so.0 > Reading symbols from /usr/local/lib/dovecot/libdovecot.so.0...(no debugging > symbols found)...done. > Loaded symbols for /usr/local/lib/dovecot/libdovecot.so.0 > Reading symbols from /lib/libc.so.7...(no debugging symbols found)...done. > Loaded symbols for /lib/libc.so.7 > Reading symbols from /usr/local/lib/dovecot/lib15_notify_plugin.so...(no > debugging symbols found)...done. > Loaded symbols for /usr/local/lib/dovecot/lib15_notify_plugin.so > Reading symbols from /usr/local/lib/dovecot/lib20_fts_plugin.so...(no > debugging symbols found)...done. > Loaded symbols for /usr/local/lib/dovecot/lib20_fts_plugin.so > Reading symbols from /usr/local/lib/libicui18n.so.57...(no debugging > symbols found)...done. > Loaded symbols for /usr/local/lib/libicui18n.so.57 > Reading symbols from /usr/local/lib/libicuuc.so.57...(no debugging symbols > found)...done. > Loaded symbols for /usr/local/lib/libicuuc.so.57 > Reading symbols from /usr/local/lib/libicudata.so.57... > warning: Lowest section in /usr/local/lib/libicudata.so.57 is .hash at > 0000000000000120 > (no debugging symbols found)...done. > Loaded symbols for /usr/local/lib/libicudata.so.57 > Reading symbols from /lib/libthr.so.3...(no debugging symbols found)...done. > Loaded symbols for /lib/libthr.so.3 > Reading symbols from /lib/libm.so.5...(no debugging symbols found)...done. > Loaded symbols for /lib/libm.so.5 > Reading symbols from /usr/lib/libc++.so.1...(no debugging symbols > found)...done. > Loaded symbols for /usr/lib/libc++.so.1 > Reading symbols from /lib/libcxxrt.so.1...(no debugging symbols > found)...done. > Loaded symbols for /lib/libcxxrt.so.1 > Reading symbols from /lib/libgcc_s.so.1...(no debugging symbols > found)...done. 
> Loaded symbols for /lib/libgcc_s.so.1 > Reading symbols from > /usr/local/lib/dovecot/lib21_fts_lucene_plugin.so...(no debugging symbols > found)...done. > Loaded symbols for /usr/local/lib/dovecot/lib21_fts_lucene_plugin.so > Reading symbols from /usr/local/lib/libclucene-core.so.1...(no debugging > symbols found)...done. > Loaded symbols for /usr/local/lib/libclucene-core.so.1 > Reading symbols from /usr/local/lib/libclucene-shared.so.1...(no debugging > symbols found)...done. > Loaded symbols for /usr/local/lib/libclucene-shared.so.1 > Reading symbols from /usr/local/lib/dovecot/lib90_stats_plugin.so...(no > debugging symbols found)...done. > Loaded symbols for /usr/local/lib/dovecot/lib90_stats_plugin.so > Reading symbols from > /usr/local/lib/dovecot/doveadm/lib10_doveadm_sieve_plugin.so...(no > debugging symbols found)...done. > Loaded symbols for > /usr/local/lib/dovecot/doveadm/lib10_doveadm_sieve_plugin.so > Reading symbols from > /usr/local/lib/dovecot-2.2-pigeonhole/libdovecot-sieve.so.0...(no debugging > symbols found)...done. > Loaded symbols for > /usr/local/lib/dovecot-2.2-pigeonhole/libdovecot-sieve.so.0 > Reading symbols from /usr/local/lib/dovecot/libdovecot-lda.so.0...(no > debugging symbols found)...done. > Loaded symbols for /usr/local/lib/dovecot/libdovecot-lda.so.0 > Reading symbols from > /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_lucene_plugin.so...(no > debugging symbols found)...done. > Loaded symbols for > /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_lucene_plugin.so > Reading symbols from > /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_plugin.so...(no debugging > symbols found)...done. > Loaded symbols for > /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_plugin.so > Reading symbols from /usr/lib/i18n/libiconv_std.so.4...(no debugging > symbols found)...done. > Loaded symbols for /usr/lib/i18n/libiconv_std.so.4 > Reading symbols from /usr/lib/i18n/libUTF8.so.4...(no debugging symbols > found)...done. 
> Loaded symbols for /usr/lib/i18n/libUTF8.so.4 > Reading symbols from /usr/lib/i18n/libmapper_none.so.4...(no debugging > symbols found)...done. > Loaded symbols for /usr/lib/i18n/libmapper_none.so.4 > Reading symbols from /usr/lib/i18n/libmapper_std.so.4...(no debugging > symbols found)...done. > Loaded symbols for /usr/lib/i18n/libmapper_std.so.4 > Reading symbols from /libexec/ld-elf.so.1...(no debugging symbols > found)...done. > Loaded symbols for /libexec/ld-elf.so.1 > #0 0x00000008013c4e2a in __cxa_thread_call_dtors () from /lib/libc.so.7 > [New Thread 801c2a800 (LWP 100849/)] > (gdb) bt full > #0 0x00000008013c4e2a in __cxa_thread_call_dtors () from /lib/libc.so.7 > No symbol table info available. > #1 0x00000008013c4d99 in __cxa_thread_call_dtors () from /lib/libc.so.7 > No symbol table info available. > #2 0x0000000801087de4 in default_fatal_handler () > from /usr/local/lib/dovecot/libdovecot.so.0 > No symbol table info available. > #3 0x0000000801087b73 in default_fatal_handler () > from /usr/local/lib/dovecot/libdovecot.so.0 > No symbol table info available. > #4 0x0000000801088089 in i_panic () > from /usr/local/lib/dovecot/libdovecot.so.0 > No symbol table info available. > #5 0x000000080109e164 in io_loop_handler_run_internal () > from /usr/local/lib/dovecot/libdovecot.so.0 > No symbol table info available. > #6 0x000000080109ca74 in io_loop_handler_run () > from /usr/local/lib/dovecot/libdovecot.so.0 > No symbol table info available. > #7 0x000000080109c858 in io_loop_run () > from /usr/local/lib/dovecot/libdovecot.so.0 > No symbol table info available. > #8 0x00000008022119ed in fts_parsers_unload () > from /usr/local/lib/dovecot/lib20_fts_plugin.so > ---Type to continue, or q to quit--- > No symbol table info available. > #9 0x0000000802210bc2 in fts_parser_more () > from /usr/local/lib/dovecot/lib20_fts_plugin.so > No symbol table info available. 
> #10 0x000000080220ec6f in fts_build_mail () > from /usr/local/lib/dovecot/lib20_fts_plugin.so > No symbol table info available. > #11 0x0000000802213a2d in fts_mail_allocated () > from /usr/local/lib/dovecot/lib20_fts_plugin.so > No symbol table info available. > #12 0x0000000800d0f2c9 in mail_precache () > from /usr/local/lib/dovecot/libdovecot-storage.so.0 > No symbol table info available. > #13 0x0000000000429f0f in expunge_search_args_check () > No symbol table info available. > #14 0x0000000000424c2f in doveadm_mail_single_user () > No symbol table info available. > #15 0x0000000000425ed4 in doveadm_cmd_ver2_to_mail_cmd_wrapper () > No symbol table info available. > #16 0x0000000000425cf7 in doveadm_cmd_ver2_to_mail_cmd_wrapper () > No symbol table info available. > #17 0x0000000000433b29 in doveadm_cmd_run_ver2 () > No symbol table info available. > #18 0x00000000004336b7 in doveadm_cmd_try_run_ver2 () > ---Type to continue, or q to quit--- > No symbol table info available. > #19 0x00000000004362b2 in main () > No symbol table info available. > (gdb) $ > > On Sun, Oct 23, 2016 at 10:27 AM, Aki Tuomi wrote: > > > gdb full backtrace would be nice... > > > > gdb /path/to/bin /path/to/core > > bt full > > > > Aki > > > > > On October 23, 2016 at 5:39 PM Larry Rosenman > > wrote: > > > > > > > > > doveconf -n attached, what else do you need? > > > > > > > > > > > > On Sun, Oct 23, 2016 at 3:19 AM, Aki Tuomi wrote: > > > > > > > Please see http://dovecot.org/bugreport.html > > > > > > > > Aki > > > > > > > > > On October 23, 2016 at 2:32 AM larryrtx wrote: > > > > > > > > > > > > > > > Any news Ali? > > > > > > > > > > > > > > > Sent from my Sprint Samsung Galaxy S7. 
> > > > > -------- Original message --------From: Larry Rosenman < > > > > larryrtx at gmail.com> Date: 10/21/16 12:27 PM (GMT-06:00) To: Aki > > Tuomi < > > > > aki.tuomi at dovecot.fi> Cc: Dovecot Mailing List > > > > Subject: Re: keent() from Tika - with doveadm > > > > > Unfortuantely it doesn't seem to log that, and it's not 100% > > consistent. > > > > > I did catch one, but the log file is huge so it's at: > > > > > http://www.lerctr.org/~ler/Dovecot/doveadm0-tika > > > > > On Fri, Oct 21, 2016 at 12:17 PM, Aki Tuomi > > > > wrote: > > > > > > > > > > > > > > > > On October 21, 2016 at 8:06 PM Larry Rosenman > > > > wrote: > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > getting the following: > > > > > > > > > > > > > > > > > > > > > > Oct 21, 2016 12:04:25 PM org.apache.tika.server. > > resource.TikaResource > > > > > > > > > > > logRequest > > > > > > > > > > > INFO: tika/ > > > > > > > > > > > (application/vnd.openxmlformats-officedocument. > > > > wordprocessingml.document) > > > > > > > > > > > doveadm(ctr): Debug: http-client: conn 127.0.0.1:9998 [1]: Got 200 > > > > response > > > > > > > > > > > for request [Req69: PUT http://localhost:9998/tika/] (took 91 ms + > > > > 210 ms > > > > > > > > > > > in queue) > > > > > > > > > > > doveadm(ctr): Panic: kevent(): Invalid argument > > > > > > > > > > > Abort trap (core dumped) > > > > > > > > > > > > > > > > > > > > > > if I turn off tika, I do NOt get it. > > > > > > > > > > > > > > > > > > > > > > 2.2.26-RC1 > > > > > > > > > > > > > > > > > > > > > > what else do you need? > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > > > > > > Larry Rosenman http://www.lerctr.org/~ler > > > > > > > > > > > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > > > > > > > > > > > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > > > > > > > > > > > > > > > > > > > > Any hope for exact request? 
> > > > > > > > > > > > > > > > > > > > Aki Tuomi > > > > > > > > > > Dovecot oy > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > Larry Rosenman http://www.lerctr.org/~ler > > > > > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > > > > > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > > > > > > > > > > > > > > > > > > > > > -- > > > Larry Rosenman http://www.lerctr.org/~ler > > > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > > > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > > > > > > -- > Larry Rosenman http://www.lerctr.org/~ler > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 From larryrtx at gmail.com Sun Oct 23 15:37:31 2016 From: larryrtx at gmail.com (Larry Rosenman) Date: Sun, 23 Oct 2016 10:37:31 -0500 Subject: keent() from Tika - with doveadm In-Reply-To: <191657457.111.1477237004555@appsuite-dev.open-xchange.com> References: <1219160790.717.1477210796307@appsuite-dev.open-xchange.com> <6177676.109.1477236435200@appsuite-dev.open-xchange.com> <191657457.111.1477237004555@appsuite-dev.open-xchange.com> Message-ID: I can try. -- give me a bit. On Sun, Oct 23, 2016 at 10:36 AM, Aki Tuomi wrote: > Can you install debug symbols in FreeBSD? > > Aki > > > On October 23, 2016 at 6:29 PM Larry Rosenman > wrote: > > > > > > $ gdb /usr/local/bin/doveadm `pwd`/doveadm.core > > GNU gdb 6.1.1 [FreeBSD] > > Copyright 2004 Free Software Foundation, Inc. > > GDB is free software, covered by the GNU General Public License, and you > are > > welcome to change it and/or distribute copies of it under certain > > conditions. > > Type "show copying" to see the conditions. > > There is absolutely no warranty for GDB. Type "show warranty" for > details. > > This GDB was configured as "amd64-marcel-freebsd"...(no debugging symbols > > found)... > > Core was generated by `doveadm'. > > Program terminated with signal 6, Aborted. 
> > Reading symbols from /lib/libz.so.6...(no debugging symbols > found)...done. > > Loaded symbols for /lib/libz.so.6 > > Reading symbols from /lib/libcrypt.so.5...(no debugging symbols > > found)...done. > > Loaded symbols for /lib/libcrypt.so.5 > > Reading symbols from /usr/local/lib/dovecot/ > libdovecot-storage.so.0...(no > > debugging symbols found)...done. > > Loaded symbols for /usr/local/lib/dovecot/libdovecot-storage.so.0 > > Reading symbols from /usr/local/lib/dovecot/libdovecot.so.0...(no > debugging > > symbols found)...done. > > Loaded symbols for /usr/local/lib/dovecot/libdovecot.so.0 > > Reading symbols from /lib/libc.so.7...(no debugging symbols > found)...done. > > Loaded symbols for /lib/libc.so.7 > > Reading symbols from /usr/local/lib/dovecot/lib15_notify_plugin.so...(no > > debugging symbols found)...done. > > Loaded symbols for /usr/local/lib/dovecot/lib15_notify_plugin.so > > Reading symbols from /usr/local/lib/dovecot/lib20_fts_plugin.so...(no > > debugging symbols found)...done. > > Loaded symbols for /usr/local/lib/dovecot/lib20_fts_plugin.so > > Reading symbols from /usr/local/lib/libicui18n.so.57...(no debugging > > symbols found)...done. > > Loaded symbols for /usr/local/lib/libicui18n.so.57 > > Reading symbols from /usr/local/lib/libicuuc.so.57...(no debugging > symbols > > found)...done. > > Loaded symbols for /usr/local/lib/libicuuc.so.57 > > Reading symbols from /usr/local/lib/libicudata.so.57... > > warning: Lowest section in /usr/local/lib/libicudata.so.57 is .hash at > > 0000000000000120 > > (no debugging symbols found)...done. > > Loaded symbols for /usr/local/lib/libicudata.so.57 > > Reading symbols from /lib/libthr.so.3...(no debugging symbols > found)...done. > > Loaded symbols for /lib/libthr.so.3 > > Reading symbols from /lib/libm.so.5...(no debugging symbols > found)...done. > > Loaded symbols for /lib/libm.so.5 > > Reading symbols from /usr/lib/libc++.so.1...(no debugging symbols > > found)...done. 
> > Loaded symbols for /usr/lib/libc++.so.1 > > Reading symbols from /lib/libcxxrt.so.1...(no debugging symbols > > found)...done. > > Loaded symbols for /lib/libcxxrt.so.1 > > Reading symbols from /lib/libgcc_s.so.1...(no debugging symbols > > found)...done. > > Loaded symbols for /lib/libgcc_s.so.1 > > Reading symbols from > > /usr/local/lib/dovecot/lib21_fts_lucene_plugin.so...(no debugging > symbols > > found)...done. > > Loaded symbols for /usr/local/lib/dovecot/lib21_fts_lucene_plugin.so > > Reading symbols from /usr/local/lib/libclucene-core.so.1...(no debugging > > symbols found)...done. > > Loaded symbols for /usr/local/lib/libclucene-core.so.1 > > Reading symbols from /usr/local/lib/libclucene-shared.so.1...(no > debugging > > symbols found)...done. > > Loaded symbols for /usr/local/lib/libclucene-shared.so.1 > > Reading symbols from /usr/local/lib/dovecot/lib90_stats_plugin.so...(no > > debugging symbols found)...done. > > Loaded symbols for /usr/local/lib/dovecot/lib90_stats_plugin.so > > Reading symbols from > > /usr/local/lib/dovecot/doveadm/lib10_doveadm_sieve_plugin.so...(no > > debugging symbols found)...done. > > Loaded symbols for > > /usr/local/lib/dovecot/doveadm/lib10_doveadm_sieve_plugin.so > > Reading symbols from > > /usr/local/lib/dovecot-2.2-pigeonhole/libdovecot-sieve.so.0...(no > debugging > > symbols found)...done. > > Loaded symbols for > > /usr/local/lib/dovecot-2.2-pigeonhole/libdovecot-sieve.so.0 > > Reading symbols from /usr/local/lib/dovecot/libdovecot-lda.so.0...(no > > debugging symbols found)...done. > > Loaded symbols for /usr/local/lib/dovecot/libdovecot-lda.so.0 > > Reading symbols from > > /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_lucene_plugin.so...(no > > debugging symbols found)...done. > > Loaded symbols for > > /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_lucene_plugin.so > > Reading symbols from > > /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_plugin.so...(no > debugging > > symbols found)...done. 
> > Loaded symbols for > > /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_plugin.so > > Reading symbols from /usr/lib/i18n/libiconv_std.so.4...(no debugging > > symbols found)...done. > > Loaded symbols for /usr/lib/i18n/libiconv_std.so.4 > > Reading symbols from /usr/lib/i18n/libUTF8.so.4...(no debugging symbols > > found)...done. > > Loaded symbols for /usr/lib/i18n/libUTF8.so.4 > > Reading symbols from /usr/lib/i18n/libmapper_none.so.4...(no debugging > > symbols found)...done. > > Loaded symbols for /usr/lib/i18n/libmapper_none.so.4 > > Reading symbols from /usr/lib/i18n/libmapper_std.so.4...(no debugging > > symbols found)...done. > > Loaded symbols for /usr/lib/i18n/libmapper_std.so.4 > > Reading symbols from /libexec/ld-elf.so.1...(no debugging symbols > > found)...done. > > Loaded symbols for /libexec/ld-elf.so.1 > > #0 0x00000008013c4e2a in __cxa_thread_call_dtors () from /lib/libc.so.7 > > [New Thread 801c2a800 (LWP 100849/)] > > (gdb) bt full > > #0 0x00000008013c4e2a in __cxa_thread_call_dtors () from /lib/libc.so.7 > > No symbol table info available. > > #1 0x00000008013c4d99 in __cxa_thread_call_dtors () from /lib/libc.so.7 > > No symbol table info available. > > #2 0x0000000801087de4 in default_fatal_handler () > > from /usr/local/lib/dovecot/libdovecot.so.0 > > No symbol table info available. > > #3 0x0000000801087b73 in default_fatal_handler () > > from /usr/local/lib/dovecot/libdovecot.so.0 > > No symbol table info available. > > #4 0x0000000801088089 in i_panic () > > from /usr/local/lib/dovecot/libdovecot.so.0 > > No symbol table info available. > > #5 0x000000080109e164 in io_loop_handler_run_internal () > > from /usr/local/lib/dovecot/libdovecot.so.0 > > No symbol table info available. > > #6 0x000000080109ca74 in io_loop_handler_run () > > from /usr/local/lib/dovecot/libdovecot.so.0 > > No symbol table info available. 
> > #7 0x000000080109c858 in io_loop_run () > > from /usr/local/lib/dovecot/libdovecot.so.0 > > No symbol table info available. > > #8 0x00000008022119ed in fts_parsers_unload () > > from /usr/local/lib/dovecot/lib20_fts_plugin.so > > ---Type to continue, or q to quit--- > > No symbol table info available. > > #9 0x0000000802210bc2 in fts_parser_more () > > from /usr/local/lib/dovecot/lib20_fts_plugin.so > > No symbol table info available. > > #10 0x000000080220ec6f in fts_build_mail () > > from /usr/local/lib/dovecot/lib20_fts_plugin.so > > No symbol table info available. > > #11 0x0000000802213a2d in fts_mail_allocated () > > from /usr/local/lib/dovecot/lib20_fts_plugin.so > > No symbol table info available. > > #12 0x0000000800d0f2c9 in mail_precache () > > from /usr/local/lib/dovecot/libdovecot-storage.so.0 > > No symbol table info available. > > #13 0x0000000000429f0f in expunge_search_args_check () > > No symbol table info available. > > #14 0x0000000000424c2f in doveadm_mail_single_user () > > No symbol table info available. > > #15 0x0000000000425ed4 in doveadm_cmd_ver2_to_mail_cmd_wrapper () > > No symbol table info available. > > #16 0x0000000000425cf7 in doveadm_cmd_ver2_to_mail_cmd_wrapper () > > No symbol table info available. > > #17 0x0000000000433b29 in doveadm_cmd_run_ver2 () > > No symbol table info available. > > #18 0x00000000004336b7 in doveadm_cmd_try_run_ver2 () > > ---Type to continue, or q to quit--- > > No symbol table info available. > > #19 0x00000000004362b2 in main () > > No symbol table info available. > > (gdb) $ > > > > On Sun, Oct 23, 2016 at 10:27 AM, Aki Tuomi > wrote: > > > > > gdb full backtrace would be nice... > > > > > > gdb /path/to/bin /path/to/core > > > bt full > > > > > > Aki > > > > > > > On October 23, 2016 at 5:39 PM Larry Rosenman > > > wrote: > > > > > > > > > > > > doveconf -n attached, what else do you need? 
> > > > > > > > > > > > > > > > On Sun, Oct 23, 2016 at 3:19 AM, Aki Tuomi > wrote: > > > > > > > > > Please see http://dovecot.org/bugreport.html > > > > > > > > > > Aki > > > > > > > > > > > On October 23, 2016 at 2:32 AM larryrtx > wrote: > > > > > > > > > > > > > > > > > > Any news Aki? > > > > > > > > > > > > > > > > > > Sent from my Sprint Samsung Galaxy S7. > > > > > > -------- Original message --------From: Larry Rosenman < > > > > > larryrtx at gmail.com> Date: 10/21/16 12:27 PM (GMT-06:00) To: Aki > > > Tuomi < > > > > > aki.tuomi at dovecot.fi> Cc: Dovecot Mailing List < dovecot at dovecot.org> > > > > > Subject: Re: keent() from Tika - with doveadm > > > > > > Unfortunately it doesn't seem to log that, and it's not 100% > > > consistent. > > > > > > I did catch one, but the log file is huge so it's at: > > > > > > http://www.lerctr.org/~ler/Dovecot/doveadm0-tika > > > > > > On Fri, Oct 21, 2016 at 12:17 PM, Aki Tuomi < aki.tuomi at dovecot.fi> > > > > > wrote: > > > > > > > > > > > > > > > > > > > On October 21, 2016 at 8:06 PM Larry Rosenman < larryrtx at gmail.com> > > > > > wrote: > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > getting the following: > > > > > > > > > > > > > > > > > > > > > > > > > > Oct 21, 2016 12:04:25 PM org.apache.tika.server. > > > resource.TikaResource > > > > > > > > > > > > > logRequest > > > > > > > > > > > > > INFO: tika/ > > > > > > > > > > > > > (application/vnd.openxmlformats-officedocument.
> > > > > wordprocessingml.document) > > > > > > > > > > > > > doveadm(ctr): Debug: http-client: conn 127.0.0.1:9998 [1]: > Got 200 > > > > > response > > > > > > > > > > > > > for request [Req69: PUT http://localhost:9998/tika/] (took 91 > ms + > > > > > 210 ms > > > > > > > > > > > > > in queue) > > > > > > > > > > > > > doveadm(ctr): Panic: kevent(): Invalid argument > > > > > > > > > > > > > Abort trap (core dumped) > > > > > > > > > > > > > > > > > > > > > > > > > > if I turn off tika, I do NOT get it. > > > > > > > > > > > > > > > > > > > > > > > > > > 2.2.26-RC1 > > > > > > > > > > > > > > > > > > > > > > > > > > what else do you need? > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > > > > > > > > Larry Rosenman http://www.lerctr.org/~ler > > > > > > > > > > > > > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > > > > > > > > > > > > > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > > > > > > > > > > > > > > > > > > > > > > > > Any hope for the exact request? 
> > > > > > > > > > > > > > > > > > > > > > > > Aki Tuomi > > > > > > > > > > > > Dovecot oy > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > Larry Rosenman http://www.lerctr.org/~ler > > > > > > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > > > > > > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > Larry Rosenman http://www.lerctr.org/~ler > > > > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > > > > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > > > > > > > > > > > -- > > Larry Rosenman http://www.lerctr.org/~ler > > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 From aki.tuomi at dovecot.fi Sun Oct 23 16:14:26 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Sun, 23 Oct 2016 19:14:26 +0300 (EEST) Subject: keent() from Tika - with doveadm In-Reply-To: References: <1219160790.717.1477210796307@appsuite-dev.open-xchange.com> <6177676.109.1477236435200@appsuite-dev.open-xchange.com> <191657457.111.1477237004555@appsuite-dev.open-xchange.com> Message-ID: <1063773824.113.1477239267493@appsuite-dev.open-xchange.com> Hi, can you run doveadm in gdb, wait for it to crash, and then go to frame 6 ( io_loop_handler_run_internal) and run p errno p ret p *ioloop p *ctx p *events Sorry but the crash doesn't make enough sense yet to me, we need to determine what the invalid parameter is. 
> Larry Rosenman http://www.lerctr.org/~ler > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 From larryrtx at gmail.com Sun Oct 23 16:27:26 2016 From: larryrtx at gmail.com (Larry Rosenman) Date: Sun, 23 Oct 2016 11:27:26 -0500 Subject: keent() from Tika - with doveadm In-Reply-To: <1063773824.113.1477239267493@appsuite-dev.open-xchange.com> References: <1219160790.717.1477210796307@appsuite-dev.open-xchange.com> <6177676.109.1477236435200@appsuite-dev.open-xchange.com> <191657457.111.1477237004555@appsuite-dev.open-xchange.com> <1063773824.113.1477239267493@appsuite-dev.open-xchange.com> Message-ID: grrr. /home/mrm $ gdb /usr/local/bin/doveadm GNU gdb 6.1.1 [FreeBSD] Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "amd64-marcel-freebsd"... (gdb) run -D -vvvvvv index * Starting program: /usr/local/bin/doveadm -D -vvvvvv index * Program received signal SIGTRAP, Trace/breakpoint trap. Cannot remove breakpoints because program is no longer writable. It might be running in another process. Further execution is probably impossible. 0x0000000800624490 in ?? () (gdb) Ideas? On Sun, Oct 23, 2016 at 11:14 AM, Aki Tuomi wrote: > Hi, > > can you run doveadm in gdb, wait for it to crash, and then go to frame 6 ( > io_loop_handler_run_internal) and run > > p errno > p ret > p *ioloop > p *ctx > p *events > > Sorry but the crash doesn't make enough sense yet to me, we need to > determine what the invalid parameter is. 
> > > Larry Rosenman http://www.lerctr.org/~ler > > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 From larryrtx at gmail.com Sun Oct 23 16:42:03 2016 From: larryrtx at gmail.com (Larry Rosenman) Date: Sun, 23 Oct 2016 11:42:03 -0500 Subject: keent() from Tika - with doveadm In-Reply-To: References: <1219160790.717.1477210796307@appsuite-dev.open-xchange.com> <6177676.109.1477236435200@appsuite-dev.open-xchange.com> <191657457.111.1477237004555@appsuite-dev.open-xchange.com> <1063773824.113.1477239267493@appsuite-dev.open-xchange.com> Message-ID: ok, gdb7 works: (gdb) fr 6 #6 0x00000008011a3e49 in io_loop_handler_run_internal (ioloop=0x801c214e0) at ioloop-kqueue.c:131 131 i_panic("kevent(): %m"); (gdb) p errno $1 = 22 (gdb) p ret $2 = -1 (gdb) p *ioloop $3 = {prev = 0x801c21080, cur_ctx = 0x0, io_files = 0x801c4f980, next_io_file = 0x0, timeouts = 0x801c19e60, timeouts_new = {arr = {buffer = 0x801c5ac80, element_size = 8}, v = 0x801c5ac80, v_modifiable = 0x801c5ac80}, handler_context = 0x801c19e80, notify_handler_context = 0x0, max_fd_count = 0, time_moved_callback = 0x800d53bb0 , next_max_time = 1477240784, ioloop_wait_usecs = 29863, io_pending_count = 1, running = 1, iolooping = 1} (gdb) p *ctx $4 = {kq = 22, deleted_count = 0, events = {arr = {buffer = 0x801c5acc0, element_size = 32}, v = 0x801c5acc0, v_modifiable = 0x801c5acc0}} (gdb) p *events $5 = {ident = 23, filter = -1, flags = 0, fflags = 0, data = 8, udata = 0x801c4f980} (gdb) On Sun, Oct 23, 2016 at 11:27 AM, Larry Rosenman wrote: > grrr. > > /home/mrm $ gdb /usr/local/bin/doveadm > GNU gdb 6.1.1 [FreeBSD] > Copyright 2004 Free Software Foundation, Inc. 
> GDB is free software, covered by the GNU General Public License, and you > are > welcome to change it and/or distribute copies of it under certain > conditions. > Type "show copying" to see the conditions. > There is absolutely no warranty for GDB. Type "show warranty" for details. > This GDB was configured as "amd64-marcel-freebsd"... > (gdb) run -D -vvvvvv index * > Starting program: /usr/local/bin/doveadm -D -vvvvvv index * > > Program received signal SIGTRAP, Trace/breakpoint trap. > Cannot remove breakpoints because program is no longer writable. > It might be running in another process. > Further execution is probably impossible. > 0x0000000800624490 in ?? () > (gdb) > > Ideas? > > > On Sun, Oct 23, 2016 at 11:14 AM, Aki Tuomi wrote: > >> Hi, >> >> can you run doveadm in gdb, wait for it to crash, and then go to frame 6 >> ( io_loop_handler_run_internal) and run >> >> p errno >> p ret >> p *ioloop >> p *ctx >> p *events >> >> Sorry but the crash doesn't make enough sense yet to me, we need to >> determine what the invalid parameter is. 
>> >> > Larry Rosenman http://www.lerctr.org/~ler >> > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com >> > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 >> > > > > -- > Larry Rosenman http://www.lerctr.org/~ler > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 From aki.tuomi at dovecot.fi Sun Oct 23 17:20:42 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Sun, 23 Oct 2016 20:20:42 +0300 (EEST) Subject: keent() from Tika - with doveadm In-Reply-To: References: <1219160790.717.1477210796307@appsuite-dev.open-xchange.com> <6177676.109.1477236435200@appsuite-dev.open-xchange.com> <191657457.111.1477237004555@appsuite-dev.open-xchange.com> <1063773824.113.1477239267493@appsuite-dev.open-xchange.com> Message-ID: <445708024.118.1477243243399@appsuite-dev.open-xchange.com> According to the man page, the only way it can return EINVAL (22) is either a bad filter or a bad timeout. I can't see how the filter would be bad, so I'm guessing ts must be bad. Unfortunately I forgot to ask for it, so I am going to have to ask you to run it again and run p ts if that's valid, then the only thing that can be bad is the file descriptor, 23. 
Aki > On October 23, 2016 at 7:42 PM Larry Rosenman wrote: > > > ok, gdb7 works: > (gdb) fr 6 > #6 0x00000008011a3e49 in io_loop_handler_run_internal (ioloop=0x801c214e0) > at ioloop-kqueue.c:131 > 131 i_panic("kevent(): %m"); > (gdb) p errno > $1 = 22 > (gdb) p ret > $2 = -1 > (gdb) p *ioloop > $3 = {prev = 0x801c21080, cur_ctx = 0x0, io_files = 0x801c4f980, > next_io_file = 0x0, timeouts = 0x801c19e60, timeouts_new = {arr = {buffer = > 0x801c5ac80, element_size = 8}, v = 0x801c5ac80, > v_modifiable = 0x801c5ac80}, handler_context = 0x801c19e80, > notify_handler_context = 0x0, max_fd_count = 0, time_moved_callback = > 0x800d53bb0 , > next_max_time = 1477240784, ioloop_wait_usecs = 29863, io_pending_count = > 1, running = 1, iolooping = 1} > (gdb) p *ctx > $4 = {kq = 22, deleted_count = 0, events = {arr = {buffer = 0x801c5acc0, > element_size = 32}, v = 0x801c5acc0, v_modifiable = 0x801c5acc0}} > (gdb) p *events > $5 = {ident = 23, filter = -1, flags = 0, fflags = 0, data = 8, udata = > 0x801c4f980} > (gdb) > > > > On Sun, Oct 23, 2016 at 11:27 AM, Larry Rosenman wrote: > > > grrr. > > > > /home/mrm $ gdb /usr/local/bin/doveadm > > GNU gdb 6.1.1 [FreeBSD] > > Copyright 2004 Free Software Foundation, Inc. > > GDB is free software, covered by the GNU General Public License, and you > > are > > welcome to change it and/or distribute copies of it under certain > > conditions. > > Type "show copying" to see the conditions. > > There is absolutely no warranty for GDB. Type "show warranty" for details. > > This GDB was configured as "amd64-marcel-freebsd"... > > (gdb) run -D -vvvvvv index * > > Starting program: /usr/local/bin/doveadm -D -vvvvvv index * > > > > Program received signal SIGTRAP, Trace/breakpoint trap. > > Cannot remove breakpoints because program is no longer writable. > > It might be running in another process. > > Further execution is probably impossible. > > 0x0000000800624490 in ?? () > > (gdb) > > > > Ideas? 
> > > > > > On Sun, Oct 23, 2016 at 11:14 AM, Aki Tuomi wrote: > > > >> Hi, > >> > >> can you run doveadm in gdb, wait for it to crash, and then go to frame 6 > >> ( io_loop_handler_run_internal) and run > >> > >> p errno > >> p ret > >> p *ioloop > >> p *ctx > >> p *events > >> > >> Sorry but the crash doesn't make enough sense yet to me, we need to > >> determine what the invalid parameter is. > >> > >> > Larry Rosenman http://www.lerctr.org/~ler > >> > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > >> > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > >> > > > > > > > > -- > > Larry Rosenman http://www.lerctr.org/~ler > > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > > > > > > -- > Larry Rosenman http://www.lerctr.org/~ler > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 From larryrtx at gmail.com Sun Oct 23 21:22:44 2016 From: larryrtx at gmail.com (Larry Rosenman) Date: Sun, 23 Oct 2016 16:22:44 -0500 Subject: keent() from Tika - with doveadm In-Reply-To: <445708024.118.1477243243399@appsuite-dev.open-xchange.com> References: <1219160790.717.1477210796307@appsuite-dev.open-xchange.com> <6177676.109.1477236435200@appsuite-dev.open-xchange.com> <191657457.111.1477237004555@appsuite-dev.open-xchange.com> <1063773824.113.1477239267493@appsuite-dev.open-xchange.com> <445708024.118.1477243243399@appsuite-dev.open-xchange.com> Message-ID: doveadm(mrm): Debug: http-client: conn 127.0.0.1:9998 [1]: Got 200 response for request [Req38: PUT http://localhost:9998/tika/] (took 296 ms + 8 ms in queue) doveadm(mrm): Panic: kevent(): Invalid argument Program received signal SIGABRT, Aborted. 
0x00000008014e6f7a in thr_kill () from /lib/libc.so.7 (gdb) fr 6 #6 0x00000008011a3e49 in io_loop_handler_run_internal (ioloop=0x801c214e0) at ioloop-kqueue.c:131 131 i_panic("kevent(): %m"); (gdb) p ts $1 = {tv_sec = 34389923520, tv_nsec = 140737488345872000} (gdb) p errno $2 = 22 (gdb) p ret $3 = -1 (gdb) p *ioloop $4 = {prev = 0x801c21080, cur_ctx = 0x0, io_files = 0x801c4f980, next_io_file = 0x0, timeouts = 0x801d17540, timeouts_new = {arr = { buffer = 0x801cd9700, element_size = 8}, v = 0x801cd9700, v_modifiable = 0x801cd9700}, handler_context = 0x801d17560, notify_handler_context = 0x0, max_fd_count = 0, time_moved_callback = 0x800d53bb0 , next_max_time = 1477257580, ioloop_wait_usecs = 27148, io_pending_count = 1, running = 1, iolooping = 1} (gdb) p* ctx $5 = {kq = 21, deleted_count = 0, events = {arr = {buffer = 0x801cd9740, element_size = 32}, v = 0x801cd9740, v_modifiable = 0x801cd9740}} (gdb) p *events $6 = {ident = 22, filter = -1, flags = 0, fflags = 0, data = 8, udata = 0x801c4f980} (gdb) thebighonker.lerctr.org ~ $ ps auxw|grep doveadm mrm 46965 0.0 0.2 108516 55264 0 I+ 4:19PM 0:02.28 gdb /usr/local/bin/doveadm (gdb7111) mrm 46985 0.0 0.0 81236 15432 0 TX 4:19PM 0:03.51 /usr/local/bin/doveadm -D -vvvvvvv index * ler 47221 0.0 0.0 18856 2360 1 S+ 4:21PM 0:00.00 grep doveadm thebighonker.lerctr.org ~ $ sudo lsof -p 46985 Password: COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME doveadm 46985 mrm cwd VDIR 22,2669215774 152 4 /home/mrm doveadm 46985 mrm rtd VDIR 19,766509061 28 4 / doveadm 46985 mrm txt VREG 119,3584295129 1714125 182952 /usr/local/bin/doveadm doveadm 46985 mrm txt VREG 19,766509061 132272 14382 /libexec/ld-elf.so.1 doveadm 46985 mrm txt VREG 22,2669215774 6920 10680 /home/mrm/mail/TRAVEL/.imap/hawaiian.airlines/dovecot.index.log doveadm 46985 mrm txt VREG 22,2669215774 7224 10716 /home/mrm/mail/TRAVEL/.imap/priceline/dovecot.index.log doveadm 46985 mrm txt VREG 22,2669215774 11080 10650 
/home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index.log doveadm 46985 mrm txt VREG 22,2669215774 2968 10679 /home/mrm/mail/TRAVEL/.imap/hawaiian.airlines/dovecot.index.cache doveadm 46985 mrm txt VREG 22,2669215774 3108 10715 /home/mrm/mail/TRAVEL/.imap/priceline/dovecot.index.cache doveadm 46985 mrm txt VREG 22,2669215774 6520 139902 /home/mrm/mail/.imap/Sent/dovecot.index.log doveadm 46985 mrm txt VREG 22,2669215774 9236 10648 /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index.cache doveadm 46985 mrm txt VREG 22,2669215774 174892 143343 /home/mrm/mail/.imap/Sent/dovecot.index.cache doveadm 46985 mrm txt VREG 22,2669215774 32656 143058 /home/mrm/mail/.imap/INBOX/dovecot.index.log doveadm 46985 mrm txt VREG 19,766509061 720 30627 /usr/share/i18n/csmapper/CP/CP1251%UCS.mps doveadm 46985 mrm txt VREG 19,766509061 720 30630 /usr/share/i18n/csmapper/CP/CP1252%UCS.mps doveadm 46985 mrm txt VREG 19,766509061 89576 6846 /lib/libz.so.6 doveadm 46985 mrm txt VREG 19,766509061 62008 5994 /lib/libcrypt.so.5 doveadm 46985 mrm txt VREG 119,3584295129 6725689 183611 /usr/local/lib/dovecot/libdovecot-storage.so.0.0.0 doveadm 46985 mrm txt VREG 119,3584295129 3162259 183615 /usr/local/lib/dovecot/libdovecot.so.0.0.0 doveadm 46985 mrm txt VREG 19,766509061 1649944 4782 /lib/libc.so.7 doveadm 46985 mrm txt VREG 119,3584295129 80142 183550 /usr/local/lib/dovecot/lib15_notify_plugin.so doveadm 46985 mrm txt VREG 119,3584295129 652615 183556 /usr/local/lib/dovecot/lib20_fts_plugin.so doveadm 46985 mrm txt VREG 119,3584295129 2730888 268825 /usr/local/lib/libicui18n.so.57.1 doveadm 46985 mrm txt VREG 119,3584295129 1753976 268849 /usr/local/lib/libicuuc.so.57.1 doveadm 46985 mrm txt VREG 119,3584295129 1704 268821 /usr/local/lib/libicudata.so.57.1 doveadm 46985 mrm txt VREG 19,766509061 102560 6745 /lib/libthr.so.3 doveadm 46985 mrm txt VREG 19,766509061 184712 5795 /lib/libm.so.5 doveadm 46985 mrm txt VREG 19,766509061 774000 5642 /usr/lib/libc++.so.1 doveadm 46985 mrm txt VREG 19,766509061 
103304 5742 /lib/libcxxrt.so.1 doveadm 46985 mrm txt VREG 19,766509061 56344 7436 /lib/libgcc_s.so.1 doveadm 46985 mrm txt VREG 119,3584295129 349981 183782 /usr/local/lib/dovecot/lib21_fts_lucene_plugin.so doveadm 46985 mrm txt VREG 119,3584295129 1969384 113258 /usr/local/lib/libclucene-core.so.2.3.3.4 doveadm 46985 mrm txt VREG 119,3584295129 128992 113261 /usr/local/lib/libclucene-shared.so.2.3.3.4 doveadm 46985 mrm txt VREG 119,3584295129 143141 183578 /usr/local/lib/dovecot/lib90_stats_plugin.so doveadm 46985 mrm txt VREG 119,3584295129 37368 151926 /usr/local/lib/dovecot/doveadm/lib10_doveadm_sieve_plugin.so doveadm 46985 mrm txt VREG 119,3584295129 693808 151924 /usr/local/lib/dovecot-2.2-pigeonhole/libdovecot-sieve.so.0.0.0 doveadm 46985 mrm txt VREG 119,3584295129 146477 183599 /usr/local/lib/dovecot/libdovecot-lda.so.0.0.0 doveadm 46985 mrm txt VREG 119,3584295129 13823 183780 /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_lucene_plugin.so doveadm 46985 mrm txt VREG 119,3584295129 88081 183527 /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_plugin.so doveadm 46985 mrm txt VREG 19,766509061 8304 6330 /usr/lib/i18n/libiconv_std.so.4 doveadm 46985 mrm txt VREG 19,766509061 6744 6318 /usr/lib/i18n/libUTF8.so.4 doveadm 46985 mrm txt VREG 19,766509061 4384 6336 /usr/lib/i18n/libmapper_none.so.4 doveadm 46985 mrm txt VREG 19,766509061 7584 6345 /usr/lib/i18n/libmapper_std.so.4 doveadm 46985 mrm 0u VCHR 0,188 0t390889 188 /dev/pts/0 doveadm 46985 mrm 1u VCHR 0,188 0t390889 188 /dev/pts/0 doveadm 46985 mrm 2u VCHR 0,188 0t390889 188 /dev/pts/0 doveadm 46985 mrm 3u PIPE 0xfffff806fdf505d0 16384 ->0xfffff806fdf50730 doveadm 46985 mrm 4u PIPE 0xfffff806fdf50730 0 ->0xfffff806fdf505d0 doveadm 46985 mrm 5u KQUEUE 0xfffff806350b0600 count=0, state=0 doveadm 46985 mrm 6w FIFO 163,709754999 0t0 29707 /var/run/dovecot/stats-mail doveadm 46985 mrm 7u VREG 22,2669215774 11080 10650 /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index.log doveadm 46985 mrm 8u VREG 
22,2669215774 536 137895 /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index doveadm 46985 mrm 9u VREG 22,2669215774 6920 10680 /home/mrm/mail/TRAVEL/.imap/hawaiian.airlines/dovecot.index.log doveadm 46985 mrm 10u VREG 22,2669215774 2968 10679 /home/mrm/mail/TRAVEL/.imap/hawaiian.airlines/dovecot.index.cache doveadm 46985 mrm 11u VREG 22,2669215774 6520 139902 /home/mrm/mail/.imap/Sent/dovecot.index.log doveadm 46985 mrm 12u VREG 22,2669215774 9288 139905 /home/mrm/mail/.imap/Sent/dovecot.index doveadm 46985 mrm 13u VREG 22,2669215774 7224 10716 /home/mrm/mail/TRAVEL/.imap/priceline/dovecot.index.log doveadm 46985 mrm 14u VREG 22,2669215774 3108 10715 /home/mrm/mail/TRAVEL/.imap/priceline/dovecot.index.cache doveadm 46985 mrm 15u VREG 22,2669215774 9236 10648 /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index.cache doveadm 46985 mrm 16u VREG 22,2669215774 174892 143343 /home/mrm/mail/.imap/Sent/dovecot.index.cache doveadm 46985 mrm 17u VREG 22,2669215774 32656 143058 /home/mrm/mail/.imap/INBOX/dovecot.index.log doveadm 46985 mrm 18u VREG 22,2669215774 0 135848 /home/mrm (zroot/home/mrm) doveadm 46985 mrm 19u VREG 22,2669215774 35656 135336 /home/mrm/mail/.imap/INBOX/dovecot.index doveadm 46985 mrm 20u VREG 22,2669215774 0 135849 /home/mrm (zroot/home/mrm) doveadm 46985 mrm 21u KQUEUE 0xfffff80163b1ba00 count=1, state=0 doveadm 46985 mrm 22u IPv4 0xfffff805ea69a000 0t0 TCP localhost:44730->localhost:9998 (ESTABLISHED) doveadm 46985 mrm 25uR VREG 22,2669215774 32997612 4151 /home/mrm/mail/Sent thebighonker.lerctr.org On Sun, Oct 23, 2016 at 12:20 PM, Aki Tuomi wrote: > According to man page, the only way it can return EINVAL (22) is either > bad filter, or bad timeout. I can't see how the filter would be bad, so I'm > guessing ts must be bad. Unfortunately I forgot to ask for it, so I am > going to have to ask you run it again and run > > p ts > > if that's valid, then the only thing that can be bad if the file > descriptor 23. 
> > Aki > > > On October 23, 2016 at 7:42 PM Larry Rosenman > wrote: > > > > > > ok, gdb7 works: > > (gdb) fr 6 > > #6 0x00000008011a3e49 in io_loop_handler_run_internal > (ioloop=0x801c214e0) > > at ioloop-kqueue.c:131 > > 131 i_panic("kevent(): %m"); > > (gdb) p errno > > $1 = 22 > > (gdb) p ret > > $2 = -1 > > (gdb) p *ioloop > > $3 = {prev = 0x801c21080, cur_ctx = 0x0, io_files = 0x801c4f980, > > next_io_file = 0x0, timeouts = 0x801c19e60, timeouts_new = {arr = > {buffer = > > 0x801c5ac80, element_size = 8}, v = 0x801c5ac80, > > v_modifiable = 0x801c5ac80}, handler_context = 0x801c19e80, > > notify_handler_context = 0x0, max_fd_count = 0, time_moved_callback = > > 0x800d53bb0 , > > next_max_time = 1477240784, ioloop_wait_usecs = 29863, > io_pending_count = > > 1, running = 1, iolooping = 1} > > (gdb) p *ctx > > $4 = {kq = 22, deleted_count = 0, events = {arr = {buffer = 0x801c5acc0, > > element_size = 32}, v = 0x801c5acc0, v_modifiable = 0x801c5acc0}} > > (gdb) p *events > > $5 = {ident = 23, filter = -1, flags = 0, fflags = 0, data = 8, udata = > > 0x801c4f980} > > (gdb) > > > > > > > > On Sun, Oct 23, 2016 at 11:27 AM, Larry Rosenman > wrote: > > > > > grrr. > > > > > > /home/mrm $ gdb /usr/local/bin/doveadm > > > GNU gdb 6.1.1 [FreeBSD] > > > Copyright 2004 Free Software Foundation, Inc. > > > GDB is free software, covered by the GNU General Public License, and > you > > > are > > > welcome to change it and/or distribute copies of it under certain > > > conditions. > > > Type "show copying" to see the conditions. > > > There is absolutely no warranty for GDB. Type "show warranty" for > details. > > > This GDB was configured as "amd64-marcel-freebsd"... > > > (gdb) run -D -vvvvvv index * > > > Starting program: /usr/local/bin/doveadm -D -vvvvvv index * > > > > > > Program received signal SIGTRAP, Trace/breakpoint trap. > > > Cannot remove breakpoints because program is no longer writable. > > > It might be running in another process. 
> > > Further execution is probably impossible. > > > 0x0000000800624490 in ?? () > > > (gdb) > > > > > > Ideas? > > > > > > > > > On Sun, Oct 23, 2016 at 11:14 AM, Aki Tuomi > wrote: > > > > > >> Hi, > > >> > > >> can you run doveadm in gdb, wait for it to crash, and then go to > frame 6 > > >> ( io_loop_handler_run_internal) and run > > >> > > >> p errno > > >> p ret > > >> p *ioloop > > >> p *ctx > > >> p *events > > >> > > >> Sorry but the crash doesn't make enough sense yet to me, we need to > > >> determine what the invalid parameter is. > > >> > > >> > Larry Rosenman http://www.lerctr.org/~ler > > >> > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > > >> > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > > >> > > > > > > > > > > > > -- > > > Larry Rosenman http://www.lerctr.org/~ler > > > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > > > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > > > > > > > > > > > -- > > Larry Rosenman http://www.lerctr.org/~ler > > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 From aki.tuomi at dovecot.fi Mon Oct 24 05:48:29 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Mon, 24 Oct 2016 08:48:29 +0300 (EEST) Subject: keent() from Tika - with doveadm In-Reply-To: References: <1219160790.717.1477210796307@appsuite-dev.open-xchange.com> <6177676.109.1477236435200@appsuite-dev.open-xchange.com> <191657457.111.1477237004555@appsuite-dev.open-xchange.com> <1063773824.113.1477239267493@appsuite-dev.open-xchange.com> <445708024.118.1477243243399@appsuite-dev.open-xchange.com> Message-ID: <429756207.896.1477288110361@appsuite-dev.open-xchange.com> Ok so that timeval makes no sense. We'll look into it. 
Aki > On October 24, 2016 at 12:22 AM Larry Rosenman wrote: > > > doveadm(mrm): Debug: http-client: conn 127.0.0.1:9998 [1]: Got 200 response > for request [Req38: PUT http://localhost:9998/tika/] (took 296 ms + 8 ms in > queue) > doveadm(mrm): Panic: kevent(): Invalid argument > > Program received signal SIGABRT, Aborted. > 0x00000008014e6f7a in thr_kill () from /lib/libc.so.7 > (gdb) fr 6 > #6 0x00000008011a3e49 in io_loop_handler_run_internal (ioloop=0x801c214e0) > at ioloop-kqueue.c:131 > 131 i_panic("kevent(): %m"); > (gdb) p ts > $1 = {tv_sec = 34389923520, tv_nsec = 140737488345872000} > (gdb) p errno > $2 = 22 > (gdb) p ret > $3 = -1 > (gdb) p *ioloop > $4 = {prev = 0x801c21080, cur_ctx = 0x0, io_files = 0x801c4f980, > next_io_file = 0x0, timeouts = 0x801d17540, timeouts_new = {arr = { > buffer = 0x801cd9700, element_size = 8}, v = 0x801cd9700, > v_modifiable = 0x801cd9700}, handler_context = 0x801d17560, > notify_handler_context = 0x0, max_fd_count = 0, > time_moved_callback = 0x800d53bb0 , > next_max_time = 1477257580, ioloop_wait_usecs = 27148, io_pending_count = > 1, > running = 1, iolooping = 1} > (gdb) p* ctx > $5 = {kq = 21, deleted_count = 0, events = {arr = {buffer = 0x801cd9740, > element_size = 32}, v = 0x801cd9740, v_modifiable = 0x801cd9740}} > (gdb) p *events > $6 = {ident = 22, filter = -1, flags = 0, fflags = 0, data = 8, > udata = 0x801c4f980} > (gdb) > > thebighonker.lerctr.org ~ $ ps auxw|grep doveadm > mrm 46965 0.0 0.2 108516 55264 0 I+ 4:19PM 0:02.28 gdb > /usr/local/bin/doveadm (gdb7111) > mrm 46985 0.0 0.0 81236 15432 0 TX 4:19PM 0:03.51 > /usr/local/bin/doveadm -D -vvvvvvv index * > ler 47221 0.0 0.0 18856 2360 1 S+ 4:21PM 0:00.00 grep > doveadm > thebighonker.lerctr.org ~ $ sudo lsof -p 46985 > Password: > COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME > doveadm 46985 mrm cwd VDIR 22,2669215774 152 4 > /home/mrm > doveadm 46985 mrm rtd VDIR 19,766509061 28 4 / > doveadm 46985 mrm txt VREG 119,3584295129 1714125 182952 > 
/usr/local/bin/doveadm > doveadm 46985 mrm txt VREG 19,766509061 132272 14382 > /libexec/ld-elf.so.1 > doveadm 46985 mrm txt VREG 22,2669215774 6920 10680 > /home/mrm/mail/TRAVEL/.imap/hawaiian.airlines/dovecot.index.log > doveadm 46985 mrm txt VREG 22,2669215774 7224 10716 > /home/mrm/mail/TRAVEL/.imap/priceline/dovecot.index.log > doveadm 46985 mrm txt VREG 22,2669215774 11080 10650 > /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index.log > doveadm 46985 mrm txt VREG 22,2669215774 2968 10679 > /home/mrm/mail/TRAVEL/.imap/hawaiian.airlines/dovecot.index.cache > doveadm 46985 mrm txt VREG 22,2669215774 3108 10715 > /home/mrm/mail/TRAVEL/.imap/priceline/dovecot.index.cache > doveadm 46985 mrm txt VREG 22,2669215774 6520 139902 > /home/mrm/mail/.imap/Sent/dovecot.index.log > doveadm 46985 mrm txt VREG 22,2669215774 9236 10648 > /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index.cache > doveadm 46985 mrm txt VREG 22,2669215774 174892 143343 > /home/mrm/mail/.imap/Sent/dovecot.index.cache > doveadm 46985 mrm txt VREG 22,2669215774 32656 143058 > /home/mrm/mail/.imap/INBOX/dovecot.index.log > doveadm 46985 mrm txt VREG 19,766509061 720 30627 > /usr/share/i18n/csmapper/CP/CP1251%UCS.mps > doveadm 46985 mrm txt VREG 19,766509061 720 30630 > /usr/share/i18n/csmapper/CP/CP1252%UCS.mps > doveadm 46985 mrm txt VREG 19,766509061 89576 6846 > /lib/libz.so.6 > doveadm 46985 mrm txt VREG 19,766509061 62008 5994 > /lib/libcrypt.so.5 > doveadm 46985 mrm txt VREG 119,3584295129 6725689 183611 > /usr/local/lib/dovecot/libdovecot-storage.so.0.0.0 > doveadm 46985 mrm txt VREG 119,3584295129 3162259 183615 > /usr/local/lib/dovecot/libdovecot.so.0.0.0 > doveadm 46985 mrm txt VREG 19,766509061 1649944 4782 > /lib/libc.so.7 > doveadm 46985 mrm txt VREG 119,3584295129 80142 183550 > /usr/local/lib/dovecot/lib15_notify_plugin.so > doveadm 46985 mrm txt VREG 119,3584295129 652615 183556 > /usr/local/lib/dovecot/lib20_fts_plugin.so > doveadm 46985 mrm txt VREG 119,3584295129 2730888 268825 > 
/usr/local/lib/libicui18n.so.57.1 > doveadm 46985 mrm txt VREG 119,3584295129 1753976 268849 > /usr/local/lib/libicuuc.so.57.1 > doveadm 46985 mrm txt VREG 119,3584295129 1704 268821 > /usr/local/lib/libicudata.so.57.1 > doveadm 46985 mrm txt VREG 19,766509061 102560 6745 > /lib/libthr.so.3 > doveadm 46985 mrm txt VREG 19,766509061 184712 5795 > /lib/libm.so.5 > doveadm 46985 mrm txt VREG 19,766509061 774000 5642 > /usr/lib/libc++.so.1 > doveadm 46985 mrm txt VREG 19,766509061 103304 5742 > /lib/libcxxrt.so.1 > doveadm 46985 mrm txt VREG 19,766509061 56344 7436 > /lib/libgcc_s.so.1 > doveadm 46985 mrm txt VREG 119,3584295129 349981 183782 > /usr/local/lib/dovecot/lib21_fts_lucene_plugin.so > doveadm 46985 mrm txt VREG 119,3584295129 1969384 113258 > /usr/local/lib/libclucene-core.so.2.3.3.4 > doveadm 46985 mrm txt VREG 119,3584295129 128992 113261 > /usr/local/lib/libclucene-shared.so.2.3.3.4 > doveadm 46985 mrm txt VREG 119,3584295129 143141 183578 > /usr/local/lib/dovecot/lib90_stats_plugin.so > doveadm 46985 mrm txt VREG 119,3584295129 37368 151926 > /usr/local/lib/dovecot/doveadm/lib10_doveadm_sieve_plugin.so > doveadm 46985 mrm txt VREG 119,3584295129 693808 151924 > /usr/local/lib/dovecot-2.2-pigeonhole/libdovecot-sieve.so.0.0.0 > doveadm 46985 mrm txt VREG 119,3584295129 146477 183599 > /usr/local/lib/dovecot/libdovecot-lda.so.0.0.0 > doveadm 46985 mrm txt VREG 119,3584295129 13823 183780 > /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_lucene_plugin.so > doveadm 46985 mrm txt VREG 119,3584295129 88081 183527 > /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_plugin.so > doveadm 46985 mrm txt VREG 19,766509061 8304 6330 > /usr/lib/i18n/libiconv_std.so.4 > doveadm 46985 mrm txt VREG 19,766509061 6744 6318 > /usr/lib/i18n/libUTF8.so.4 > doveadm 46985 mrm txt VREG 19,766509061 4384 6336 > /usr/lib/i18n/libmapper_none.so.4 > doveadm 46985 mrm txt VREG 19,766509061 7584 6345 > /usr/lib/i18n/libmapper_std.so.4 > doveadm 46985 mrm 0u VCHR 0,188 0t390889 188 > 
/dev/pts/0 > doveadm 46985 mrm 1u VCHR 0,188 0t390889 188 > /dev/pts/0 > doveadm 46985 mrm 2u VCHR 0,188 0t390889 188 > /dev/pts/0 > doveadm 46985 mrm 3u PIPE 0xfffff806fdf505d0 16384 > ->0xfffff806fdf50730 > doveadm 46985 mrm 4u PIPE 0xfffff806fdf50730 0 > ->0xfffff806fdf505d0 > doveadm 46985 mrm 5u KQUEUE 0xfffff806350b0600 > count=0, state=0 > doveadm 46985 mrm 6w FIFO 163,709754999 0t0 29707 > /var/run/dovecot/stats-mail > doveadm 46985 mrm 7u VREG 22,2669215774 11080 10650 > /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index.log > doveadm 46985 mrm 8u VREG 22,2669215774 536 137895 > /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index > doveadm 46985 mrm 9u VREG 22,2669215774 6920 10680 > /home/mrm/mail/TRAVEL/.imap/hawaiian.airlines/dovecot.index.log > doveadm 46985 mrm 10u VREG 22,2669215774 2968 10679 > /home/mrm/mail/TRAVEL/.imap/hawaiian.airlines/dovecot.index.cache > doveadm 46985 mrm 11u VREG 22,2669215774 6520 139902 > /home/mrm/mail/.imap/Sent/dovecot.index.log > doveadm 46985 mrm 12u VREG 22,2669215774 9288 139905 > /home/mrm/mail/.imap/Sent/dovecot.index > doveadm 46985 mrm 13u VREG 22,2669215774 7224 10716 > /home/mrm/mail/TRAVEL/.imap/priceline/dovecot.index.log > doveadm 46985 mrm 14u VREG 22,2669215774 3108 10715 > /home/mrm/mail/TRAVEL/.imap/priceline/dovecot.index.cache > doveadm 46985 mrm 15u VREG 22,2669215774 9236 10648 > /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index.cache > doveadm 46985 mrm 16u VREG 22,2669215774 174892 143343 > /home/mrm/mail/.imap/Sent/dovecot.index.cache > doveadm 46985 mrm 17u VREG 22,2669215774 32656 143058 > /home/mrm/mail/.imap/INBOX/dovecot.index.log > doveadm 46985 mrm 18u VREG 22,2669215774 0 135848 > /home/mrm (zroot/home/mrm) > doveadm 46985 mrm 19u VREG 22,2669215774 35656 135336 > /home/mrm/mail/.imap/INBOX/dovecot.index > doveadm 46985 mrm 20u VREG 22,2669215774 0 135849 > /home/mrm (zroot/home/mrm) > doveadm 46985 mrm 21u KQUEUE 0xfffff80163b1ba00 > count=1, state=0 > doveadm 46985 mrm 22u IPv4 
0xfffff805ea69a000 0t0 TCP > localhost:44730->localhost:9998 (ESTABLISHED) > doveadm 46985 mrm 25uR VREG 22,2669215774 32997612 4151 > /home/mrm/mail/Sent > thebighonker.lerctr.org > > > > On Sun, Oct 23, 2016 at 12:20 PM, Aki Tuomi wrote: > > > According to man page, the only way it can return EINVAL (22) is either > > bad filter, or bad timeout. I can't see how the filter would be bad, so I'm > > guessing ts must be bad. Unfortunately I forgot to ask for it, so I am > > going to have to ask you run it again and run > > > > p ts > > > > if that's valid, then the only thing that can be bad if the file > > descriptor 23. > > > > Aki > > > > > On October 23, 2016 at 7:42 PM Larry Rosenman > > wrote: > > > > > > > > > ok, gdb7 works: > > > (gdb) fr 6 > > > #6 0x00000008011a3e49 in io_loop_handler_run_internal > > (ioloop=0x801c214e0) > > > at ioloop-kqueue.c:131 > > > 131 i_panic("kevent(): %m"); > > > (gdb) p errno > > > $1 = 22 > > > (gdb) p ret > > > $2 = -1 > > > (gdb) p *ioloop > > > $3 = {prev = 0x801c21080, cur_ctx = 0x0, io_files = 0x801c4f980, > > > next_io_file = 0x0, timeouts = 0x801c19e60, timeouts_new = {arr = > > {buffer = > > > 0x801c5ac80, element_size = 8}, v = 0x801c5ac80, > > > v_modifiable = 0x801c5ac80}, handler_context = 0x801c19e80, > > > notify_handler_context = 0x0, max_fd_count = 0, time_moved_callback = > > > 0x800d53bb0 , > > > next_max_time = 1477240784, ioloop_wait_usecs = 29863, > > io_pending_count = > > > 1, running = 1, iolooping = 1} > > > (gdb) p *ctx > > > $4 = {kq = 22, deleted_count = 0, events = {arr = {buffer = 0x801c5acc0, > > > element_size = 32}, v = 0x801c5acc0, v_modifiable = 0x801c5acc0}} > > > (gdb) p *events > > > $5 = {ident = 23, filter = -1, flags = 0, fflags = 0, data = 8, udata = > > > 0x801c4f980} > > > (gdb) > > > > > > > > > > > > On Sun, Oct 23, 2016 at 11:27 AM, Larry Rosenman > > wrote: > > > > > > > grrr. 
> > > > > > > > /home/mrm $ gdb /usr/local/bin/doveadm > > > > GNU gdb 6.1.1 [FreeBSD] > > > > Copyright 2004 Free Software Foundation, Inc. > > > > GDB is free software, covered by the GNU General Public License, and > > you > > > > are > > > > welcome to change it and/or distribute copies of it under certain > > > > conditions. > > > > Type "show copying" to see the conditions. > > > > There is absolutely no warranty for GDB. Type "show warranty" for > > details. > > > > This GDB was configured as "amd64-marcel-freebsd"... > > > > (gdb) run -D -vvvvvv index * > > > > Starting program: /usr/local/bin/doveadm -D -vvvvvv index * > > > > > > > > Program received signal SIGTRAP, Trace/breakpoint trap. > > > > Cannot remove breakpoints because program is no longer writable. > > > > It might be running in another process. > > > > Further execution is probably impossible. > > > > 0x0000000800624490 in ?? () > > > > (gdb) > > > > > > > > Ideas? > > > > > > > > > > > > On Sun, Oct 23, 2016 at 11:14 AM, Aki Tuomi > > wrote: > > > > > > > >> Hi, > > > >> > > > >> can you run doveadm in gdb, wait for it to crash, and then go to > > frame 6 > > > >> ( io_loop_handler_run_internal) and run > > > >> > > > >> p errno > > > >> p ret > > > >> p *ioloop > > > >> p *ctx > > > >> p *events > > > >> > > > >> Sorry but the crash doesn't make enough sense yet to me, we need to > > > >> determine what the invalid parameter is. 
> > > >> > > > >> > Larry Rosenman http://www.lerctr.org/~ler > > > >> > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > > > >> > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > > > >> > > > > > > > > > > > > > -- > > > > Larry Rosenman http://www.lerctr.org/~ler > > > > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > > > > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > > > > > > > > > > > > -- > > > Larry Rosenman http://www.lerctr.org/~ler > > > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > > > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > > > > > > -- > Larry Rosenman http://www.lerctr.org/~ler > Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 From gandalf.corvotempesta at gmail.com Mon Oct 24 07:00:03 2016 From: gandalf.corvotempesta at gmail.com (Gandalf Corvotempesta) Date: Mon, 24 Oct 2016 09:00:03 +0200 Subject: Server migration In-Reply-To: References: Message-ID: Hi, I have to migrate, online, a Dovecot 1.2.15 installation to a new server. What is the best way to accomplish this? I have two possibilities: 1) migrate from the very old server to a newer server with the same Dovecot version 2) migrate from the very old server to a new server with the latest Dovecot version Can I simply use rsync to sync everything and, once the incremental syncs become quick, move the mailbox from the old server to the new server? My biggest concern is how to manage the emails that arrive during the server switch. Let's assume a 50 GB maildir: the first sync would require hours to complete (tons of very small files), so I can't shut down the mailbox. 
The second sync would require much less time and would also sync the emails received during the first sync (but the mailbox is still receiving new emails). Then, as a third phase, I can move the mailbox to the new server (by changing the Postfix configuration) so that all new emails are received on the new server, and then start the last rsync (removing the --delete flag, or any new emails would be deleted as not existing on the older server). Any better solution? From aki.tuomi at dovecot.fi Mon Oct 24 07:17:11 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Mon, 24 Oct 2016 10:17:11 +0300 Subject: keent() from Tika - with doveadm In-Reply-To: <429756207.896.1477288110361@appsuite-dev.open-xchange.com> References: <1219160790.717.1477210796307@appsuite-dev.open-xchange.com> <6177676.109.1477236435200@appsuite-dev.open-xchange.com> <191657457.111.1477237004555@appsuite-dev.open-xchange.com> <1063773824.113.1477239267493@appsuite-dev.open-xchange.com> <445708024.118.1477243243399@appsuite-dev.open-xchange.com> <429756207.896.1477288110361@appsuite-dev.open-xchange.com> Message-ID: <3dc312ae-7def-0097-f664-61df0f56969f@dovecot.fi> Hi! Can you try these two patches? Aki On 24.10.2016 08:48, Aki Tuomi wrote: > Ok so that timeval makes no sense. We'll look into it. > > Aki > >> On October 24, 2016 at 12:22 AM Larry Rosenman wrote: >> >> >> doveadm(mrm): Debug: http-client: conn 127.0.0.1:9998 [1]: Got 200 response >> for request [Req38: PUT http://localhost:9998/tika/] (took 296 ms + 8 ms in >> queue) >> doveadm(mrm): Panic: kevent(): Invalid argument >> >> Program received signal SIGABRT, Aborted. 
>> 0x00000008014e6f7a in thr_kill () from /lib/libc.so.7 >> (gdb) fr 6 >> #6 0x00000008011a3e49 in io_loop_handler_run_internal (ioloop=0x801c214e0) >> at ioloop-kqueue.c:131 >> 131 i_panic("kevent(): %m"); >> (gdb) p ts >> $1 = {tv_sec = 34389923520, tv_nsec = 140737488345872000} >> (gdb) p errno >> $2 = 22 >> (gdb) p ret >> $3 = -1 >> (gdb) p *ioloop >> $4 = {prev = 0x801c21080, cur_ctx = 0x0, io_files = 0x801c4f980, >> next_io_file = 0x0, timeouts = 0x801d17540, timeouts_new = {arr = { >> buffer = 0x801cd9700, element_size = 8}, v = 0x801cd9700, >> v_modifiable = 0x801cd9700}, handler_context = 0x801d17560, >> notify_handler_context = 0x0, max_fd_count = 0, >> time_moved_callback = 0x800d53bb0 , >> next_max_time = 1477257580, ioloop_wait_usecs = 27148, io_pending_count = >> 1, >> running = 1, iolooping = 1} >> (gdb) p* ctx >> $5 = {kq = 21, deleted_count = 0, events = {arr = {buffer = 0x801cd9740, >> element_size = 32}, v = 0x801cd9740, v_modifiable = 0x801cd9740}} >> (gdb) p *events >> $6 = {ident = 22, filter = -1, flags = 0, fflags = 0, data = 8, >> udata = 0x801c4f980} >> (gdb) >> >> thebighonker.lerctr.org ~ $ ps auxw|grep doveadm >> mrm 46965 0.0 0.2 108516 55264 0 I+ 4:19PM 0:02.28 gdb >> /usr/local/bin/doveadm (gdb7111) >> mrm 46985 0.0 0.0 81236 15432 0 TX 4:19PM 0:03.51 >> /usr/local/bin/doveadm -D -vvvvvvv index * >> ler 47221 0.0 0.0 18856 2360 1 S+ 4:21PM 0:00.00 grep >> doveadm >> thebighonker.lerctr.org ~ $ sudo lsof -p 46985 >> Password: >> COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME >> doveadm 46985 mrm cwd VDIR 22,2669215774 152 4 >> /home/mrm >> doveadm 46985 mrm rtd VDIR 19,766509061 28 4 / >> doveadm 46985 mrm txt VREG 119,3584295129 1714125 182952 >> /usr/local/bin/doveadm >> doveadm 46985 mrm txt VREG 19,766509061 132272 14382 >> /libexec/ld-elf.so.1 >> doveadm 46985 mrm txt VREG 22,2669215774 6920 10680 >> /home/mrm/mail/TRAVEL/.imap/hawaiian.airlines/dovecot.index.log >> doveadm 46985 mrm txt VREG 22,2669215774 7224 10716 >> 
/home/mrm/mail/TRAVEL/.imap/priceline/dovecot.index.log >> doveadm 46985 mrm txt VREG 22,2669215774 11080 10650 >> /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index.log >> doveadm 46985 mrm txt VREG 22,2669215774 2968 10679 >> /home/mrm/mail/TRAVEL/.imap/hawaiian.airlines/dovecot.index.cache >> doveadm 46985 mrm txt VREG 22,2669215774 3108 10715 >> /home/mrm/mail/TRAVEL/.imap/priceline/dovecot.index.cache >> doveadm 46985 mrm txt VREG 22,2669215774 6520 139902 >> /home/mrm/mail/.imap/Sent/dovecot.index.log >> doveadm 46985 mrm txt VREG 22,2669215774 9236 10648 >> /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index.cache >> doveadm 46985 mrm txt VREG 22,2669215774 174892 143343 >> /home/mrm/mail/.imap/Sent/dovecot.index.cache >> doveadm 46985 mrm txt VREG 22,2669215774 32656 143058 >> /home/mrm/mail/.imap/INBOX/dovecot.index.log >> doveadm 46985 mrm txt VREG 19,766509061 720 30627 >> /usr/share/i18n/csmapper/CP/CP1251%UCS.mps >> doveadm 46985 mrm txt VREG 19,766509061 720 30630 >> /usr/share/i18n/csmapper/CP/CP1252%UCS.mps >> doveadm 46985 mrm txt VREG 19,766509061 89576 6846 >> /lib/libz.so.6 >> doveadm 46985 mrm txt VREG 19,766509061 62008 5994 >> /lib/libcrypt.so.5 >> doveadm 46985 mrm txt VREG 119,3584295129 6725689 183611 >> /usr/local/lib/dovecot/libdovecot-storage.so.0.0.0 >> doveadm 46985 mrm txt VREG 119,3584295129 3162259 183615 >> /usr/local/lib/dovecot/libdovecot.so.0.0.0 >> doveadm 46985 mrm txt VREG 19,766509061 1649944 4782 >> /lib/libc.so.7 >> doveadm 46985 mrm txt VREG 119,3584295129 80142 183550 >> /usr/local/lib/dovecot/lib15_notify_plugin.so >> doveadm 46985 mrm txt VREG 119,3584295129 652615 183556 >> /usr/local/lib/dovecot/lib20_fts_plugin.so >> doveadm 46985 mrm txt VREG 119,3584295129 2730888 268825 >> /usr/local/lib/libicui18n.so.57.1 >> doveadm 46985 mrm txt VREG 119,3584295129 1753976 268849 >> /usr/local/lib/libicuuc.so.57.1 >> doveadm 46985 mrm txt VREG 119,3584295129 1704 268821 >> /usr/local/lib/libicudata.so.57.1 >> doveadm 46985 mrm txt 
VREG 19,766509061 102560 6745 >> /lib/libthr.so.3 >> doveadm 46985 mrm txt VREG 19,766509061 184712 5795 >> /lib/libm.so.5 >> doveadm 46985 mrm txt VREG 19,766509061 774000 5642 >> /usr/lib/libc++.so.1 >> doveadm 46985 mrm txt VREG 19,766509061 103304 5742 >> /lib/libcxxrt.so.1 >> doveadm 46985 mrm txt VREG 19,766509061 56344 7436 >> /lib/libgcc_s.so.1 >> doveadm 46985 mrm txt VREG 119,3584295129 349981 183782 >> /usr/local/lib/dovecot/lib21_fts_lucene_plugin.so >> doveadm 46985 mrm txt VREG 119,3584295129 1969384 113258 >> /usr/local/lib/libclucene-core.so.2.3.3.4 >> doveadm 46985 mrm txt VREG 119,3584295129 128992 113261 >> /usr/local/lib/libclucene-shared.so.2.3.3.4 >> doveadm 46985 mrm txt VREG 119,3584295129 143141 183578 >> /usr/local/lib/dovecot/lib90_stats_plugin.so >> doveadm 46985 mrm txt VREG 119,3584295129 37368 151926 >> /usr/local/lib/dovecot/doveadm/lib10_doveadm_sieve_plugin.so >> doveadm 46985 mrm txt VREG 119,3584295129 693808 151924 >> /usr/local/lib/dovecot-2.2-pigeonhole/libdovecot-sieve.so.0.0.0 >> doveadm 46985 mrm txt VREG 119,3584295129 146477 183599 >> /usr/local/lib/dovecot/libdovecot-lda.so.0.0.0 >> doveadm 46985 mrm txt VREG 119,3584295129 13823 183780 >> /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_lucene_plugin.so >> doveadm 46985 mrm txt VREG 119,3584295129 88081 183527 >> /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_plugin.so >> doveadm 46985 mrm txt VREG 19,766509061 8304 6330 >> /usr/lib/i18n/libiconv_std.so.4 >> doveadm 46985 mrm txt VREG 19,766509061 6744 6318 >> /usr/lib/i18n/libUTF8.so.4 >> doveadm 46985 mrm txt VREG 19,766509061 4384 6336 >> /usr/lib/i18n/libmapper_none.so.4 >> doveadm 46985 mrm txt VREG 19,766509061 7584 6345 >> /usr/lib/i18n/libmapper_std.so.4 >> doveadm 46985 mrm 0u VCHR 0,188 0t390889 188 >> /dev/pts/0 >> doveadm 46985 mrm 1u VCHR 0,188 0t390889 188 >> /dev/pts/0 >> doveadm 46985 mrm 2u VCHR 0,188 0t390889 188 >> /dev/pts/0 >> doveadm 46985 mrm 3u PIPE 0xfffff806fdf505d0 16384 >> 
->0xfffff806fdf50730 >> doveadm 46985 mrm 4u PIPE 0xfffff806fdf50730 0 >> ->0xfffff806fdf505d0 >> doveadm 46985 mrm 5u KQUEUE 0xfffff806350b0600 >> count=0, state=0 >> doveadm 46985 mrm 6w FIFO 163,709754999 0t0 29707 >> /var/run/dovecot/stats-mail >> doveadm 46985 mrm 7u VREG 22,2669215774 11080 10650 >> /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index.log >> doveadm 46985 mrm 8u VREG 22,2669215774 536 137895 >> /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index >> doveadm 46985 mrm 9u VREG 22,2669215774 6920 10680 >> /home/mrm/mail/TRAVEL/.imap/hawaiian.airlines/dovecot.index.log >> doveadm 46985 mrm 10u VREG 22,2669215774 2968 10679 >> /home/mrm/mail/TRAVEL/.imap/hawaiian.airlines/dovecot.index.cache >> doveadm 46985 mrm 11u VREG 22,2669215774 6520 139902 >> /home/mrm/mail/.imap/Sent/dovecot.index.log >> doveadm 46985 mrm 12u VREG 22,2669215774 9288 139905 >> /home/mrm/mail/.imap/Sent/dovecot.index >> doveadm 46985 mrm 13u VREG 22,2669215774 7224 10716 >> /home/mrm/mail/TRAVEL/.imap/priceline/dovecot.index.log >> doveadm 46985 mrm 14u VREG 22,2669215774 3108 10715 >> /home/mrm/mail/TRAVEL/.imap/priceline/dovecot.index.cache >> doveadm 46985 mrm 15u VREG 22,2669215774 9236 10648 >> /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index.cache >> doveadm 46985 mrm 16u VREG 22,2669215774 174892 143343 >> /home/mrm/mail/.imap/Sent/dovecot.index.cache >> doveadm 46985 mrm 17u VREG 22,2669215774 32656 143058 >> /home/mrm/mail/.imap/INBOX/dovecot.index.log >> doveadm 46985 mrm 18u VREG 22,2669215774 0 135848 >> /home/mrm (zroot/home/mrm) >> doveadm 46985 mrm 19u VREG 22,2669215774 35656 135336 >> /home/mrm/mail/.imap/INBOX/dovecot.index >> doveadm 46985 mrm 20u VREG 22,2669215774 0 135849 >> /home/mrm (zroot/home/mrm) >> doveadm 46985 mrm 21u KQUEUE 0xfffff80163b1ba00 >> count=1, state=0 >> doveadm 46985 mrm 22u IPv4 0xfffff805ea69a000 0t0 TCP >> localhost:44730->localhost:9998 (ESTABLISHED) >> doveadm 46985 mrm 25uR VREG 22,2669215774 32997612 4151 >> /home/mrm/mail/Sent >> 
thebighonker.lerctr.org >> >> >> >> On Sun, Oct 23, 2016 at 12:20 PM, Aki Tuomi wrote: >> >>> According to man page, the only way it can return EINVAL (22) is either >>> bad filter, or bad timeout. I can't see how the filter would be bad, so I'm >>> guessing ts must be bad. Unfortunately I forgot to ask for it, so I am >>> going to have to ask you run it again and run >>> >>> p ts >>> >>> if that's valid, then the only thing that can be bad if the file >>> descriptor 23. >>> >>> Aki >>> >>>> On October 23, 2016 at 7:42 PM Larry Rosenman >>> wrote: >>>> >>>> ok, gdb7 works: >>>> (gdb) fr 6 >>>> #6 0x00000008011a3e49 in io_loop_handler_run_internal >>> (ioloop=0x801c214e0) >>>> at ioloop-kqueue.c:131 >>>> 131 i_panic("kevent(): %m"); >>>> (gdb) p errno >>>> $1 = 22 >>>> (gdb) p ret >>>> $2 = -1 >>>> (gdb) p *ioloop >>>> $3 = {prev = 0x801c21080, cur_ctx = 0x0, io_files = 0x801c4f980, >>>> next_io_file = 0x0, timeouts = 0x801c19e60, timeouts_new = {arr = >>> {buffer = >>>> 0x801c5ac80, element_size = 8}, v = 0x801c5ac80, >>>> v_modifiable = 0x801c5ac80}, handler_context = 0x801c19e80, >>>> notify_handler_context = 0x0, max_fd_count = 0, time_moved_callback = >>>> 0x800d53bb0 , >>>> next_max_time = 1477240784, ioloop_wait_usecs = 29863, >>> io_pending_count = >>>> 1, running = 1, iolooping = 1} >>>> (gdb) p *ctx >>>> $4 = {kq = 22, deleted_count = 0, events = {arr = {buffer = 0x801c5acc0, >>>> element_size = 32}, v = 0x801c5acc0, v_modifiable = 0x801c5acc0}} >>>> (gdb) p *events >>>> $5 = {ident = 23, filter = -1, flags = 0, fflags = 0, data = 8, udata = >>>> 0x801c4f980} >>>> (gdb) >>>> >>>> >>>> >>>> On Sun, Oct 23, 2016 at 11:27 AM, Larry Rosenman >>> wrote: >>>>> grrr. >>>>> >>>>> /home/mrm $ gdb /usr/local/bin/doveadm >>>>> GNU gdb 6.1.1 [FreeBSD] >>>>> Copyright 2004 Free Software Foundation, Inc. 
>>>>> GDB is free software, covered by the GNU General Public License, and >>> you >>>>> are >>>>> welcome to change it and/or distribute copies of it under certain >>>>> conditions. >>>>> Type "show copying" to see the conditions. >>>>> There is absolutely no warranty for GDB. Type "show warranty" for >>> details. >>>>> This GDB was configured as "amd64-marcel-freebsd"... >>>>> (gdb) run -D -vvvvvv index * >>>>> Starting program: /usr/local/bin/doveadm -D -vvvvvv index * >>>>> >>>>> Program received signal SIGTRAP, Trace/breakpoint trap. >>>>> Cannot remove breakpoints because program is no longer writable. >>>>> It might be running in another process. >>>>> Further execution is probably impossible. >>>>> 0x0000000800624490 in ?? () >>>>> (gdb) >>>>> >>>>> Ideas? >>>>> >>>>> >>>>> On Sun, Oct 23, 2016 at 11:14 AM, Aki Tuomi >>> wrote: >>>>>> Hi, >>>>>> >>>>>> can you run doveadm in gdb, wait for it to crash, and then go to >>> frame 6 >>>>>> ( io_loop_handler_run_internal) and run >>>>>> >>>>>> p errno >>>>>> p ret >>>>>> p *ioloop >>>>>> p *ctx >>>>>> p *events >>>>>> >>>>>> Sorry but the crash doesn't make enough sense yet to me, we need to >>>>>> determine what the invalid parameter is. 
>>>>>> >>>>>>> Larry Rosenman http://www.lerctr.org/~ler >>>>>>> Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com >>>>>>> US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 >>>>> >>>>> >>>>> -- >>>>> Larry Rosenman http://www.lerctr.org/~ler >>>>> Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com >>>>> US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 >>>>> >>>> >>>> >>>> -- >>>> Larry Rosenman http://www.lerctr.org/~ler >>>> Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com >>>> US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 >> >> >> -- >> Larry Rosenman http://www.lerctr.org/~ler >> Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com >> US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 -------------- next part -------------- A non-text attachment was scrubbed... Name: ioloop-kqueue.tgz Type: application/x-compressed-tar Size: 1428 bytes Desc: not available URL: From karol at augustin.pl Mon Oct 24 09:23:27 2016 From: karol at augustin.pl (Karol Augustin) Date: Mon, 24 Oct 2016 10:23:27 +0100 Subject: Server migration In-Reply-To: References: Message-ID: On 2016-10-24 8:00, Gandalf Corvotempesta wrote: > can i simply use rsync to sync everything and, when the sync is quick, move > the mailbox from the old server to the new server? My biggest concern is > how to manage the the emails that are coming during the server switch. > > Let's assume a 50gb maildir , the first sync would require hours to > complete (tons of very small files) do i can't shutdown the mailbox. 
The > second sync would require much less time and would also sync the email > received during the first sync (but the mailbox is still receiving new > emails) > now, as third phase, i can move the mailbox to the new server (by changing > the postfix configuration) so that all new emails are received on the new > server and then start the last rsync (by removing the --delete flag or any > new emails would be deleted as not existsnt on the older server) > > Any better solution? When I am doing this I just turn off both servers for the third sync. It's short enough not to cause much of a problem. Then, after the third sync, I start the new server and all clients can connect to it, which also mitigates any problems from clients that would still be connected to the old server. The last issue depends on the way you force everyone to use the new server (DNS, routing, etc.). Remember that besides the new emails that could arrive during the sync, you also have all sorts of user-generated operations such as move, delete, etc. So if you just do the third rsync without --delete you can end up duplicating users' emails if they move them during the procedure. Best, Karol From aki.tuomi at dovecot.fi Mon Oct 24 09:34:38 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Mon, 24 Oct 2016 12:34:38 +0300 Subject: keent() from Tika - with doveadm In-Reply-To: <3dc312ae-7def-0097-f664-61df0f56969f@dovecot.fi> References: <1219160790.717.1477210796307@appsuite-dev.open-xchange.com> <6177676.109.1477236435200@appsuite-dev.open-xchange.com> <191657457.111.1477237004555@appsuite-dev.open-xchange.com> <1063773824.113.1477239267493@appsuite-dev.open-xchange.com> <445708024.118.1477243243399@appsuite-dev.open-xchange.com> <429756207.896.1477288110361@appsuite-dev.open-xchange.com> <3dc312ae-7def-0097-f664-61df0f56969f@dovecot.fi> Message-ID: <6507f93d-b2d6-5700-d450-0cca4e87dc06@dovecot.fi> Hi! 
We found some problems with those patches, and ended up doing slightly different fix: https://github.com/dovecot/core/compare/3e41b3d%5E...cca98b.patch Aki On 24.10.2016 10:17, Aki Tuomi wrote: > Hi! > > Can you try these two patches? > > Aki > > > On 24.10.2016 08:48, Aki Tuomi wrote: >> Ok so that timeval makes no sense. We'll look into it. >> >> Aki >> >>> On October 24, 2016 at 12:22 AM Larry Rosenman wrote: >>> >>> >>> doveadm(mrm): Debug: http-client: conn 127.0.0.1:9998 [1]: Got 200 response >>> for request [Req38: PUT http://localhost:9998/tika/] (took 296 ms + 8 ms in >>> queue) >>> doveadm(mrm): Panic: kevent(): Invalid argument >>> >>> Program received signal SIGABRT, Aborted. >>> 0x00000008014e6f7a in thr_kill () from /lib/libc.so.7 >>> (gdb) fr 6 >>> #6 0x00000008011a3e49 in io_loop_handler_run_internal (ioloop=0x801c214e0) >>> at ioloop-kqueue.c:131 >>> 131 i_panic("kevent(): %m"); >>> (gdb) p ts >>> $1 = {tv_sec = 34389923520, tv_nsec = 140737488345872000} >>> (gdb) p errno >>> $2 = 22 >>> (gdb) p ret >>> $3 = -1 >>> (gdb) p *ioloop >>> $4 = {prev = 0x801c21080, cur_ctx = 0x0, io_files = 0x801c4f980, >>> next_io_file = 0x0, timeouts = 0x801d17540, timeouts_new = {arr = { >>> buffer = 0x801cd9700, element_size = 8}, v = 0x801cd9700, >>> v_modifiable = 0x801cd9700}, handler_context = 0x801d17560, >>> notify_handler_context = 0x0, max_fd_count = 0, >>> time_moved_callback = 0x800d53bb0 , >>> next_max_time = 1477257580, ioloop_wait_usecs = 27148, io_pending_count = >>> 1, >>> running = 1, iolooping = 1} >>> (gdb) p* ctx >>> $5 = {kq = 21, deleted_count = 0, events = {arr = {buffer = 0x801cd9740, >>> element_size = 32}, v = 0x801cd9740, v_modifiable = 0x801cd9740}} >>> (gdb) p *events >>> $6 = {ident = 22, filter = -1, flags = 0, fflags = 0, data = 8, >>> udata = 0x801c4f980} >>> (gdb) >>> >>> thebighonker.lerctr.org ~ $ ps auxw|grep doveadm >>> mrm 46965 0.0 0.2 108516 55264 0 I+ 4:19PM 0:02.28 gdb >>> /usr/local/bin/doveadm (gdb7111) >>> mrm 46985 0.0 
0.0 81236 15432 0 TX 4:19PM 0:03.51 >>> /usr/local/bin/doveadm -D -vvvvvvv index * >>> ler 47221 0.0 0.0 18856 2360 1 S+ 4:21PM 0:00.00 grep >>> doveadm >>> thebighonker.lerctr.org ~ $ sudo lsof -p 46985 >>> Password: >>> COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME >>> doveadm 46985 mrm cwd VDIR 22,2669215774 152 4 >>> /home/mrm >>> doveadm 46985 mrm rtd VDIR 19,766509061 28 4 / >>> doveadm 46985 mrm txt VREG 119,3584295129 1714125 182952 >>> /usr/local/bin/doveadm >>> doveadm 46985 mrm txt VREG 19,766509061 132272 14382 >>> /libexec/ld-elf.so.1 >>> doveadm 46985 mrm txt VREG 22,2669215774 6920 10680 >>> /home/mrm/mail/TRAVEL/.imap/hawaiian.airlines/dovecot.index.log >>> doveadm 46985 mrm txt VREG 22,2669215774 7224 10716 >>> /home/mrm/mail/TRAVEL/.imap/priceline/dovecot.index.log >>> doveadm 46985 mrm txt VREG 22,2669215774 11080 10650 >>> /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index.log >>> doveadm 46985 mrm txt VREG 22,2669215774 2968 10679 >>> /home/mrm/mail/TRAVEL/.imap/hawaiian.airlines/dovecot.index.cache >>> doveadm 46985 mrm txt VREG 22,2669215774 3108 10715 >>> /home/mrm/mail/TRAVEL/.imap/priceline/dovecot.index.cache >>> doveadm 46985 mrm txt VREG 22,2669215774 6520 139902 >>> /home/mrm/mail/.imap/Sent/dovecot.index.log >>> doveadm 46985 mrm txt VREG 22,2669215774 9236 10648 >>> /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index.cache >>> doveadm 46985 mrm txt VREG 22,2669215774 174892 143343 >>> /home/mrm/mail/.imap/Sent/dovecot.index.cache >>> doveadm 46985 mrm txt VREG 22,2669215774 32656 143058 >>> /home/mrm/mail/.imap/INBOX/dovecot.index.log >>> doveadm 46985 mrm txt VREG 19,766509061 720 30627 >>> /usr/share/i18n/csmapper/CP/CP1251%UCS.mps >>> doveadm 46985 mrm txt VREG 19,766509061 720 30630 >>> /usr/share/i18n/csmapper/CP/CP1252%UCS.mps >>> doveadm 46985 mrm txt VREG 19,766509061 89576 6846 >>> /lib/libz.so.6 >>> doveadm 46985 mrm txt VREG 19,766509061 62008 5994 >>> /lib/libcrypt.so.5 >>> doveadm 46985 mrm txt VREG 119,3584295129 6725689 
183611 >>> /usr/local/lib/dovecot/libdovecot-storage.so.0.0.0 >>> doveadm 46985 mrm txt VREG 119,3584295129 3162259 183615 >>> /usr/local/lib/dovecot/libdovecot.so.0.0.0 >>> doveadm 46985 mrm txt VREG 19,766509061 1649944 4782 >>> /lib/libc.so.7 >>> doveadm 46985 mrm txt VREG 119,3584295129 80142 183550 >>> /usr/local/lib/dovecot/lib15_notify_plugin.so >>> doveadm 46985 mrm txt VREG 119,3584295129 652615 183556 >>> /usr/local/lib/dovecot/lib20_fts_plugin.so >>> doveadm 46985 mrm txt VREG 119,3584295129 2730888 268825 >>> /usr/local/lib/libicui18n.so.57.1 >>> doveadm 46985 mrm txt VREG 119,3584295129 1753976 268849 >>> /usr/local/lib/libicuuc.so.57.1 >>> doveadm 46985 mrm txt VREG 119,3584295129 1704 268821 >>> /usr/local/lib/libicudata.so.57.1 >>> doveadm 46985 mrm txt VREG 19,766509061 102560 6745 >>> /lib/libthr.so.3 >>> doveadm 46985 mrm txt VREG 19,766509061 184712 5795 >>> /lib/libm.so.5 >>> doveadm 46985 mrm txt VREG 19,766509061 774000 5642 >>> /usr/lib/libc++.so.1 >>> doveadm 46985 mrm txt VREG 19,766509061 103304 5742 >>> /lib/libcxxrt.so.1 >>> doveadm 46985 mrm txt VREG 19,766509061 56344 7436 >>> /lib/libgcc_s.so.1 >>> doveadm 46985 mrm txt VREG 119,3584295129 349981 183782 >>> /usr/local/lib/dovecot/lib21_fts_lucene_plugin.so >>> doveadm 46985 mrm txt VREG 119,3584295129 1969384 113258 >>> /usr/local/lib/libclucene-core.so.2.3.3.4 >>> doveadm 46985 mrm txt VREG 119,3584295129 128992 113261 >>> /usr/local/lib/libclucene-shared.so.2.3.3.4 >>> doveadm 46985 mrm txt VREG 119,3584295129 143141 183578 >>> /usr/local/lib/dovecot/lib90_stats_plugin.so >>> doveadm 46985 mrm txt VREG 119,3584295129 37368 151926 >>> /usr/local/lib/dovecot/doveadm/lib10_doveadm_sieve_plugin.so >>> doveadm 46985 mrm txt VREG 119,3584295129 693808 151924 >>> /usr/local/lib/dovecot-2.2-pigeonhole/libdovecot-sieve.so.0.0.0 >>> doveadm 46985 mrm txt VREG 119,3584295129 146477 183599 >>> /usr/local/lib/dovecot/libdovecot-lda.so.0.0.0 >>> doveadm 46985 mrm txt VREG 119,3584295129 13823 
183780 >>> /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_lucene_plugin.so >>> doveadm 46985 mrm txt VREG 119,3584295129 88081 183527 >>> /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_plugin.so >>> doveadm 46985 mrm txt VREG 19,766509061 8304 6330 >>> /usr/lib/i18n/libiconv_std.so.4 >>> doveadm 46985 mrm txt VREG 19,766509061 6744 6318 >>> /usr/lib/i18n/libUTF8.so.4 >>> doveadm 46985 mrm txt VREG 19,766509061 4384 6336 >>> /usr/lib/i18n/libmapper_none.so.4 >>> doveadm 46985 mrm txt VREG 19,766509061 7584 6345 >>> /usr/lib/i18n/libmapper_std.so.4 >>> doveadm 46985 mrm 0u VCHR 0,188 0t390889 188 >>> /dev/pts/0 >>> doveadm 46985 mrm 1u VCHR 0,188 0t390889 188 >>> /dev/pts/0 >>> doveadm 46985 mrm 2u VCHR 0,188 0t390889 188 >>> /dev/pts/0 >>> doveadm 46985 mrm 3u PIPE 0xfffff806fdf505d0 16384 >>> ->0xfffff806fdf50730 >>> doveadm 46985 mrm 4u PIPE 0xfffff806fdf50730 0 >>> ->0xfffff806fdf505d0 >>> doveadm 46985 mrm 5u KQUEUE 0xfffff806350b0600 >>> count=0, state=0 >>> doveadm 46985 mrm 6w FIFO 163,709754999 0t0 29707 >>> /var/run/dovecot/stats-mail >>> doveadm 46985 mrm 7u VREG 22,2669215774 11080 10650 >>> /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index.log >>> doveadm 46985 mrm 8u VREG 22,2669215774 536 137895 >>> /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index >>> doveadm 46985 mrm 9u VREG 22,2669215774 6920 10680 >>> /home/mrm/mail/TRAVEL/.imap/hawaiian.airlines/dovecot.index.log >>> doveadm 46985 mrm 10u VREG 22,2669215774 2968 10679 >>> /home/mrm/mail/TRAVEL/.imap/hawaiian.airlines/dovecot.index.cache >>> doveadm 46985 mrm 11u VREG 22,2669215774 6520 139902 >>> /home/mrm/mail/.imap/Sent/dovecot.index.log >>> doveadm 46985 mrm 12u VREG 22,2669215774 9288 139905 >>> /home/mrm/mail/.imap/Sent/dovecot.index >>> doveadm 46985 mrm 13u VREG 22,2669215774 7224 10716 >>> /home/mrm/mail/TRAVEL/.imap/priceline/dovecot.index.log >>> doveadm 46985 mrm 14u VREG 22,2669215774 3108 10715 >>> /home/mrm/mail/TRAVEL/.imap/priceline/dovecot.index.cache >>> doveadm 46985 mrm 15u 
VREG 22,2669215774 9236 10648 >>> /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index.cache >>> doveadm 46985 mrm 16u VREG 22,2669215774 174892 143343 >>> /home/mrm/mail/.imap/Sent/dovecot.index.cache >>> doveadm 46985 mrm 17u VREG 22,2669215774 32656 143058 >>> /home/mrm/mail/.imap/INBOX/dovecot.index.log >>> doveadm 46985 mrm 18u VREG 22,2669215774 0 135848 >>> /home/mrm (zroot/home/mrm) >>> doveadm 46985 mrm 19u VREG 22,2669215774 35656 135336 >>> /home/mrm/mail/.imap/INBOX/dovecot.index >>> doveadm 46985 mrm 20u VREG 22,2669215774 0 135849 >>> /home/mrm (zroot/home/mrm) >>> doveadm 46985 mrm 21u KQUEUE 0xfffff80163b1ba00 >>> count=1, state=0 >>> doveadm 46985 mrm 22u IPv4 0xfffff805ea69a000 0t0 TCP >>> localhost:44730->localhost:9998 (ESTABLISHED) >>> doveadm 46985 mrm 25uR VREG 22,2669215774 32997612 4151 >>> /home/mrm/mail/Sent >>> thebighonker.lerctr.org >>> >>> >>> >>> On Sun, Oct 23, 2016 at 12:20 PM, Aki Tuomi wrote: >>> >>>> According to man page, the only way it can return EINVAL (22) is either >>>> bad filter, or bad timeout. I can't see how the filter would be bad, so I'm >>>> guessing ts must be bad. Unfortunately I forgot to ask for it, so I am >>>> going to have to ask you run it again and run >>>> >>>> p ts >>>> >>>> if that's valid, then the only thing that can be bad if the file >>>> descriptor 23. 
>>>> >>>> Aki >>>> >>>>> On October 23, 2016 at 7:42 PM Larry Rosenman >>>> wrote: >>>>> ok, gdb7 works: >>>>> (gdb) fr 6 >>>>> #6 0x00000008011a3e49 in io_loop_handler_run_internal >>>> (ioloop=0x801c214e0) >>>>> at ioloop-kqueue.c:131 >>>>> 131 i_panic("kevent(): %m"); >>>>> (gdb) p errno >>>>> $1 = 22 >>>>> (gdb) p ret >>>>> $2 = -1 >>>>> (gdb) p *ioloop >>>>> $3 = {prev = 0x801c21080, cur_ctx = 0x0, io_files = 0x801c4f980, >>>>> next_io_file = 0x0, timeouts = 0x801c19e60, timeouts_new = {arr = >>>> {buffer = >>>>> 0x801c5ac80, element_size = 8}, v = 0x801c5ac80, >>>>> v_modifiable = 0x801c5ac80}, handler_context = 0x801c19e80, >>>>> notify_handler_context = 0x0, max_fd_count = 0, time_moved_callback = >>>>> 0x800d53bb0 , >>>>> next_max_time = 1477240784, ioloop_wait_usecs = 29863, >>>> io_pending_count = >>>>> 1, running = 1, iolooping = 1} >>>>> (gdb) p *ctx >>>>> $4 = {kq = 22, deleted_count = 0, events = {arr = {buffer = 0x801c5acc0, >>>>> element_size = 32}, v = 0x801c5acc0, v_modifiable = 0x801c5acc0}} >>>>> (gdb) p *events >>>>> $5 = {ident = 23, filter = -1, flags = 0, fflags = 0, data = 8, udata = >>>>> 0x801c4f980} >>>>> (gdb) >>>>> >>>>> >>>>> >>>>> On Sun, Oct 23, 2016 at 11:27 AM, Larry Rosenman >>>> wrote: >>>>>> grrr. >>>>>> >>>>>> /home/mrm $ gdb /usr/local/bin/doveadm >>>>>> GNU gdb 6.1.1 [FreeBSD] >>>>>> Copyright 2004 Free Software Foundation, Inc. >>>>>> GDB is free software, covered by the GNU General Public License, and >>>> you >>>>>> are >>>>>> welcome to change it and/or distribute copies of it under certain >>>>>> conditions. >>>>>> Type "show copying" to see the conditions. >>>>>> There is absolutely no warranty for GDB. Type "show warranty" for >>>> details. >>>>>> This GDB was configured as "amd64-marcel-freebsd"... >>>>>> (gdb) run -D -vvvvvv index * >>>>>> Starting program: /usr/local/bin/doveadm -D -vvvvvv index * >>>>>> >>>>>> Program received signal SIGTRAP, Trace/breakpoint trap. 
>>>>>> Cannot remove breakpoints because program is no longer writable. >>>>>> It might be running in another process. >>>>>> Further execution is probably impossible. >>>>>> 0x0000000800624490 in ?? () >>>>>> (gdb) >>>>>> >>>>>> Ideas? >>>>>> >>>>>> >>>>>> On Sun, Oct 23, 2016 at 11:14 AM, Aki Tuomi >>>> wrote: >>>>>>> Hi, >>>>>>> >>>>>>> can you run doveadm in gdb, wait for it to crash, and then go to >>>> frame 6 >>>>>>> ( io_loop_handler_run_internal) and run >>>>>>> >>>>>>> p errno >>>>>>> p ret >>>>>>> p *ioloop >>>>>>> p *ctx >>>>>>> p *events >>>>>>> >>>>>>> Sorry but the crash doesn't make enough sense yet to me, we need to >>>>>>> determine what the invalid parameter is. >>>>>>> >>>>>>>> Larry Rosenman http://www.lerctr.org/~ler >>>>>>>> Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com >>>>>>>> US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 >>>>>> >>>>>> -- >>>>>> Larry Rosenman http://www.lerctr.org/~ler >>>>>> Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com >>>>>> US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 >>>>>> >>>>> >>>>> -- >>>>> Larry Rosenman http://www.lerctr.org/~ler >>>>> Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com >>>>> US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 >>> >>> -- >>> Larry Rosenman http://www.lerctr.org/~ler >>> Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com >>> US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 From ms at ddnetservice.de Mon Oct 24 12:47:03 2016 From: ms at ddnetservice.de (Michael Seevogel) Date: Mon, 24 Oct 2016 14:47:03 +0200 Subject: Server migration In-Reply-To: References: Message-ID: Am 24.10.2016 um 09:00 schrieb Gandalf Corvotempesta: > Hi > i have to migrate, online, a dovecot 1.2.15 to a new server. Which is the > best way to accomplish this? 
>
> I have 2 possibilities:
> 1) migrate from the very old server to a newer server with the same dovecot version
> 2) migrate from the very old server to a new server with the latest dovecot version
>
> Can I simply use rsync to sync everything and, once a sync run is quick, move
> the mailbox from the old server to the new server? My biggest concern is
> how to manage the emails that arrive during the server switch.
>
> Let's assume a 50GB maildir; the first sync would require hours to
> complete (tons of very small files), so I can't shut down the mailbox. The
> second sync would require much less time and would also sync the emails
> received during the first sync (but the mailbox is still receiving new
> emails). Now, as a third phase, I can move the mailbox to the new server
> (by changing the postfix configuration) so that all new emails are received
> on the new server, and then start the last rsync (removing the --delete
> flag, or any new emails would be deleted as not existent on the older server).
>
> Any better solution?
>

If your server OS supports newer Dovecot versions then I would highly suggest you upgrade to Dovecot 2.2.xx (or at least to the latest 2.1) and set up Dovecot's replication[1] feature. With this method you can actually achieve a smooth migration: your current server replicates all emails in real time to your new server, including new incoming emails and mailbox changes, and when the migration is done you'll just have to change your DNS and disable the replication service.

If you don't want or cannot set up replication, you could still do a one-shot migration via Dovecot's dsync[2] on the new server, pulling the mails from the old one. 50GB isn't that much as long as your two servers are connected to the internet with at least 100 Mbit. For the duration of the migration you may want to block your users from accessing Dovecot via iptables.
However, the bottom line is that whether this is really necessary depends on you and the needs of your mail users/customers.

Best regards
Michael Seevogel

P.S. You should think about using mdbox as the mailbox format on the new server. It's a kind of hybrid of mbox and maildir and benefits from features of both its predecessors. However, backup and restore are "a bit" different in the case of mdbox. Just have a read...

[1] http://wiki.dovecot.org/Replication
[2] http://wiki2.dovecot.org/Migration/Dsync

From gandalf.corvotempesta at gmail.com Mon Oct 24 13:18:13 2016
From: gandalf.corvotempesta at gmail.com (Gandalf Corvotempesta)
Date: Mon, 24 Oct 2016 15:18:13 +0200
Subject: Server migration
In-Reply-To:
References:
Message-ID:

2016-10-24 11:23 GMT+02:00 Karol Augustin :
> When I am doing this I just turn off both servers for the third sync.
> It's short enough not to cause much of a problem. And then after the third
> sync I start the new server and all clients can connect to it, so I also
> mitigate any problems resulting from clients that would still be
> connected to the old server. The last issue depends on the way you force
> everyone to use the new server (DNS, routing, etc).

The speed of the third sync depends on the number of files to be scanned. I have mailboxes with tons of very small emails, thus even if the first two syncs have transferred all the data, the scan made by rsync to check which files need to be transferred requires many hours.

My own mailbox has 80GB of mail. I can sync everything to a new server and then start a new rsync phase. This new phase requires exactly 1 hour and 49 minutes (as I can see from last night's backup). Transferred data: 78MB. 1 hour and 49 minutes to transfer only 78MB.

> Remember that besides the new emails that could arrive during the sync you
> also have all sorts of user-generated operations such as move, delete, etc.
> So if you just do the 3rd rsync without --delete you can end up duplicating
> users' emails if they move them during the procedure.
By shutting down both servers, the "--delete" argument could be used with no issues.

From gandalf.corvotempesta at gmail.com Mon Oct 24 13:25:02 2016
From: gandalf.corvotempesta at gmail.com (Gandalf Corvotempesta)
Date: Mon, 24 Oct 2016 15:25:02 +0200
Subject: Server migration
In-Reply-To:
References:
Message-ID:

2016-10-24 14:47 GMT+02:00 Michael Seevogel :
> If your server OS supports newer Dovecot versions then I would highly
> suggest you upgrade to Dovecot 2.2.xx (or at least to the latest 2.1) and
> set up Dovecot's replication[1] feature.

Are you talking about the new server or the older one that I have to replace? The new server has to be installed from scratch, so, yes, I can use Dovecot 2.2 from Jessie.

The "old" server is based on Squeeze; I can upgrade that to Wheezy and install Dovecot 2.2 from wheezy-backports, but I had huge trouble when I tried to do the same on a similar server. I was unable to upgrade the dovecot configuration by following the documentation, as this didn't work:

doveconf -n -c /etc/dovecot/dovecot.conf > dovecot-2.conf

I had an empty dovecot-2.conf file, no warning or output at all. It did nothing.

> With this method you can actually achieve a smooth migration: your current
> server replicates all emails in real time to your new server, including new
> incoming emails and mailbox changes, and when the migration is done you'll
> just have to change your DNS and disable the replication service.

Cool. Any guide about this? Should I start the replication on one side and wait for it to finish before pointing the mailbox to the new server?

> If you don't want or cannot set up replication, you could still do a one-shot
> migration via Dovecot's dsync[2] on the new server, pulling the mails from
> the old one. 50GB isn't that much as long as your two servers are connected
> to the internet with at least 100 Mbit. For the duration of the migration
> you may want to block your users from accessing Dovecot via iptables.
> However, the bottom line is that whether this is really necessary depends
> on you and the needs of your mail users/customers.

I can't block the whole server. I have to migrate one user at a time. But I can disable the POP3/IMAP access for that user, so no one is changing the files during the migration (except for the postfix/exim delivery agent).

> P.S. You should think about using mdbox as the mailbox format on the new
> server. It's a kind of hybrid of mbox and maildir and benefits from features
> of both its predecessors. However, backup and restore are "a bit" different
> in the case of mdbox. Just have a read...
>
>
> [1] http://wiki.dovecot.org/Replication
> [2] http://wiki2.dovecot.org/Migration/Dsync

Thank you

From gandalf.corvotempesta at gmail.com Mon Oct 24 13:31:20 2016
From: gandalf.corvotempesta at gmail.com (Gandalf Corvotempesta)
Date: Mon, 24 Oct 2016 15:31:20 +0200
Subject: Server migration
In-Reply-To:
References:
Message-ID:

2016-10-24 14:47 GMT+02:00 Michael Seevogel :
> P.S. You should think about using mdbox as the mailbox format on the new
> server. It's a kind of hybrid of mbox and maildir and benefits from features
> of both its predecessors. However, backup and restore are "a bit" different
> in the case of mdbox. Just have a read...

No, I don't like that format, because of this: "This also means that you must not lose the dbox index files, they can't be regenerated without data loss." Additionally, this would mean changing even our LDA, as neither Exim nor Postfix is able to deliver messages to it.
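The per-user, one-shot dsync pull discussed in this thread can be sketched roughly as follows. This is only a sketch based on the Migration/Dsync wiki page linked above; the hostname, user name, and password are placeholders, and the exact settings should be verified against that page:

```
# On the NEW server's dovecot.conf: point imapc at the old server
# (old.example.com, "secret" and the user below are placeholders):
imapc_host = old.example.com
imapc_user = %u
imapc_password = secret
imapc_features = rfc822.size fetch-headers
mail_prefetch_count = 20

# Then, per user, with that user's POP3/IMAP access disabled,
# pull the mailbox from the old server (-R reverses the direction,
# i.e. copies from the remote imapc side into the local mail location):
#   doveadm backup -R -u leander@example.com imapc:
```

Re-running the same command just before switching a user over should only transfer the delta, which keeps the per-user downtime short.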
From ms at ddnetservice.de Mon Oct 24 15:10:33 2016
From: ms at ddnetservice.de (Michael Seevogel)
Date: Mon, 24 Oct 2016 17:10:33 +0200
Subject: Server migration
In-Reply-To:
References:
Message-ID:

On 24.10.2016 at 15:25, Gandalf Corvotempesta wrote:
> 2016-10-24 14:47 GMT+02:00 Michael Seevogel :
>> If your server OS supports newer Dovecot versions then I would highly
>> suggest you upgrade to Dovecot 2.2.xx (or at least to the latest 2.1) and
>> set up Dovecot's replication[1] feature.
>
> Are you talking about the new server or the older one that I have to replace?
> The new server has to be installed from scratch, so, yes, I can use Dovecot 2.2
> from Jessie.

I meant your old server. With "old" I was expecting something like Debian Sarge or SuSE Linux 9.3. That would have been really old, but since you are on Debian Squeeze, I would definitely go with an upgraded Dovecot version and its replication service.

>
> The "old" server is based on Squeeze; I can upgrade that to Wheezy and install
> Dovecot 2.2 from wheezy-backports, but I had huge trouble when I tried to
> do the same on a similar server. I was unable to upgrade the dovecot
> configuration by following the documentation, as this didn't work:
>
> doveconf -n -c /etc/dovecot/dovecot.conf > dovecot-2.conf
>
> I had an empty dovecot-2.conf file, no warning or output at all. It
> did nothing.
>

Well, I'm not too familiar with Debian since I'm a Red Hatter, but perhaps you could use the binaries from there: http://wiki2.dovecot.org/PrebuiltBinaries

Dunno if you have to rebuild the binaries, or if you can install them straight on Squeeze. You could also try to convert your old dovecot.conf on a different machine (maybe your new server?) and then just copy it back to your old server. As a last resort you could certainly adapt the dovecot.conf for Dovecot 2.2 manually; it shouldn't be too complicated, but this is totally up to you.
>> With this method you can actually achieve a smooth migration: your current
>> server replicates all emails in real time to your new server, including new
>> incoming emails and mailbox changes, and when the migration is done you'll
>> just have to change your DNS and disable the replication service.
>
> Cool.
> Any guide about this?
> Should I start the replication on one side and wait for it to finish before
> pointing the mailbox to the new server?

How to set up and start replication is described here: http://wiki2.dovecot.org/Replication

Also make sure that you migrate/copy your userdb from the old server to the new server and that you properly test user-mailbox access on the new server before you start the replication process.

Regarding replication: I would wait to adjust the DNS records until the replication has finished and you know that the new server works as expected. However, you may want to keep the replication process running for one or two more days to catch emails still arriving due to DNS caching times on your old server. The same may apply to mail users that still access your old server via POP3/IMAP.

Best regards
Michael Seevogel

From gjn at gjn.priv.at Mon Oct 24 23:27:46 2016
From: gjn at gjn.priv.at (=?ISO-8859-1?Q?G=FCnther_J=2E?= Niederwimmer)
Date: Tue, 25 Oct 2016 01:27:46 +0200
Subject: Problem to configure dovecot-ldap.conf.ext
Message-ID: <1760129.UVaFhdmSfi@techz>

Hello,

Dovecot 2.2.25
CentOS 7

I set up LDAP (FreeIPA) with a user for dovecot that can read, search, and compare all the attributes that I need for dovecot. I also need mailAlternateAddress. When I do an ldapsearch with this user, I find everything I need to configure dovecot. But I can't seem to configure this correctly.
For a user, I can run doveadm auth test office and doveadm auth test office at example.com with successful authentication, but when I run doveadm auth test info at example.co (a mailAlternateAddress) the authentication fails.

Can anyone give me a hint about what is wrong, or is this not possible?

# Space separated list of LDAP hosts to use. host:port is allowed too.
#hosts = 192.168.100.204 192.168.100.214
#hosts = 192.168.100.204
hosts = ipa.example.com

# LDAP URIs to use. You can use this instead of hosts list. Note that this
# setting isn't supported by all LDAP libraries.
#uris = ldap://ipa.example.com ldap://ipa1.example.com

# Distinguished Name - the username used to login to the LDAP server.
# Leave it commented out to bind anonymously (useful with auth_bind=yes).
dn = uid=system,cn=sysaccounts,cn=etc,dc=example,dc=com

# Password for LDAP server, if dn is specified.
dnpass = 'XXXXXXXXXXXXXX'

# Use SASL binding instead of the simple binding. Note that this changes
# ldap_version automatically to be 3 if it's lower. Also note that SASL binds
# and auth_bind=yes don't work together.
sasl_bind = yes
# SASL mechanism name to use.
sasl_mech = gssapi
# SASL realm to use.
sasl_realm = EXAMPLE.COM
# SASL authorization ID, ie. the dnpass is for this "master user", but the
# dn is still the logged in user. Normally you want to keep this empty.
sasl_authz_id = imap/mx01.example.com at EXAMPLE.COM

# Use TLS to connect to the LDAP server.
#tls = yes

# TLS options, currently supported only with OpenLDAP:
tls_ca_cert_file = /etc/ipa/ca.crt
#tls_ca_cert_dir =
#tls_cipher_suite =
# TLS cert/key is used only if LDAP server requires a client certificate.
#tls_cert_file =
#tls_key_file =
# Valid values: never, hard, demand, allow, try
tls_require_cert = demand

# Use the given ldaprc path.
#ldaprc_path =

# LDAP library debug level as specified by LDAP_DEBUG_* in ldap_log.h.
# -1 = everything. You may need to recompile OpenLDAP with debugging enabled
# to get enough output.
#debug_level = 0

# Use authentication binding for verifying password's validity. This works by
# logging into LDAP server using the username and password given by client.
# The pass_filter is used to find the DN for the user. Note that the pass_attrs
# is still used, only the password field is ignored in it. Before doing any
# search, the binding is switched back to the default DN.
auth_bind = yes

# If authentication binding is used, you can save one LDAP request per login
# if users' DN can be specified with a common template. The template can use
# the standard %variables (see user_filter). Note that you can't
# use any pass_attrs if you use this setting.
#
# If you use this setting, it's a good idea to use a different
# dovecot-ldap.conf.ext for userdb (it can even be a symlink, just as long as
# the filename is different in userdb's args). That way one connection is used
# only for LDAP binds and another connection is used for user lookups.
# Otherwise the binding is changed to the default DN before each user lookup.
#
# For example:
# auth_bind_userdn = cn=%u,ou=people,o=org
# auth_bind_userdn = uid=%n,cn=users,cn=accounts,dc=example,dc=com

# LDAP protocol version to use. Likely 2 or 3.
ldap_version = 3

# LDAP base. %variables can be used here.
# For example: dc=mail, dc=example, dc=org
base = cn=users,cn=accounts,dc=example,dc=com

# Dereference: never, searching, finding, always
#deref = never

# Search scope: base, onelevel, subtree
scope = subtree
#scope = onelevel

# User attributes are given in LDAP-name=dovecot-internal-name list. The
# internal names are:
#   uid - System UID
#   gid - System GID
#   home - Home directory
#   mail - Mail location
#
# There are also other special fields which can be returned, see
# http://wiki2.dovecot.org/UserDatabase/ExtraFields
#user_attrs = homeDirectory=home,uidNumber=uid,gidNumber=gid
user_attrs = uid=user,uid=home=/srv/vmail/%$,=uid=10000,=gid=10000

# Filter for user lookup.
# Some variables can be used (see
# http://wiki2.dovecot.org/Variables for full list):
#   %u - username
#   %n - user part in user at domain, same as %u if there's no domain
#   %d - domain part in user at domain, empty if there's no domain
user_filter = (&(objectClass=mailrecipient)(|(uid=%Ln)(mail=%Lu) (mailAlternateAddress=%Lu)))

# Password checking attributes:
#   user: Virtual user name (user at domain), if you wish to change the
#         user-given username to something else
#   password: Password, may optionally start with {type}, eg. {crypt}
# There are also other special fields which can be returned, see
# http://wiki2.dovecot.org/PasswordDatabase/ExtraFields
pass_attrs = uid=user,userPassword=password,mailAlternateAddress=user

# If you wish to avoid two LDAP lookups (passdb + userdb), you can use
# userdb prefetch instead of userdb ldap in dovecot.conf. In that case you'll
# also have to include user_attrs in pass_attrs field prefixed with "userdb_"
# string. For example:
#pass_attrs = uid=user,userPassword=password,\
#  homeDirectory=userdb_home,uidNumber=userdb_uid,gidNumber=userdb_gid

# Filter for password lookups
#pass_filter = (&(objectClass=posixAccount)(uid=%u))
pass_filter = (&(objectClass=mailrecipient)(|(uid=%Ln)(mail=%Lu) (mailAlternateAddress=%Lu)))

# Attributes and filter to get a list of all users
iterate_attrs = uid=user, mailAlternateAddress=user
iterate_filter = (objectClass=posixAccount)

# Default password scheme. "{scheme}" before password overrides this.
# List of supported schemes is in: http://wiki2.dovecot.org/Authentication
#default_pass_scheme = CRYPT

--
mit freundlichen Grüßen / best regards,

Günther J.
Niederwimmer

From larryrtx at gmail.com Mon Oct 24 14:21:48 2016
From: larryrtx at gmail.com (Larry Rosenman)
Date: Mon, 24 Oct 2016 09:21:48 -0500
Subject: keent() from Tika - with doveadm
In-Reply-To: <6507f93d-b2d6-5700-d450-0cca4e87dc06@dovecot.fi>
References: <1219160790.717.1477210796307@appsuite-dev.open-xchange.com>
 <6177676.109.1477236435200@appsuite-dev.open-xchange.com>
 <191657457.111.1477237004555@appsuite-dev.open-xchange.com>
 <1063773824.113.1477239267493@appsuite-dev.open-xchange.com>
 <445708024.118.1477243243399@appsuite-dev.open-xchange.com>
 <429756207.896.1477288110361@appsuite-dev.open-xchange.com>
 <3dc312ae-7def-0097-f664-61df0f56969f@dovecot.fi>
 <6507f93d-b2d6-5700-d450-0cca4e87dc06@dovecot.fi>
Message-ID:

That seems to fix this kevent() problem, but I got the following lucene assert. Is that because of the previous failures?

Also, while I have your attention, is fts_autoindex supposed to work across NAMESPACES?

doveadm(mrm): Debug: Mailbox LISTS/vse-l: Opened mail UID=39483 because: fts indexing
doveadm(mrm): Debug: Mailbox LISTS/vse-l: Opened mail UID=39484 because: fts indexing
doveadm(mrm): Debug: Mailbox LISTS/vse-l: Opened mail UID=39485 because: fts indexing
doveadm(mrm): Debug: Mailbox LISTS/vse-l: Opened mail UID=39486 because: fts indexing
Assertion failed: (numDocsInStore*8 == directory->fileLength( (docStoreSegment + "." + IndexFileNames::FIELDS_INDEX_EXTENSION).c_str() )), function closeDocStore, file src/core/CLucene/index/DocumentsWriter.cpp, line 210.

Program received signal SIGABRT, Aborted.
0x00000008014e6f7a in thr_kill () from /lib/libc.so.7
(gdb) bt full
#0 0x00000008014e6f7a in thr_kill () from /lib/libc.so.7
No symbol table info available.
#1 0x00000008014e6f66 in raise () from /lib/libc.so.7
No symbol table info available.
#2 0x00000008014e6ee9 in abort () from /lib/libc.so.7
No symbol table info available.
#3 0x000000080154dee1 in __assert () from /lib/libc.so.7
No symbol table info available.
#4 0x0000000803ea1762 in lucene::index::DocumentsWriter::closeDocStore() () from /usr/local/lib/libclucene-core.so.1 No symbol table info available. #5 0x0000000803ea3d89 in lucene::index::DocumentsWriter::flush(bool) () from /usr/local/lib/libclucene-core.so.1 No symbol table info available. #6 0x0000000803ed26bb in lucene::index::IndexWriter::doFlush(bool) () from /usr/local/lib/libclucene-core.so.1 No symbol table info available. #7 0x0000000803ece25e in lucene::index::IndexWriter::flush(bool, bool) () from /usr/local/lib/libclucene-core.so.1 No symbol table info available. #8 0x0000000803ececbe in lucene::index::IndexWriter::addDocument(lucene::document::Document*, lucene::analysis::Analyzer*) () from /usr/local/lib/libclucene-core.so.1 No symbol table info available. ---Type to continue, or q to quit--- #9 0x0000000803b8cd55 in lucene_index_build_flush (index=0x801c1b640) at lucene-wrapper.cc:552 analyzer = 0x801c251c0 ret = 0 err = @0x801cd90d0: { _awhat = 0x801cd9108 "Return-Path: \nDelivered-To: mrm at lerctr.org\n", _twhat = 0x58 , error_number = 30249224} #10 0x0000000803b8c42e in lucene_index_build_more (index=0x801c1b640, uid=39486, part_idx=0, data=0x806041000 "", size=45, hdr_name=0x801c1a520 "Return-Path") at lucene-wrapper.cc:572 id = L"\x1cc8c40\b\xffffd970\x7fff\xffffd960\x7fff\x1190eba\b\x1cc0c00\b\x1191739\b\x1cc8c40\b\x1190eba\b\xffffd990\x7fff-\000\000\001-" namesize = 34378158489 datasize = 140737488345424 dest = 0x801190eba L"\x45880124\xff458aff\xb60f0124\xc48348c0\xfc35d10\x4855001f\x8348e589\x8d4840ec\x8948e075\x8b48f07d\x8b48f07d\x8948107f\x8b48e87d\x8b48e87d\x140bf\x458b4800\x888b48e8\510\x48f92948\x1488889\x8b480000\xc748e845\x14080" dest_free = 0x7fffffffd920 L"\xffffd950\x7fff\x1191199\b\x1cc8c40\b\xffffd970\x7fff\xffffd960\x7fff\x1190eba\b\x1cc0c00\b\x1191739\b\x1cc8c40\b\x1190eba\b\xffffd990\x7fff-" ---Type to continue, or q to quit--- token_flag = 0 #11 0x0000000803b8a420 in fts_backend_lucene_update_build_more ( _ctx=0x801c21240, 
data=0x806041000 "", size=45) at fts-backend-lucene.c:432 _data_stack_cur_id = 6 ctx = 0x801c21240 backend = 0x801c3a200 ret = 8 #12 0x000000080220e035 in fts_backend_update_build_more (ctx=0x801c21240, data=0x806041000 "", size=45) at fts-api.c:193 No locals. #13 0x000000080221015b in fts_build_full_words (ctx=0x7fffffffdc98, data=0x806041000 "", size=45, last=true) at fts-build-mail.c:402 i = 45 #14 0x000000080220fd45 in fts_build_data (ctx=0x7fffffffdc98, data=0x806041000 "", size=45, last=true) at fts-build-mail.c:423 No locals. #15 0x000000080221067d in fts_build_unstructured_header (ctx=0x7fffffffdc98, hdr=0x801ccf118) at fts-build-mail.c:104 data = 0x806041000 "" ---Type to continue, or q to quit--- buf = 0x0 i = 45 ret = 18164334 #16 0x000000080220fa54 in fts_build_mail_header (ctx=0x7fffffffdc98, block=0x7fffffffdc40) at fts-build-mail.c:179 hdr = 0x801ccf118 key = {uid = 39486, type = FTS_BACKEND_BUILD_KEY_HDR, part = 0x801c09c58, hdr_name = 0x801c4ba20 "Return-Path", body_content_type = 0x0, body_content_disposition = 0x0} ret = 32767 #17 0x000000080220f292 in fts_build_mail_real (update_ctx=0x801c21240, mail=0x801c63040) at fts-build-mail.c:548 ctx = {mail = 0x801c63040, update_ctx = 0x801c21240, content_type = 0x0, content_disposition = 0x0, body_parser = 0x0, word_buf = 0x0, pending_input = 0x0, cur_user_lang = 0x0} input = 0x801cc9030 parser = 0x801c2f040 decoder = 0x801ccf100 raw_block = {part = 0x801c09c58, hdr = 0x801c53900, data = 0x0, size = 0} block = {part = 0x801c09c58, hdr = 0x801ccf118, data = 0x7fffffffdc90 "0\220\314\001\b", size = 0} prev_part = 0x801c09c58 parts = 0x4ffffdca8 ---Type to continue, or q to quit--- skip_body = false body_part = false body_added = false binary_body = 255 error = 0x801cc88c0 "\200\212\314\001\b" ret = 1 #18 0x000000080220ee72 in fts_build_mail (update_ctx=0x801c21240, mail=0x801c63040) at fts-build-mail.c:594 _data_stack_cur_id = 5 ret = 8 #19 0x000000080221a626 in fts_mail_index (_mail=0x801c63040) at 
fts-storage.c:503 ft = 0x801c196e0 flist = 0x801c5dbd8 #20 0x0000000802217d40 in fts_mail_precache (_mail=0x801c63040) at fts-storage.c:522 _data_stack_cur_id = 4 mail = 0x801c63040 fmail = 0x801c634f0 ft = 0x801c196e0 #21 0x0000000800d3d992 in mail_precache (mail=0x801c63040) at mail.c:420 _data_stack_cur_id = 3 p = 0x801c63040 #22 0x0000000000433b59 in cmd_index_box_precache (box=0x8074edc40) ---Type to continue, or q to quit--- at doveadm-mail-index.c:75 status = {messages = 5342, recent = 0, unseen = 0, uidvalidity = 1362362144, uidnext = 43009, first_unseen_seq = 0, first_recent_uid = 43007, last_cached_seq = 0, highest_modseq = 0, highest_pvt_modseq = 0, keywords = 0x0, permanent_flags = 0, permanent_keywords = 0, allow_new_keywords = 0, nonpermanent_modseqs = 0, no_modseq_tracking = 0, have_guids = 1, have_save_guids = 0, have_only_guid128 = 0} trans = 0x801c3a800 search_args = 0x0 ctx = 0x801c1c040 mail = 0x801c63040 metadata = {guid = '\000' , virtual_size = 0, physical_size = 0, first_save_date = 0, cache_fields = 0x0, precache_fields = (MAIL_FETCH_STREAM_HEADER | MAIL_FETCH_STREAM_BODY | MAIL_FETCH_RECEIVED_DATE | MAIL_FETCH_SAVE_DATE | MAIL_FETCH_PHYSICAL_SIZE | MAIL_FETCH_VIRTUAL_SIZE | MAIL_FETCH_UIDL_BACKEND | MAIL_FETCH_GUID | MAIL_FETCH_POP3_ORDER), backend_ns_prefix = 0x0, backend_ns_type = (unknown: 0)} seq = 1 counter = 1819 max = 5342 ret = 0 #23 0x0000000000433907 in cmd_index_box (ctx=0x801c2ac40, info=0x801c5f0c0) at doveadm-mail-index.c:130 ---Type to continue, or q to quit--- box = 0x8074edc40 status = {messages = 4294958944, recent = 32767, unseen = 14577888, uidvalidity = 8, uidnext = 4294958944, first_unseen_seq = 16809983, first_recent_uid = 29749440, last_cached_seq = 8, highest_modseq = 34389277760, highest_pvt_modseq = 140737488346996, keywords = 0x7fffffffdf90, permanent_flags = 18334301, permanent_keywords = 0, allow_new_keywords = 0, nonpermanent_modseqs = 0, no_modseq_tracking = 1, have_guids = 0, have_save_guids = 0, 
have_only_guid128 = 0} ret = 0 #24 0x00000000004335ee in cmd_index_run (_ctx=0x801c2ac40, user=0x801c45040) at doveadm-mail-index.c:201 _data_stack_cur_id = 2 ctx = 0x801c2ac40 iter_flags = (MAILBOX_LIST_ITER_NO_AUTO_BOXES | MAILBOX_LIST_ITER_STAR_WITHIN_NS | MAILBOX_LIST_ITER_RETURN_NO_FLAGS) ns_mask = (MAIL_NAMESPACE_TYPE_PRIVATE | MAIL_NAMESPACE_TYPE_SHARED | MAIL_NAMESPACE_TYPE_PUBLIC) iter = 0x801c2bc40 info = 0x801c5f0c0 i = 32767 ret = 0 #25 0x000000000042b90a in doveadm_mail_next_user (ctx=0x801c2ac40, cctx=0x7fffffffe350, error_r=0x7fffffffe0f8) at doveadm-mail.c:404 ---Type to continue, or q to quit--- input = {module = 0x0, service = 0x484aa6 "doveadm", username = 0x7fffffffef58 "mrm", session_id = 0x0, session_id_prefix = 0x0, session_create_time = 0, local_ip = { family = 0, u = {ip6 = {__u6_addr = { __u6_addr8 = '\000' , __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 0}, __u6_addr32 = {0, 0, 0, 0}}}, ip4 = { s_addr = 0}}}, remote_ip = {family = 0, u = {ip6 = { __u6_addr = {__u6_addr8 = '\000' , __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 0}, __u6_addr32 = {0, 0, 0, 0}}}, ip4 = {s_addr = 0}}}, local_port = 0, remote_port = 0, userdb_fields = 0x0, flags_override_add = (unknown: 0), flags_override_remove = (unknown: 0), no_userdb_lookup = 0, debug = 0} error = 0x7fffffffe420 "\200\347\377\377\377\177" ip = 0x8011deee3 "" ret = 0 #26 0x000000000042b5bc in doveadm_mail_single_user (ctx=0x801c2ac40, cctx=0x7fffffffe350, error_r=0x7fffffffe0f8) at doveadm-mail.c:435 No locals. 
#27 0x000000000042d50a in doveadm_mail_cmd_exec (ctx=0x801c2ac40, cctx=0x7fffffffe350, wildcard_user=0x0) at doveadm-mail.c:596 ret = 32767 error = 0x801c2ae18 "P\256\302\001\b" ---Type to continue, or q to quit--- #28 0x000000000042d0a5 in doveadm_cmd_ver2_to_mail_cmd_wrapper ( cctx=0x7fffffffe350) at doveadm-mail.c:1061 mctx = 0x801c2ac40 wildcard_user = 0x0 fieldstr = 0x7fffffffe1e0 "\300\342\377\377\377\177" pargv = {arr = {buffer = 0x801c2ae98, element_size = 8}, v = 0x801c2ae98, v_modifiable = 0x801c2ae98} full_args = {arr = {buffer = 0x801c2ae18, element_size = 8}, v = 0x801c2ae18, v_modifiable = 0x801c2ae18} i = 7 mail_cmd = {alloc = 0x433210 , name = 0x48da32 "index", usage_args = 0x488030 "[-u |-A] [-S ] [-q] [-n ] "} args_pos = 0 #29 0x0000000000443cfe in doveadm_cmd_run_ver2 (argc=2, argv=0x7fffffffe438, cctx=0x7fffffffe350) at doveadm-cmd.c:523 param = 0x801c06ce0 pargv = {arr = {buffer = 0x801c06a38, element_size = 104}, v = 0x801c06a38, v_modifiable = 0x801c06a38} opts = {arr = {buffer = 0x801c06800, element_size = 32}, v = 0x801c06800, v_modifiable = 0x801c06800} pargc = 7 c = -1 ---Type to continue, or q to quit--- li = 32767 pool = 0x801c06768 optbuf = 0x801c06780 #30 0x00000000004437f4 in doveadm_cmd_try_run_ver2 ( cmd_name=0x7fffffffe7a3 "index", argc=2, argv=0x7fffffffe438, cctx=0x7fffffffe350) at doveadm-cmd.c:446 cmd = 0x801c4db98 #31 0x0000000000447f51 in main (argc=2, argv=0x7fffffffe438) at doveadm.c:379 service_flags = (MASTER_SERVICE_FLAG_STANDALONE | MASTER_SERVICE_FLAG_KEEP_CONFIG_OPEN) cctx = {cmd = 0x801c4db98, argc = 7, argv = 0x801c06a70, username = 0x7fffffffef58 "mrm", cli = true, tcp_server = false, local_ip = {family = 0, u = {ip6 = {__u6_addr = { __u6_addr8 = '\000' , __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 0}, __u6_addr32 = {0, 0, 0, 0}}}, ip4 = { s_addr = 0}}}, remote_ip = {family = 0, u = {ip6 = { __u6_addr = {__u6_addr8 = '\000' , __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 0}, __u6_addr32 = {0, 0, 0, 0}}}, ip4 = {s_addr = 0}}}, 
local_port = 0, remote_port = 0, conn = 0x0} cmd_name = 0x7fffffffe7a3 "index" i = 6 quick_init = false c = -1 (gdb) On Mon, Oct 24, 2016 at 4:34 AM, Aki Tuomi wrote: > Hi! > > We found some problems with those patches, and ended up doing slightly > different fix: > > https://github.com/dovecot/core/compare/3e41b3d%5E...cca98b.patch > > Aki > > On 24.10.2016 10:17, Aki Tuomi wrote: > > Hi! > > > > Can you try these two patches? > > > > Aki > > > > > > On 24.10.2016 08:48, Aki Tuomi wrote: > >> Ok so that timeval makes no sense. We'll look into it. > >> > >> Aki > >> > >>> On October 24, 2016 at 12:22 AM Larry Rosenman > wrote: > >>> > >>> > >>> doveadm(mrm): Debug: http-client: conn 127.0.0.1:9998 [1]: Got 200 > response > >>> for request [Req38: PUT http://localhost:9998/tika/] (took 296 ms + 8 > ms in > >>> queue) > >>> doveadm(mrm): Panic: kevent(): Invalid argument > >>> > >>> Program received signal SIGABRT, Aborted. > >>> 0x00000008014e6f7a in thr_kill () from /lib/libc.so.7 > >>> (gdb) fr 6 > >>> #6 0x00000008011a3e49 in io_loop_handler_run_internal > (ioloop=0x801c214e0) > >>> at ioloop-kqueue.c:131 > >>> 131 i_panic("kevent(): %m"); > >>> (gdb) p ts > >>> $1 = {tv_sec = 34389923520, tv_nsec = 140737488345872000} > >>> (gdb) p errno > >>> $2 = 22 > >>> (gdb) p ret > >>> $3 = -1 > >>> (gdb) p *ioloop > >>> $4 = {prev = 0x801c21080, cur_ctx = 0x0, io_files = 0x801c4f980, > >>> next_io_file = 0x0, timeouts = 0x801d17540, timeouts_new = {arr = { > >>> buffer = 0x801cd9700, element_size = 8}, v = 0x801cd9700, > >>> v_modifiable = 0x801cd9700}, handler_context = 0x801d17560, > >>> notify_handler_context = 0x0, max_fd_count = 0, > >>> time_moved_callback = 0x800d53bb0 , > >>> next_max_time = 1477257580, ioloop_wait_usecs = 27148, > io_pending_count = > >>> 1, > >>> running = 1, iolooping = 1} > >>> (gdb) p* ctx > >>> $5 = {kq = 21, deleted_count = 0, events = {arr = {buffer = > 0x801cd9740, > >>> element_size = 32}, v = 0x801cd9740, v_modifiable = 0x801cd9740}} > 
>>> (gdb) p *events > >>> $6 = {ident = 22, filter = -1, flags = 0, fflags = 0, data = 8, > >>> udata = 0x801c4f980} > >>> (gdb) > >>> > >>> thebighonker.lerctr.org ~ $ ps auxw|grep doveadm > >>> mrm 46965 0.0 0.2 108516 55264 0 I+ 4:19PM 0:02.28 > gdb > >>> /usr/local/bin/doveadm (gdb7111) > >>> mrm 46985 0.0 0.0 81236 15432 0 TX 4:19PM 0:03.51 > >>> /usr/local/bin/doveadm -D -vvvvvvv index * > >>> ler 47221 0.0 0.0 18856 2360 1 S+ 4:21PM 0:00.00 > grep > >>> doveadm > >>> thebighonker.lerctr.org ~ $ sudo lsof -p 46985 > >>> Password: > >>> COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE > NAME > >>> doveadm 46985 mrm cwd VDIR 22,2669215774 152 4 > >>> /home/mrm > >>> doveadm 46985 mrm rtd VDIR 19,766509061 28 4 / > >>> doveadm 46985 mrm txt VREG 119,3584295129 1714125 182952 > >>> /usr/local/bin/doveadm > >>> doveadm 46985 mrm txt VREG 19,766509061 132272 14382 > >>> /libexec/ld-elf.so.1 > >>> doveadm 46985 mrm txt VREG 22,2669215774 6920 10680 > >>> /home/mrm/mail/TRAVEL/.imap/hawaiian.airlines/dovecot.index.log > >>> doveadm 46985 mrm txt VREG 22,2669215774 7224 10716 > >>> /home/mrm/mail/TRAVEL/.imap/priceline/dovecot.index.log > >>> doveadm 46985 mrm txt VREG 22,2669215774 11080 10650 > >>> /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index.log > >>> doveadm 46985 mrm txt VREG 22,2669215774 2968 10679 > >>> /home/mrm/mail/TRAVEL/.imap/hawaiian.airlines/dovecot.index.cache > >>> doveadm 46985 mrm txt VREG 22,2669215774 3108 10715 > >>> /home/mrm/mail/TRAVEL/.imap/priceline/dovecot.index.cache > >>> doveadm 46985 mrm txt VREG 22,2669215774 6520 139902 > >>> /home/mrm/mail/.imap/Sent/dovecot.index.log > >>> doveadm 46985 mrm txt VREG 22,2669215774 9236 10648 > >>> /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index.cache > >>> doveadm 46985 mrm txt VREG 22,2669215774 174892 143343 > >>> /home/mrm/mail/.imap/Sent/dovecot.index.cache > >>> doveadm 46985 mrm txt VREG 22,2669215774 32656 143058 > >>> /home/mrm/mail/.imap/INBOX/dovecot.index.log > >>> doveadm 46985 mrm txt 
VREG 19,766509061 720 30627 > >>> /usr/share/i18n/csmapper/CP/CP1251%UCS.mps > >>> doveadm 46985 mrm txt VREG 19,766509061 720 30630 > >>> /usr/share/i18n/csmapper/CP/CP1252%UCS.mps > >>> doveadm 46985 mrm txt VREG 19,766509061 89576 6846 > >>> /lib/libz.so.6 > >>> doveadm 46985 mrm txt VREG 19,766509061 62008 5994 > >>> /lib/libcrypt.so.5 > >>> doveadm 46985 mrm txt VREG 119,3584295129 6725689 183611 > >>> /usr/local/lib/dovecot/libdovecot-storage.so.0.0.0 > >>> doveadm 46985 mrm txt VREG 119,3584295129 3162259 183615 > >>> /usr/local/lib/dovecot/libdovecot.so.0.0.0 > >>> doveadm 46985 mrm txt VREG 19,766509061 1649944 4782 > >>> /lib/libc.so.7 > >>> doveadm 46985 mrm txt VREG 119,3584295129 80142 183550 > >>> /usr/local/lib/dovecot/lib15_notify_plugin.so > >>> doveadm 46985 mrm txt VREG 119,3584295129 652615 183556 > >>> /usr/local/lib/dovecot/lib20_fts_plugin.so > >>> doveadm 46985 mrm txt VREG 119,3584295129 2730888 268825 > >>> /usr/local/lib/libicui18n.so.57.1 > >>> doveadm 46985 mrm txt VREG 119,3584295129 1753976 268849 > >>> /usr/local/lib/libicuuc.so.57.1 > >>> doveadm 46985 mrm txt VREG 119,3584295129 1704 268821 > >>> /usr/local/lib/libicudata.so.57.1 > >>> doveadm 46985 mrm txt VREG 19,766509061 102560 6745 > >>> /lib/libthr.so.3 > >>> doveadm 46985 mrm txt VREG 19,766509061 184712 5795 > >>> /lib/libm.so.5 > >>> doveadm 46985 mrm txt VREG 19,766509061 774000 5642 > >>> /usr/lib/libc++.so.1 > >>> doveadm 46985 mrm txt VREG 19,766509061 103304 5742 > >>> /lib/libcxxrt.so.1 > >>> doveadm 46985 mrm txt VREG 19,766509061 56344 7436 > >>> /lib/libgcc_s.so.1 > >>> doveadm 46985 mrm txt VREG 119,3584295129 349981 183782 > >>> /usr/local/lib/dovecot/lib21_fts_lucene_plugin.so > >>> doveadm 46985 mrm txt VREG 119,3584295129 1969384 113258 > >>> /usr/local/lib/libclucene-core.so.2.3.3.4 > >>> doveadm 46985 mrm txt VREG 119,3584295129 128992 113261 > >>> /usr/local/lib/libclucene-shared.so.2.3.3.4 > >>> doveadm 46985 mrm txt VREG 119,3584295129 143141 183578 > 
>>> /usr/local/lib/dovecot/lib90_stats_plugin.so > >>> doveadm 46985 mrm txt VREG 119,3584295129 37368 151926 > >>> /usr/local/lib/dovecot/doveadm/lib10_doveadm_sieve_plugin.so > >>> doveadm 46985 mrm txt VREG 119,3584295129 693808 151924 > >>> /usr/local/lib/dovecot-2.2-pigeonhole/libdovecot-sieve.so.0.0.0 > >>> doveadm 46985 mrm txt VREG 119,3584295129 146477 183599 > >>> /usr/local/lib/dovecot/libdovecot-lda.so.0.0.0 > >>> doveadm 46985 mrm txt VREG 119,3584295129 13823 183780 > >>> /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_lucene_plugin.so > >>> doveadm 46985 mrm txt VREG 119,3584295129 88081 183527 > >>> /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_plugin.so > >>> doveadm 46985 mrm txt VREG 19,766509061 8304 6330 > >>> /usr/lib/i18n/libiconv_std.so.4 > >>> doveadm 46985 mrm txt VREG 19,766509061 6744 6318 > >>> /usr/lib/i18n/libUTF8.so.4 > >>> doveadm 46985 mrm txt VREG 19,766509061 4384 6336 > >>> /usr/lib/i18n/libmapper_none.so.4 > >>> doveadm 46985 mrm txt VREG 19,766509061 7584 6345 > >>> /usr/lib/i18n/libmapper_std.so.4 > >>> doveadm 46985 mrm 0u VCHR 0,188 0t390889 188 > >>> /dev/pts/0 > >>> doveadm 46985 mrm 1u VCHR 0,188 0t390889 188 > >>> /dev/pts/0 > >>> doveadm 46985 mrm 2u VCHR 0,188 0t390889 188 > >>> /dev/pts/0 > >>> doveadm 46985 mrm 3u PIPE 0xfffff806fdf505d0 16384 > >>> ->0xfffff806fdf50730 > >>> doveadm 46985 mrm 4u PIPE 0xfffff806fdf50730 0 > >>> ->0xfffff806fdf505d0 > >>> doveadm 46985 mrm 5u KQUEUE 0xfffff806350b0600 > >>> count=0, state=0 > >>> doveadm 46985 mrm 6w FIFO 163,709754999 0t0 29707 > >>> /var/run/dovecot/stats-mail > >>> doveadm 46985 mrm 7u VREG 22,2669215774 11080 10650 > >>> /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index.log > >>> doveadm 46985 mrm 8u VREG 22,2669215774 536 137895 > >>> /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index > >>> doveadm 46985 mrm 9u VREG 22,2669215774 6920 10680 > >>> /home/mrm/mail/TRAVEL/.imap/hawaiian.airlines/dovecot.index.log > >>> doveadm 46985 mrm 10u VREG 22,2669215774 2968 
10679 > >>> /home/mrm/mail/TRAVEL/.imap/hawaiian.airlines/dovecot.index.cache > >>> doveadm 46985 mrm 11u VREG 22,2669215774 6520 139902 > >>> /home/mrm/mail/.imap/Sent/dovecot.index.log > >>> doveadm 46985 mrm 12u VREG 22,2669215774 9288 139905 > >>> /home/mrm/mail/.imap/Sent/dovecot.index > >>> doveadm 46985 mrm 13u VREG 22,2669215774 7224 10716 > >>> /home/mrm/mail/TRAVEL/.imap/priceline/dovecot.index.log > >>> doveadm 46985 mrm 14u VREG 22,2669215774 3108 10715 > >>> /home/mrm/mail/TRAVEL/.imap/priceline/dovecot.index.cache > >>> doveadm 46985 mrm 15u VREG 22,2669215774 9236 10648 > >>> /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index.cache > >>> doveadm 46985 mrm 16u VREG 22,2669215774 174892 143343 > >>> /home/mrm/mail/.imap/Sent/dovecot.index.cache > >>> doveadm 46985 mrm 17u VREG 22,2669215774 32656 143058 > >>> /home/mrm/mail/.imap/INBOX/dovecot.index.log > >>> doveadm 46985 mrm 18u VREG 22,2669215774 0 135848 > >>> /home/mrm (zroot/home/mrm) > >>> doveadm 46985 mrm 19u VREG 22,2669215774 35656 135336 > >>> /home/mrm/mail/.imap/INBOX/dovecot.index > >>> doveadm 46985 mrm 20u VREG 22,2669215774 0 135849 > >>> /home/mrm (zroot/home/mrm) > >>> doveadm 46985 mrm 21u KQUEUE 0xfffff80163b1ba00 > >>> count=1, state=0 > >>> doveadm 46985 mrm 22u IPv4 0xfffff805ea69a000 0t0 TCP > >>> localhost:44730->localhost:9998 (ESTABLISHED) > >>> doveadm 46985 mrm 25uR VREG 22,2669215774 32997612 4151 > >>> /home/mrm/mail/Sent > >>> thebighonker.lerctr.org > >>> > >>> > >>> > >>> On Sun, Oct 23, 2016 at 12:20 PM, Aki Tuomi > wrote: > >>> > >>>> According to man page, the only way it can return EINVAL (22) is > either > >>>> bad filter, or bad timeout. I can't see how the filter would be bad, > so I'm > >>>> guessing ts must be bad. Unfortunately I forgot to ask for it, so I am > >>>> going to have to ask you run it again and run > >>>> > >>>> p ts > >>>> > >>>> if that's valid, then the only thing that can be bad if the file > >>>> descriptor 23. 
> >>>> > >>>> Aki > >>>> > >>>>> On October 23, 2016 at 7:42 PM Larry Rosenman > >>>> wrote: > >>>>> ok, gdb7 works: > >>>>> (gdb) fr 6 > >>>>> #6 0x00000008011a3e49 in io_loop_handler_run_internal > >>>> (ioloop=0x801c214e0) > >>>>> at ioloop-kqueue.c:131 > >>>>> 131 i_panic("kevent(): %m"); > >>>>> (gdb) p errno > >>>>> $1 = 22 > >>>>> (gdb) p ret > >>>>> $2 = -1 > >>>>> (gdb) p *ioloop > >>>>> $3 = {prev = 0x801c21080, cur_ctx = 0x0, io_files = 0x801c4f980, > >>>>> next_io_file = 0x0, timeouts = 0x801c19e60, timeouts_new = {arr = > >>>> {buffer = > >>>>> 0x801c5ac80, element_size = 8}, v = 0x801c5ac80, > >>>>> v_modifiable = 0x801c5ac80}, handler_context = 0x801c19e80, > >>>>> notify_handler_context = 0x0, max_fd_count = 0, time_moved_callback = > >>>>> 0x800d53bb0 , > >>>>> next_max_time = 1477240784, ioloop_wait_usecs = 29863, > >>>> io_pending_count = > >>>>> 1, running = 1, iolooping = 1} > >>>>> (gdb) p *ctx > >>>>> $4 = {kq = 22, deleted_count = 0, events = {arr = {buffer = > 0x801c5acc0, > >>>>> element_size = 32}, v = 0x801c5acc0, v_modifiable = 0x801c5acc0}} > >>>>> (gdb) p *events > >>>>> $5 = {ident = 23, filter = -1, flags = 0, fflags = 0, data = 8, > udata = > >>>>> 0x801c4f980} > >>>>> (gdb) > >>>>> > >>>>> > >>>>> > >>>>> On Sun, Oct 23, 2016 at 11:27 AM, Larry Rosenman > > >>>> wrote: > >>>>>> grrr. > >>>>>> > >>>>>> /home/mrm $ gdb /usr/local/bin/doveadm > >>>>>> GNU gdb 6.1.1 [FreeBSD] > >>>>>> Copyright 2004 Free Software Foundation, Inc. > >>>>>> GDB is free software, covered by the GNU General Public License, and > >>>> you > >>>>>> are > >>>>>> welcome to change it and/or distribute copies of it under certain > >>>>>> conditions. > >>>>>> Type "show copying" to see the conditions. > >>>>>> There is absolutely no warranty for GDB. Type "show warranty" for > >>>> details. > >>>>>> This GDB was configured as "amd64-marcel-freebsd"... 
> >>>>>> (gdb) run -D -vvvvvv index * > >>>>>> Starting program: /usr/local/bin/doveadm -D -vvvvvv index * > >>>>>> > >>>>>> Program received signal SIGTRAP, Trace/breakpoint trap. > >>>>>> Cannot remove breakpoints because program is no longer writable. > >>>>>> It might be running in another process. > >>>>>> Further execution is probably impossible. > >>>>>> 0x0000000800624490 in ?? () > >>>>>> (gdb) > >>>>>> > >>>>>> Ideas? > >>>>>> > >>>>>> > >>>>>> On Sun, Oct 23, 2016 at 11:14 AM, Aki Tuomi > >>>> wrote: > >>>>>>> Hi, > >>>>>>> > >>>>>>> can you run doveadm in gdb, wait for it to crash, and then go to > >>>> frame 6 > >>>>>>> ( io_loop_handler_run_internal) and run > >>>>>>> > >>>>>>> p errno > >>>>>>> p ret > >>>>>>> p *ioloop > >>>>>>> p *ctx > >>>>>>> p *events > >>>>>>> > >>>>>>> Sorry but the crash doesn't make enough sense yet to me, we need to > >>>>>>> determine what the invalid parameter is. > >>>>>>> > >>>>>>>> Larry Rosenman http://www.lerctr.org/~ler > >>>>>>>> Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > >>>>>>>> US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > >>>>>> > >>>>>> -- > >>>>>> Larry Rosenman http://www.lerctr.org/~ler > >>>>>> Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > >>>>>> US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > >>>>>> > >>>>> > >>>>> -- > >>>>> Larry Rosenman http://www.lerctr.org/~ler > >>>>> Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > >>>>> US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > >>> > >>> -- > >>> Larry Rosenman http://www.lerctr.org/~ler > >>> Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com > >>> US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 From simeon.ott at onnet.ch Tue Oct 25 05:14:58 2016 From: simeon.ott at onnet.ch (Simeon Ott) Date: Tue, 25 Oct 2016 07:14:58 +0200 
Subject: Hierarchy separator and LAYOUT=FS change In-Reply-To: References: Message-ID: <9A0E763F-B64C-4CDF-8999-95886A73AF21@onnet.ch> Anyone? What are the steps to take to migrate from a dot to a slash separator with LAYOUT=fs? > On 11.10.2016, at 00:06, Simeon Ott wrote: > > Hello, > > I stumbled across a 5-year-old post on the dovecot list about changing the dovecot hierarchy separator to enable shared mailboxes (http://www.dovecot.org/list/dovecot/2011-January/056201.html ). > At the moment I'm stuck in a pretty similar situation. I migrated from Courier to Dovecot 2 years ago and preserved the dot separator. > Because I'm using the e-mail address as the username, the dots for folder separation and the dots in the email addresses get mixed up. > > I have a pretty small mail server with about 150 accounts. The Maildir file structure of a typical mail account looks like this: > drwx------ 2 vmail vmail 4096 Oct 10 20:02 cur > drwx------ 5 vmail vmail 4096 Oct 3 07:48 .Daten.Administration > drwx------ 5 vmail vmail 4096 Oct 3 09:51 .Daten.Anfragen, Werbung > drwx------ 5 vmail vmail 4096 Oct 3 08:02 .Daten > drwx------ 5 vmail vmail 4096 Oct 6 09:57 .Daten.Intern > drwx------ 5 vmail vmail 4096 Oct 3 08:03 .Daten.Intern.Fahrzeuge > drwx------ 5 vmail vmail 4096 Oct 6 12:57 .Daten.Intern.Infos, FileMaker etc > drwx------ 5 vmail vmail 4096 Oct 3 09:19 .Daten.Intern.Sonstiges > drwx------ 5 vmail vmail 4096 Oct 3 07:47 .Daten.Kunden > drwx------ 5 vmail vmail 4096 Sep 16 08:29 .Daten.Lieferanten > drwx------ 5 vmail vmail 4096 Oct 3 08:28 .Daten.Marketing > drwx------ 2 vmail vmail 4096 Oct 10 20:02 new > drwx------ 5 vmail vmail 4096 Oct 10 18:00 .Sent > drwx------ 5 vmail vmail 4096 Oct 10 18:00 .Spam > drwx------ 5 vmail vmail 4096 Oct 10 18:00 .Trash > > When changing the separator in my inbox namespace, the documentation mentions that the file structure doesn't change. This means I will get the same problems when using shared mailboxes with email addresses as usernames. 
I definitely need to change to maildir:~/Maildir:LAYOUT=fs > When changing to LAYOUT=fs I need to convert all the mailboxes manually, is that correct? Is dsync the way to go? > Or is it better to leave the separator and change to a different username schema (without dots in it) and advise the clients to change their credentials? > > I know there are people out there who successfully converted this - but I can't find much information about this subject. > > doveconf -n: > # 2.1.7: /etc/dovecot/dovecot.conf > # OS: Linux 3.2.0-4-amd64 x86_64 Debian 7.11 > auth_mechanisms = plain login > auth_verbose = yes > lda_mailbox_autocreate = yes > lda_mailbox_autosubscribe = yes > listen = * > login_log_format_elements = user=<%u> method=%m rip=%r lip=%l mpid=%e %c > mail_gid = 5000 > mail_location = maildir:~/Maildir > mail_plugins = zlib quota acl > mail_uid = 5000 > managesieve_notify_capability = mailto > managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave > namespace inbox { > inbox = yes > location = > mailbox Drafts { > auto = subscribe > special_use = \Drafts > } > mailbox Sent { > auto = subscribe > special_use = \Sent > } > mailbox "Sent Messages" { > special_use = \Sent > } > mailbox Spam { > auto = subscribe > special_use = \Junk > } > mailbox Trash { > auto = subscribe > special_use = \Trash > } > prefix = INBOX. > separator = . 
> } > passdb { > args = /etc/dovecot/dovecot-ldap.conf > driver = ldap > } > plugin { > acl = vfile > acl_shared_dict = file:/var/spool/postfix/virtual/shared-mailboxes > quota = maildir:User quota > quota_exceeded_message = 4.2.2 Mailbox full > quota_rule = *:storage=1G > quota_rule2 = INBOX.Trash:storage=+100M > quota_rule3 = INBOX.Spam:ignore > quota_warning = storage=95%% quota-warning 95 %u > sieve = ~/.dovecot.sieve > sieve_before = /var/lib/dovecot/sieve/default.sieve > sieve_dir = ~/sieve > sieve_max_actions = 32 > sieve_max_redirects = 4 > sieve_max_script_size = 1M > sieve_quota_max_scripts = 0 > sieve_quota_max_storage = 0 > } > protocols = " imap lmtp sieve pop3" > service auth { > group = dovecot > unix_listener /var/spool/postfix/private/auth { > group = postfix > mode = 0660 > user = postfix > } > unix_listener auth-master { > group = vmail > mode = 0660 > user = vmail > } > user = dovecot > } > service lmtp { > unix_listener lmtp { > mode = 0666 > } > } > service managesieve-login { > inet_listener sieve { > port = 4190 > } > inet_listener sieve_deprecated { > port = 2000 > } > process_min_avail = 1 > service_count = 1 > vsz_limit = 64 M > } > ssl_cert = -chain.crt > ssl_cipher_list = ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA:AES256-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-RSA-CAMELLIA256-SHA:CAMELLIA128-SHA:CAMELLIA256-SHA:ECDHE-RSA-DES-CBC3-SHA:DES-CBC3-SHA:!SSLv2 > ssl_key = userdb { > args = /etc/dovecot/dovecot-ldap.conf > driver = ldap > } > protocol lmtp { > mail_plugins = zlib quota acl sieve > } > protocol lda { > auth_socket_path = /var/run/dovecot/auth-master > deliver_log_format = msgid=%m: %$ > mail_plugins = zlib quota acl sieve > postmaster_address = postmaster at 
onnet.ch > } > protocol imap { > mail_plugins = zlib quota acl imap_quota imap_acl > } > protocol sieve { > info_log_path = /var/log/sieve.log > log_path = /var/log/sieve.log > mail_max_userip_connections = 10 > managesieve_implementation_string = Dovecot Pigeonhole > managesieve_logout_format = bytes=%i/%o > managesieve_max_compile_errors = 5 > managesieve_max_line_length = 65536 > } > … parts of the ldap config > user_attrs = homeDirectory=home=/var/spool/postfix/virtual/%$,uidNumber=uid,gidNumber=gid,quota=quota_rule=*:bytes=%$ > user_filter = (&(objectClass=CourierMailAccount)(mail=%u)) > > … my shared configuration is currently commented out. > # namespace { > # type = shared > # separator = . > # prefix = shared.%%u. > # location = maildir:%h/Maildir:INDEX=~/Maildir/shared/%%u > # subscriptions = yes > # list = children > #} > > thanks in advance for any help > > Sincerely, > Simeon - onnet.ch From mrobti at insiberia.net Tue Oct 25 06:43:35 2016 From: mrobti at insiberia.net (mrobti at insiberia.net) Date: Tue, 25 Oct 2016 06:43:35 +0000 Subject: Break up massive INBOX? Message-ID: I have inherited a 10+ GB mailbox which dsync converted to maildir very nicely. But 15GB in tens of thousands of messages is too much in one INBOX when used in webmail. I guess it's best to divide the messages into subfolders by date. Is there a server-side way to safely move messages into new subfolders so that Dovecot indexes and UIDs won't break? If I write a shell script to move maildir files based on the date of the files, I think Dovecot would not like this? Also, is there any way to automatically create delivery or archive folders by date, e.g. on the 1st of each month? And deliver mails to the subfolder for the current month? I know I can create Sieve rules for this by hand, but I would prefer a plugin or something that does this automatically. 
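For the automatic per-month delivery asked about above, the Sieve date and variables extensions can do this without a plugin (both appear, together with mailbox, in the managesieve_sieve_capability lists shown elsewhere in this archive). A minimal sketch, assuming "." is the hierarchy separator and an "Archive" parent folder name is acceptable:

```sieve
require ["fileinto", "mailbox", "date", "variables"];

# Capture the current year and month at delivery time.
if currentdate :matches "year" "*" { set "year" "${1}"; }
if currentdate :matches "month" "*" { set "month" "${1}"; }

# File into e.g. "Archive.2016-10", creating the folder if needed
# (":create" comes from the mailbox extension).
fileinto :create "Archive.${year}-${month}";
```

Placed in a user script or in sieve_before, this files each month's new mail into its own folder at delivery time, so no monthly cron job is needed for incoming mail.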
From aki.tuomi at dovecot.fi Tue Oct 25 06:51:53 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Tue, 25 Oct 2016 09:51:53 +0300 Subject: Break up massive INBOX? In-Reply-To: References: Message-ID: <788ded37-ab3c-be54-398b-847e62967c29@dovecot.fi> On 25.10.2016 09:43, mrobti at insiberia.net wrote: > I have inherited a 10+ GB mailbox which dsync converted to maildir > very nicely. But 15GB in tens of thousands of messages is too much in > one INBOX when used in webmail. > > I guess it's best to divide the messages into subfolders for date. > > Is there a server-side way to safely move messages into new subfolders > so that Dovecot indexes and UIDs won't break? If I write a shell > script to move maildir files based on date of the files, I think > Dovecot would not like this? > > Also, is there any way to automatically create delivery or archive > folders by date eg., on the 1st of each month? And deliver mails to > the subfolder for the current month? I know I can create Sieve rules > for such by hand, but prefer if there is a plugin or something to do > this automatically. Hi! doveadm move is probably your friend here, see man doveadm-search-query for details on what you can search for. 
Aki From matthew.broadhead at nbmlaw.co.uk Tue Oct 25 07:11:55 2016 From: matthew.broadhead at nbmlaw.co.uk (Matthew Broadhead) Date: Tue, 25 Oct 2016 09:11:55 +0200 Subject: sieve sending vacation message from vmail@ns1.domain.tld In-Reply-To: References: <71b362e8-3a69-076d-6376-2f3bbd39d0eb@nbmlaw.co.uk> <94941225-09d0-1440-1733-3884cc6dcd67@rename-it.nl> <7cdadba3-fd03-7d8c-1235-b428018a081c@nbmlaw.co.uk> <55712b3a-4812-f0a6-c9f9-59efcdac79f7@rename-it.nl> <8260ce16-bc94-e3a9-13d1-f1204e6ae525@rename-it.nl> <344d3d36-b905-5a90-e0ea-17d556076838@nbmlaw.co.uk> <9b47cb74-0aa7-4851-11f0-5a367341a63b@nbmlaw.co.uk> <4aa89a3c-937f-a1e6-3871-1df196ac7af2@rename-it.nl> Message-ID: <0c0eaf7f-e65c-e31d-443f-21f3e3ae4fd2@nbmlaw.co.uk> are there any instructions or tests i can make to check the sieve configuration? or does the magic all happen internally and there are no settings to change? On 21/10/2016 10:22, Matthew Broadhead wrote: > the server is using CentOS 7 and that is the package that comes > through yum. everything is up to date. i am hesitant to install a > new package manually as that could cause other compatibility issues? > is there another way to test the configuration on the server? > > On 21/10/2016 01:07, Stephan Bosch wrote: >> Op 10/20/2016 om 7:38 PM schreef Matthew Broadhead: >>> do i need to provide more information? >>> >> It still doesn't make sense to me. I do notice that the version you're >> using is ancient (dated 26-09-2013), which may well the problem. >> >> Do have the ability to upgrade? >> >> Regards, >> >> Stephan. 
>> >>> On 19/10/2016 14:49, Matthew Broadhead wrote: >>>> /var/log/maillog showed this >>>> Oct 19 13:25:41 ns1 postfix/smtpd[1298]: 7599A2C19C6: >>>> client=unknown[127.0.0.1] >>>> Oct 19 13:25:41 ns1 postfix/cleanup[1085]: 7599A2C19C6: >>>> message-id= >>>> Oct 19 13:25:41 ns1 postfix/qmgr[1059]: 7599A2C19C6: >>>> from=, size=3190, nrcpt=1 (queue >>>> active) >>>> Oct 19 13:25:41 ns1 amavis[32367]: (32367-17) Passed CLEAN >>>> {RelayedInternal}, ORIGINATING LOCAL [80.30.255.180]:54566 >>>> [80.30.255.180] -> >>>> , Queue-ID: BFFA62C1965, Message-ID: >>>> , mail_id: >>>> TlJQ9xQhWjQk, Hits: -2.9, size: 2235, queued_as: 7599A2C19C6, >>>> dkim_new=foo:nbmlaw.co.uk, 531 ms >>>> Oct 19 13:25:41 ns1 postfix/smtp[1135]: BFFA62C1965: >>>> to=, relay=127.0.0.1[127.0.0.1]:10026, >>>> delay=0.76, delays=0.22/0/0/0.53, dsn=2.0.0, status=sent (250 2.0.0 >>>> from MTA(smtp:[127.0.0.1]:10027): 250 2.0.0 Ok: queued as 7599A2C19C6) >>>> Oct 19 13:25:41 ns1 postfix/qmgr[1059]: BFFA62C1965: removed >>>> Oct 19 13:25:41 ns1 postfix/smtpd[1114]: connect from >>>> ns1.nbmlaw.co.uk[217.174.253.19] >>>> Oct 19 13:25:41 ns1 postfix/smtpd[1114]: NOQUEUE: filter: RCPT from >>>> ns1.nbmlaw.co.uk[217.174.253.19]: : Sender >>>> address triggers FILTER smtp-amavis:[127.0.0.1]:10026; >>>> from= to= >>>> proto=SMTP helo= >>>> Oct 19 13:25:41 ns1 postfix/smtpd[1114]: 8A03F2C1965: >>>> client=ns1.nbmlaw.co.uk[217.174.253.19] >>>> Oct 19 13:25:41 ns1 postfix/cleanup[1085]: 8A03F2C1965: >>>> message-id= >>>> Oct 19 13:25:41 ns1 opendmarc[2430]: implicit authentication service: >>>> ns1.nbmlaw.co.uk >>>> Oct 19 13:25:41 ns1 opendmarc[2430]: 8A03F2C1965: ns1.nbmlaw.co.uk >>>> fail >>>> Oct 19 13:25:41 ns1 postfix/qmgr[1059]: 8A03F2C1965: >>>> from=, size=1077, nrcpt=1 (queue active) >>>> Oct 19 13:25:41 ns1 postfix/smtpd[1114]: disconnect from >>>> ns1.nbmlaw.co.uk[217.174.253.19] >>>> Oct 19 13:25:41 ns1 sSMTP[1895]: Sent mail for vmail at ns1.nbmlaw.co.uk >>>> (221 2.0.0 Bye) uid=996 username=vmail 
outbytes=971 >>>> Oct 19 13:25:41 ns1 postfix/smtpd[1898]: connect from >>>> unknown[127.0.0.1] >>>> Oct 19 13:25:41 ns1 postfix/pipe[1162]: 7599A2C19C6: >>>> to=, relay=dovecot, delay=0.46, >>>> delays=0/0/0/0.45, dsn=2.0.0, status=sent (delivered via dovecot >>>> service) >>>> Oct 19 13:25:41 ns1 postfix/qmgr[1059]: 7599A2C19C6: removed >>>> Oct 19 13:25:41 ns1 postfix/smtpd[1898]: E53472C19C6: >>>> client=unknown[127.0.0.1] >>>> Oct 19 13:25:41 ns1 postfix/cleanup[1085]: E53472C19C6: >>>> message-id= >>>> Oct 19 13:25:41 ns1 postfix/qmgr[1059]: E53472C19C6: >>>> from=, size=1619, nrcpt=1 (queue active) >>>> Oct 19 13:25:41 ns1 amavis[1885]: (01885-01) Passed CLEAN >>>> {RelayedInternal}, ORIGINATING LOCAL [217.174.253.19]:40960 >>>> [217.174.253.19] -> >>>> , Queue-ID: 8A03F2C1965, Message-ID: >>>> , mail_id: >>>> mOMO97yjVqjM, Hits: -2.211, size: 1301, queued_as: E53472C19C6, 296 ms >>>> Oct 19 13:25:41 ns1 postfix/smtp[1217]: 8A03F2C1965: >>>> to=, >>>> relay=127.0.0.1[127.0.0.1]:10026, delay=0.38, delays=0.08/0/0/0.29, >>>> dsn=2.0.0, status=sent (250 2.0.0 from MTA(smtp:[127.0.0.1]:10027): >>>> 250 2.0.0 Ok: queued as E53472C19C6) >>>> Oct 19 13:25:41 ns1 postfix/qmgr[1059]: 8A03F2C1965: removed >>>> Oct 19 13:25:42 ns1 postfix/pipe[1303]: E53472C19C6: >>>> to=, relay=dovecot, delay=0.14, >>>> delays=0/0/0/0.14, dsn=2.0.0, status=sent (delivered via dovecot >>>> service) >>>> Oct 19 13:25:42 ns1 postfix/qmgr[1059]: E53472C19C6: removed >>>> >>>> On 19/10/2016 13:54, Stephan Bosch wrote: >>>>> >>>>> Op 19-10-2016 om 13:47 schreef Matthew Broadhead: >>>>>> i am not 100% sure how to give you the information you require. >>>>>> >>>>>> my current setup in /etc/postfix/master.cf is >>>>>> flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d >>>>>> ${recipient} >>>>>> so recipient would presumably be user at domain.tld? or do you want >>>>>> the real email address of one of our users? is there some way i >>>>>> can output this information directly e.g. 
in logs? >>>>> I am no Postfix expert. I just need to know which values are being >>>>> passed to dovecot-lda with what options. I'd assume Postfix allows >>>>> logging the command line or at least the values of these variables. >>>>> >>>>>> the incoming email message could be anything? again i can run an >>>>>> example directly if you can advise the best way to do this >>>>> As long as the problem occurs with this message. >>>>> >>>>> BTW, it would also be helpful to have the Dovecot logs from this >>>>> delivery, with mail_debug configured to "yes". >>>>> >>>>> Regards, >>>>> >>>>> Stephan. >>>>> >>>>>> On 19/10/2016 12:54, Stephan Bosch wrote: >>>>>>> Also, please provide an example scenario; i.e., for one >>>>>>> problematic delivery provide: >>>>>>> >>>>>>> - The values of the variables substituted in the dovecot-lda >>>>>>> command line; i.e., provide that command line. >>>>>>> - The incoming e-mail message. >>>>>>> >>>>>>> Regards, >>>>>>> >>>>>>> Stephan. >>>>>>> >>>>>>> Op 19-10-2016 om 12:43 schreef Matthew Broadhead: >>>>>>>> dovecot is configured by sentora control panel to a certain >>>>>>>> extent. 
if you want those configs i can send them as well >>>>>>>> >>>>>>>> dovecot -n >>>>>>>> >>>>>>>> debug_log_path = /var/log/dovecot-debug.log >>>>>>>> dict { >>>>>>>> quotadict = >>>>>>>> mysql:/etc/sentora/configs/dovecot2/dovecot-dict-quota.conf >>>>>>>> } >>>>>>>> disable_plaintext_auth = no >>>>>>>> first_valid_gid = 12 >>>>>>>> first_valid_uid = 996 >>>>>>>> info_log_path = /var/log/dovecot-info.log >>>>>>>> lda_mailbox_autocreate = yes >>>>>>>> lda_mailbox_autosubscribe = yes >>>>>>>> listen = * >>>>>>>> lmtp_save_to_detail_mailbox = yes >>>>>>>> log_path = /var/log/dovecot.log >>>>>>>> log_timestamp = %Y-%m-%d %H:%M:%S >>>>>>>> mail_fsync = never >>>>>>>> mail_location = maildir:/var/sentora/vmail/%d/%n >>>>>>>> managesieve_notify_capability = mailto >>>>>>>> managesieve_sieve_capability = fileinto reject envelope >>>>>>>> encoded-character vacation subaddress comparator-i;ascii-numeric >>>>>>>> relational regex imap4flags copy include variables body enotify >>>>>>>> environment mailbox date ihave >>>>>>>> passdb { >>>>>>>> args = /etc/sentora/configs/dovecot2/dovecot-mysql.conf >>>>>>>> driver = sql >>>>>>>> } >>>>>>>> plugin { >>>>>>>> acl = vfile:/etc/dovecot/acls >>>>>>>> quota = maildir:User quota >>>>>>>> sieve = ~/dovecot.sieve >>>>>>>> sieve_dir = ~/sieve >>>>>>>> sieve_global_dir = /var/sentora/sieve/ >>>>>>>> sieve_global_path = /var/sentora/sieve/globalfilter.sieve >>>>>>>> sieve_max_script_size = 1M >>>>>>>> sieve_vacation_send_from_recipient = yes >>>>>>>> trash = /etc/sentora/configs/dovecot2/dovecot-trash.conf >>>>>>>> } >>>>>>>> protocols = imap pop3 lmtp sieve >>>>>>>> service auth { >>>>>>>> unix_listener /var/spool/postfix/private/auth { >>>>>>>> group = postfix >>>>>>>> mode = 0666 >>>>>>>> user = postfix >>>>>>>> } >>>>>>>> unix_listener auth-userdb { >>>>>>>> group = mail >>>>>>>> mode = 0666 >>>>>>>> user = vmail >>>>>>>> } >>>>>>>> } >>>>>>>> service dict { >>>>>>>> unix_listener dict { >>>>>>>> group = mail >>>>>>>> mode = 0666 
>>>>>>>> user = vmail >>>>>>>> } >>>>>>>> } >>>>>>>> service imap-login { >>>>>>>> inet_listener imap { >>>>>>>> port = 143 >>>>>>>> } >>>>>>>> process_limit = 500 >>>>>>>> process_min_avail = 2 >>>>>>>> } >>>>>>>> service imap { >>>>>>>> vsz_limit = 256 M >>>>>>>> } >>>>>>>> service managesieve-login { >>>>>>>> inet_listener sieve { >>>>>>>> port = 4190 >>>>>>>> } >>>>>>>> process_min_avail = 0 >>>>>>>> service_count = 1 >>>>>>>> vsz_limit = 64 M >>>>>>>> } >>>>>>>> service pop3-login { >>>>>>>> inet_listener pop3 { >>>>>>>> port = 110 >>>>>>>> } >>>>>>>> } >>>>>>>> ssl_cert = >>>>>>> ssl_key = >>>>>>> ssl_protocols = !SSLv2 !SSLv3 >>>>>>>> userdb { >>>>>>>> driver = prefetch >>>>>>>> } >>>>>>>> userdb { >>>>>>>> args = /etc/sentora/configs/dovecot2/dovecot-mysql.conf >>>>>>>> driver = sql >>>>>>>> } >>>>>>>> protocol lda { >>>>>>>> mail_fsync = optimized >>>>>>>> mail_plugins = quota sieve >>>>>>>> postmaster_address = postmaster at ns1.nbmlaw.co.uk >>>>>>>> } >>>>>>>> protocol imap { >>>>>>>> imap_client_workarounds = delay-newmail >>>>>>>> mail_fsync = optimized >>>>>>>> mail_max_userip_connections = 60 >>>>>>>> mail_plugins = quota imap_quota trash >>>>>>>> } >>>>>>>> protocol lmtp { >>>>>>>> mail_plugins = quota sieve >>>>>>>> } >>>>>>>> protocol pop3 { >>>>>>>> mail_plugins = quota >>>>>>>> pop3_client_workarounds = outlook-no-nuls oe-ns-eoh >>>>>>>> pop3_uidl_format = %08Xu%08Xv >>>>>>>> } >>>>>>>> protocol sieve { >>>>>>>> managesieve_implementation_string = Dovecot Pigeonhole >>>>>>>> managesieve_max_compile_errors = 5 >>>>>>>> managesieve_max_line_length = 65536 >>>>>>>> } >>>>>>>> >>>>>>>> managesieve.sieve >>>>>>>> >>>>>>>> require ["fileinto","vacation"]; >>>>>>>> # rule:[vacation] >>>>>>>> if true >>>>>>>> { >>>>>>>> vacation :days 1 :subject "Vacation subject" text: >>>>>>>> i am currently out of the office >>>>>>>> >>>>>>>> trying some line breaks >>>>>>>> >>>>>>>> ...zzz >>>>>>>> . 
>>>>>>>> ; >>>>>>>> } >>>>>>>> >>>>>>>> On 19/10/2016 12:29, Stephan Bosch wrote: >>>>>>>>> Could you send your configuration (output from `dovecot -n`)? >>>>>>>>> >>>>>>>>> Also, please provide an example scenario; i.e., for one >>>>>>>>> problematic delivery provide: >>>>>>>>> >>>>>>>>> - The values of the variables substituted below. >>>>>>>>> >>>>>>>>> - The incoming e-mail message. >>>>>>>>> >>>>>>>>> - The Sieve script (or at least that vacation command). >>>>>>>>> >>>>>>>>> Regards, >>>>>>>>> >>>>>>>>> >>>>>>>>> Stephan. >>>>>>>>> >>>>>>>>> Op 19-10-2016 om 11:42 schreef Matthew Broadhead: >>>>>>>>>> hi, does anyone have any ideas about this issue? i have not >>>>>>>>>> had any response yet >>>>>>>>>> >>>>>>>>>> i tried changing /etc/postfix/master.cf line: >>>>>>>>>> dovecot unix - n n - - pipe >>>>>>>>>> flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d >>>>>>>>>> ${recipient} >>>>>>>>>> >>>>>>>>>> to >>>>>>>>>> flags=DRhu user=vmail:mail >>>>>>>>>> argv=/usr/libexec/dovecot/dovecot-lda -f ${sender} -d >>>>>>>>>> ${user}@${nexthop} -a ${original_recipient} >>>>>>>>>> >>>>>>>>>> and >>>>>>>>>> -d ${user}@${domain} -a {recipient} -f ${sender} -m ${extension} >>>>>>>>>> >>>>>>>>>> but it didn't work >>>>>>>>>> >>>>>>>>>> On 12/10/2016 13:57, Matthew Broadhead wrote: >>>>>>>>>>> I have a server running >>>>>>>>>>> centos-release-7-2.1511.el7.centos.2.10.x86_64 with dovecot >>>>>>>>>>> version 2.2.10. I am also using roundcube for webmail. when a >>>>>>>>>>> vacation filter (reply with message) is created in roundcube >>>>>>>>>>> it adds a rule to managesieve.sieve in the user's mailbox. >>>>>>>>>>> everything works fine except the reply comes from >>>>>>>>>>> vmail at ns1.domain.tld instead of user at domain.tld. >>>>>>>>>>> ns1.domain.tld is the fully qualified name of the server. >>>>>>>>>>> >>>>>>>>>>> it used to work fine on my old CentOS 6 server so I am not >>>>>>>>>>> sure what has changed. 
Can anyone point me in the direction >>>>>>>>>>> of where I can configure this behaviour? From mrobti at insiberia.net Tue Oct 25 07:21:20 2016 From: mrobti at insiberia.net (mrobti at insiberia.net) Date: Tue, 25 Oct 2016 07:21:20 +0000 Subject: Break up massive INBOX? In-Reply-To: <788ded37-ab3c-be54-398b-847e62967c29@dovecot.fi> References: <788ded37-ab3c-be54-398b-847e62967c29@dovecot.fi> Message-ID: <6ef68b60411c3adb56bb01d5d2f6bca4@insiberia.net> On 2016-10-25 06:51, Aki Tuomi wrote: > On 25.10.2016 09:43, mrobti at insiberia.net wrote: >> I have inherited a 10+ GB mailbox which dsync converted to maildir >> very nicely. But 15GB in tens of thousands of messages is too much in >> one INBOX when used in webmail. >> >> I guess it's best to divide the messages into subfolders for date. >> >> Is there a server-side way to safely move messages into new subfolders >> so that Dovecot indexes and UIDs won't break? If I write a shell >> script to move maildir files based on date of the files, I think >> Dovecot would not like this? >> >> Also, is there any way to automatically create delivery or archive >> folders by date eg., on the 1st of each month? And deliver mails to >> the subfolder for the current month? I know I can create Sieve rules >> for such by hand, but prefer if there is a plugin or something to do >> this automatically. > > Hi! > > doveadm move is probably your friend here, see man doveadm-search-query > for details on what you can search for. Thank you! I think I can script a few years of this: doveadm mailbox create -u biguser -s 'May 2012' doveadm move -u biguser 'May 2012' mailbox INBOX BEFORE 2012-06-01 SINCE 2012-05-01 Is there any plugin that can break up mailbox delivery automatically? I guess I can run the commands above on first of each month if that's the best way. 
From matthew.broadhead at nbmlaw.co.uk Tue Oct 25 07:30:20 2016
From: matthew.broadhead at nbmlaw.co.uk (Matthew Broadhead)
Date: Tue, 25 Oct 2016 09:30:20 +0200
Subject: sieve sending vacation message from vmail@ns1.domain.tld
In-Reply-To:
References: <71b362e8-3a69-076d-6376-2f3bbd39d0eb@nbmlaw.co.uk>
 <94941225-09d0-1440-1733-3884cc6dcd67@rename-it.nl>
 <7cdadba3-fd03-7d8c-1235-b428018a081c@nbmlaw.co.uk>
 <55712b3a-4812-f0a6-c9f9-59efcdac79f7@rename-it.nl>
 <8260ce16-bc94-e3a9-13d1-f1204e6ae525@rename-it.nl>
 <344d3d36-b905-5a90-e0ea-17d556076838@nbmlaw.co.uk>
 <9b47cb74-0aa7-4851-11f0-5a367341a63b@nbmlaw.co.uk>
 <4aa89a3c-937f-a1e6-3871-1df196ac7af2@rename-it.nl>
Message-ID: <28c02bd4-0b94-06bb-a4f8-74578d20aeb5@nbmlaw.co.uk>

Sorry to double post, but maybe there is some way to report the problem
to CentOS so that they upgrade the package there?

On 21/10/2016 10:22, Matthew Broadhead wrote:
> the server is using CentOS 7 and that is the package that comes
> through yum. everything is up to date. i am hesitant to install a
> new package manually as that could cause other compatibility issues?
> is there another way to test the configuration on the server?
>
> On 21/10/2016 01:07, Stephan Bosch wrote:
>> Op 10/20/2016 om 7:38 PM schreef Matthew Broadhead:
>>> do i need to provide more information?
>>>
>> It still doesn't make sense to me. I do notice that the version you're
>> using is ancient (dated 26-09-2013), which may well be the problem.
>>
>> Do you have the ability to upgrade?
>>
>> Regards,
>>
>> Stephan.
>> >>> On 19/10/2016 14:49, Matthew Broadhead wrote: >>>> /var/log/maillog showed this >>>> Oct 19 13:25:41 ns1 postfix/smtpd[1298]: 7599A2C19C6: >>>> client=unknown[127.0.0.1] >>>> Oct 19 13:25:41 ns1 postfix/cleanup[1085]: 7599A2C19C6: >>>> message-id= >>>> Oct 19 13:25:41 ns1 postfix/qmgr[1059]: 7599A2C19C6: >>>> from=, size=3190, nrcpt=1 (queue >>>> active) >>>> Oct 19 13:25:41 ns1 amavis[32367]: (32367-17) Passed CLEAN >>>> {RelayedInternal}, ORIGINATING LOCAL [80.30.255.180]:54566 >>>> [80.30.255.180] -> >>>> , Queue-ID: BFFA62C1965, Message-ID: >>>> , mail_id: >>>> TlJQ9xQhWjQk, Hits: -2.9, size: 2235, queued_as: 7599A2C19C6, >>>> dkim_new=foo:nbmlaw.co.uk, 531 ms >>>> Oct 19 13:25:41 ns1 postfix/smtp[1135]: BFFA62C1965: >>>> to=, relay=127.0.0.1[127.0.0.1]:10026, >>>> delay=0.76, delays=0.22/0/0/0.53, dsn=2.0.0, status=sent (250 2.0.0 >>>> from MTA(smtp:[127.0.0.1]:10027): 250 2.0.0 Ok: queued as 7599A2C19C6) >>>> Oct 19 13:25:41 ns1 postfix/qmgr[1059]: BFFA62C1965: removed >>>> Oct 19 13:25:41 ns1 postfix/smtpd[1114]: connect from >>>> ns1.nbmlaw.co.uk[217.174.253.19] >>>> Oct 19 13:25:41 ns1 postfix/smtpd[1114]: NOQUEUE: filter: RCPT from >>>> ns1.nbmlaw.co.uk[217.174.253.19]: : Sender >>>> address triggers FILTER smtp-amavis:[127.0.0.1]:10026; >>>> from= to= >>>> proto=SMTP helo= >>>> Oct 19 13:25:41 ns1 postfix/smtpd[1114]: 8A03F2C1965: >>>> client=ns1.nbmlaw.co.uk[217.174.253.19] >>>> Oct 19 13:25:41 ns1 postfix/cleanup[1085]: 8A03F2C1965: >>>> message-id= >>>> Oct 19 13:25:41 ns1 opendmarc[2430]: implicit authentication service: >>>> ns1.nbmlaw.co.uk >>>> Oct 19 13:25:41 ns1 opendmarc[2430]: 8A03F2C1965: ns1.nbmlaw.co.uk >>>> fail >>>> Oct 19 13:25:41 ns1 postfix/qmgr[1059]: 8A03F2C1965: >>>> from=, size=1077, nrcpt=1 (queue active) >>>> Oct 19 13:25:41 ns1 postfix/smtpd[1114]: disconnect from >>>> ns1.nbmlaw.co.uk[217.174.253.19] >>>> Oct 19 13:25:41 ns1 sSMTP[1895]: Sent mail for vmail at ns1.nbmlaw.co.uk >>>> (221 2.0.0 Bye) uid=996 username=vmail 
outbytes=971 >>>> Oct 19 13:25:41 ns1 postfix/smtpd[1898]: connect from >>>> unknown[127.0.0.1] >>>> Oct 19 13:25:41 ns1 postfix/pipe[1162]: 7599A2C19C6: >>>> to=, relay=dovecot, delay=0.46, >>>> delays=0/0/0/0.45, dsn=2.0.0, status=sent (delivered via dovecot >>>> service) >>>> Oct 19 13:25:41 ns1 postfix/qmgr[1059]: 7599A2C19C6: removed >>>> Oct 19 13:25:41 ns1 postfix/smtpd[1898]: E53472C19C6: >>>> client=unknown[127.0.0.1] >>>> Oct 19 13:25:41 ns1 postfix/cleanup[1085]: E53472C19C6: >>>> message-id= >>>> Oct 19 13:25:41 ns1 postfix/qmgr[1059]: E53472C19C6: >>>> from=, size=1619, nrcpt=1 (queue active) >>>> Oct 19 13:25:41 ns1 amavis[1885]: (01885-01) Passed CLEAN >>>> {RelayedInternal}, ORIGINATING LOCAL [217.174.253.19]:40960 >>>> [217.174.253.19] -> >>>> , Queue-ID: 8A03F2C1965, Message-ID: >>>> , mail_id: >>>> mOMO97yjVqjM, Hits: -2.211, size: 1301, queued_as: E53472C19C6, 296 ms >>>> Oct 19 13:25:41 ns1 postfix/smtp[1217]: 8A03F2C1965: >>>> to=, >>>> relay=127.0.0.1[127.0.0.1]:10026, delay=0.38, delays=0.08/0/0/0.29, >>>> dsn=2.0.0, status=sent (250 2.0.0 from MTA(smtp:[127.0.0.1]:10027): >>>> 250 2.0.0 Ok: queued as E53472C19C6) >>>> Oct 19 13:25:41 ns1 postfix/qmgr[1059]: 8A03F2C1965: removed >>>> Oct 19 13:25:42 ns1 postfix/pipe[1303]: E53472C19C6: >>>> to=, relay=dovecot, delay=0.14, >>>> delays=0/0/0/0.14, dsn=2.0.0, status=sent (delivered via dovecot >>>> service) >>>> Oct 19 13:25:42 ns1 postfix/qmgr[1059]: E53472C19C6: removed >>>> >>>> On 19/10/2016 13:54, Stephan Bosch wrote: >>>>> >>>>> Op 19-10-2016 om 13:47 schreef Matthew Broadhead: >>>>>> i am not 100% sure how to give you the information you require. >>>>>> >>>>>> my current setup in /etc/postfix/master.cf is >>>>>> flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d >>>>>> ${recipient} >>>>>> so recipient would presumably be user at domain.tld? or do you want >>>>>> the real email address of one of our users? is there some way i >>>>>> can output this information directly e.g. 
in logs? >>>>> I am no Postfix expert. I just need to know which values are being >>>>> passed to dovecot-lda with what options. I'd assume Postfix allows >>>>> logging the command line or at least the values of these variables. >>>>> >>>>>> the incoming email message could be anything? again i can run an >>>>>> example directly if you can advise the best way to do this >>>>> As long as the problem occurs with this message. >>>>> >>>>> BTW, it would also be helpful to have the Dovecot logs from this >>>>> delivery, with mail_debug configured to "yes". >>>>> >>>>> Regards, >>>>> >>>>> Stephan. >>>>> >>>>>> On 19/10/2016 12:54, Stephan Bosch wrote: >>>>>>> Also, please provide an example scenario; i.e., for one >>>>>>> problematic delivery provide: >>>>>>> >>>>>>> - The values of the variables substituted in the dovecot-lda >>>>>>> command line; i.e., provide that command line. >>>>>>> - The incoming e-mail message. >>>>>>> >>>>>>> Regards, >>>>>>> >>>>>>> Stephan. >>>>>>> >>>>>>> Op 19-10-2016 om 12:43 schreef Matthew Broadhead: >>>>>>>> dovecot is configured by sentora control panel to a certain >>>>>>>> extent. 
if you want those configs i can send them as well >>>>>>>> >>>>>>>> dovecot -n >>>>>>>> >>>>>>>> debug_log_path = /var/log/dovecot-debug.log >>>>>>>> dict { >>>>>>>> quotadict = >>>>>>>> mysql:/etc/sentora/configs/dovecot2/dovecot-dict-quota.conf >>>>>>>> } >>>>>>>> disable_plaintext_auth = no >>>>>>>> first_valid_gid = 12 >>>>>>>> first_valid_uid = 996 >>>>>>>> info_log_path = /var/log/dovecot-info.log >>>>>>>> lda_mailbox_autocreate = yes >>>>>>>> lda_mailbox_autosubscribe = yes >>>>>>>> listen = * >>>>>>>> lmtp_save_to_detail_mailbox = yes >>>>>>>> log_path = /var/log/dovecot.log >>>>>>>> log_timestamp = %Y-%m-%d %H:%M:%S >>>>>>>> mail_fsync = never >>>>>>>> mail_location = maildir:/var/sentora/vmail/%d/%n >>>>>>>> managesieve_notify_capability = mailto >>>>>>>> managesieve_sieve_capability = fileinto reject envelope >>>>>>>> encoded-character vacation subaddress comparator-i;ascii-numeric >>>>>>>> relational regex imap4flags copy include variables body enotify >>>>>>>> environment mailbox date ihave >>>>>>>> passdb { >>>>>>>> args = /etc/sentora/configs/dovecot2/dovecot-mysql.conf >>>>>>>> driver = sql >>>>>>>> } >>>>>>>> plugin { >>>>>>>> acl = vfile:/etc/dovecot/acls >>>>>>>> quota = maildir:User quota >>>>>>>> sieve = ~/dovecot.sieve >>>>>>>> sieve_dir = ~/sieve >>>>>>>> sieve_global_dir = /var/sentora/sieve/ >>>>>>>> sieve_global_path = /var/sentora/sieve/globalfilter.sieve >>>>>>>> sieve_max_script_size = 1M >>>>>>>> sieve_vacation_send_from_recipient = yes >>>>>>>> trash = /etc/sentora/configs/dovecot2/dovecot-trash.conf >>>>>>>> } >>>>>>>> protocols = imap pop3 lmtp sieve >>>>>>>> service auth { >>>>>>>> unix_listener /var/spool/postfix/private/auth { >>>>>>>> group = postfix >>>>>>>> mode = 0666 >>>>>>>> user = postfix >>>>>>>> } >>>>>>>> unix_listener auth-userdb { >>>>>>>> group = mail >>>>>>>> mode = 0666 >>>>>>>> user = vmail >>>>>>>> } >>>>>>>> } >>>>>>>> service dict { >>>>>>>> unix_listener dict { >>>>>>>> group = mail >>>>>>>> mode = 0666 
>>>>>>>> user = vmail >>>>>>>> } >>>>>>>> } >>>>>>>> service imap-login { >>>>>>>> inet_listener imap { >>>>>>>> port = 143 >>>>>>>> } >>>>>>>> process_limit = 500 >>>>>>>> process_min_avail = 2 >>>>>>>> } >>>>>>>> service imap { >>>>>>>> vsz_limit = 256 M >>>>>>>> } >>>>>>>> service managesieve-login { >>>>>>>> inet_listener sieve { >>>>>>>> port = 4190 >>>>>>>> } >>>>>>>> process_min_avail = 0 >>>>>>>> service_count = 1 >>>>>>>> vsz_limit = 64 M >>>>>>>> } >>>>>>>> service pop3-login { >>>>>>>> inet_listener pop3 { >>>>>>>> port = 110 >>>>>>>> } >>>>>>>> } >>>>>>>> ssl_cert = >>>>>>> ssl_key = >>>>>>> ssl_protocols = !SSLv2 !SSLv3 >>>>>>>> userdb { >>>>>>>> driver = prefetch >>>>>>>> } >>>>>>>> userdb { >>>>>>>> args = /etc/sentora/configs/dovecot2/dovecot-mysql.conf >>>>>>>> driver = sql >>>>>>>> } >>>>>>>> protocol lda { >>>>>>>> mail_fsync = optimized >>>>>>>> mail_plugins = quota sieve >>>>>>>> postmaster_address = postmaster at ns1.nbmlaw.co.uk >>>>>>>> } >>>>>>>> protocol imap { >>>>>>>> imap_client_workarounds = delay-newmail >>>>>>>> mail_fsync = optimized >>>>>>>> mail_max_userip_connections = 60 >>>>>>>> mail_plugins = quota imap_quota trash >>>>>>>> } >>>>>>>> protocol lmtp { >>>>>>>> mail_plugins = quota sieve >>>>>>>> } >>>>>>>> protocol pop3 { >>>>>>>> mail_plugins = quota >>>>>>>> pop3_client_workarounds = outlook-no-nuls oe-ns-eoh >>>>>>>> pop3_uidl_format = %08Xu%08Xv >>>>>>>> } >>>>>>>> protocol sieve { >>>>>>>> managesieve_implementation_string = Dovecot Pigeonhole >>>>>>>> managesieve_max_compile_errors = 5 >>>>>>>> managesieve_max_line_length = 65536 >>>>>>>> } >>>>>>>> >>>>>>>> managesieve.sieve >>>>>>>> >>>>>>>> require ["fileinto","vacation"]; >>>>>>>> # rule:[vacation] >>>>>>>> if true >>>>>>>> { >>>>>>>> vacation :days 1 :subject "Vacation subject" text: >>>>>>>> i am currently out of the office >>>>>>>> >>>>>>>> trying some line breaks >>>>>>>> >>>>>>>> ...zzz >>>>>>>> . 
>>>>>>>> ; >>>>>>>> } >>>>>>>> >>>>>>>> On 19/10/2016 12:29, Stephan Bosch wrote: >>>>>>>>> Could you send your configuration (output from `dovecot -n`)? >>>>>>>>> >>>>>>>>> Also, please provide an example scenario; i.e., for one >>>>>>>>> problematic delivery provide: >>>>>>>>> >>>>>>>>> - The values of the variables substituted below. >>>>>>>>> >>>>>>>>> - The incoming e-mail message. >>>>>>>>> >>>>>>>>> - The Sieve script (or at least that vacation command). >>>>>>>>> >>>>>>>>> Regards, >>>>>>>>> >>>>>>>>> >>>>>>>>> Stephan. >>>>>>>>> >>>>>>>>> Op 19-10-2016 om 11:42 schreef Matthew Broadhead: >>>>>>>>>> hi, does anyone have any ideas about this issue? i have not >>>>>>>>>> had any response yet >>>>>>>>>> >>>>>>>>>> i tried changing /etc/postfix/master.cf line: >>>>>>>>>> dovecot unix - n n - - pipe >>>>>>>>>> flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d >>>>>>>>>> ${recipient} >>>>>>>>>> >>>>>>>>>> to >>>>>>>>>> flags=DRhu user=vmail:mail >>>>>>>>>> argv=/usr/libexec/dovecot/dovecot-lda -f ${sender} -d >>>>>>>>>> ${user}@${nexthop} -a ${original_recipient} >>>>>>>>>> >>>>>>>>>> and >>>>>>>>>> -d ${user}@${domain} -a {recipient} -f ${sender} -m ${extension} >>>>>>>>>> >>>>>>>>>> but it didn't work >>>>>>>>>> >>>>>>>>>> On 12/10/2016 13:57, Matthew Broadhead wrote: >>>>>>>>>>> I have a server running >>>>>>>>>>> centos-release-7-2.1511.el7.centos.2.10.x86_64 with dovecot >>>>>>>>>>> version 2.2.10. I am also using roundcube for webmail. when a >>>>>>>>>>> vacation filter (reply with message) is created in roundcube >>>>>>>>>>> it adds a rule to managesieve.sieve in the user's mailbox. >>>>>>>>>>> everything works fine except the reply comes from >>>>>>>>>>> vmail at ns1.domain.tld instead of user at domain.tld. >>>>>>>>>>> ns1.domain.tld is the fully qualified name of the server. >>>>>>>>>>> >>>>>>>>>>> it used to work fine on my old CentOS 6 server so I am not >>>>>>>>>>> sure what has changed. 
Can anyone point me in the direction
>>>>>>>>>>> of where I can configure this behaviour?

--
Matthew Broadhead
NBM Solicitors
32 Rainsford Road Chelmsford Essex CM1 2QG
Tel: 01245 269909 Fax: 01245 261932
www.nbmlaw.co.uk

From aki.tuomi at dovecot.fi Tue Oct 25 07:32:40 2016
From: aki.tuomi at dovecot.fi (Aki Tuomi)
Date: Tue, 25 Oct 2016 10:32:40 +0300
Subject: Break up massive INBOX?
In-Reply-To: <6ef68b60411c3adb56bb01d5d2f6bca4@insiberia.net>
References: <788ded37-ab3c-be54-398b-847e62967c29@dovecot.fi>
 <6ef68b60411c3adb56bb01d5d2f6bca4@insiberia.net>
Message-ID:

On 25.10.2016 10:21, mrobti at insiberia.net wrote:
> On 2016-10-25 06:51, Aki Tuomi wrote:
>> On 25.10.2016 09:43, mrobti at insiberia.net wrote:
>>> I have inherited a 10+ GB mailbox which dsync converted to maildir
>>> very nicely.
But 15GB in tens of thousands of messages is too much in >>> one INBOX when used in webmail. >>> >>> I guess it's best to divide the messages into subfolders for date. >>> >>> Is there a server-side way to safely move messages into new subfolders >>> so that Dovecot indexes and UIDs won't break? If I write a shell >>> script to move maildir files based on date of the files, I think >>> Dovecot would not like this? >>> >>> Also, is there any way to automatically create delivery or archive >>> folders by date eg., on the 1st of each month? And deliver mails to >>> the subfolder for the current month? I know I can create Sieve rules >>> for such by hand, but prefer if there is a plugin or something to do >>> this automatically. >> >> Hi! >> >> doveadm move is probably your friend here, see man doveadm-search-query >> for details on what you can search for. > > Thank you! I think I can script a few years of this: > > doveadm mailbox create -u biguser -s 'May 2012' > doveadm move -u biguser 'May 2012' mailbox INBOX BEFORE 2012-06-01 > SINCE 2012-05-01 > > Is there any plugin that can break up mailbox delivery automatically? > I guess I can run the commands above on first of each month if that's > the best way. You could see if Sieve has suitable function(s) for this. Aki From aki.tuomi at dovecot.fi Tue Oct 25 08:59:25 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Tue, 25 Oct 2016 11:59:25 +0300 Subject: keent() from Tika - with doveadm In-Reply-To: References: <6177676.109.1477236435200@appsuite-dev.open-xchange.com> <191657457.111.1477237004555@appsuite-dev.open-xchange.com> <1063773824.113.1477239267493@appsuite-dev.open-xchange.com> <445708024.118.1477243243399@appsuite-dev.open-xchange.com> <429756207.896.1477288110361@appsuite-dev.open-xchange.com> <3dc312ae-7def-0097-f664-61df0f56969f@dovecot.fi> <6507f93d-b2d6-5700-d450-0cca4e87dc06@dovecot.fi> Message-ID: This seems to be some kind of clucene internal error. 
Aki On 24.10.2016 17:21, Larry Rosenman wrote: > that seems to fix this kevent() problem, but I got the following lucene > assert. Is that because of previous fails? > > Also, while I have your attention, is fts_autoindex supposed to work > accross NAMESPACES? > > doveadm(mrm): Debug: Mailbox LISTS/vse-l: Opened mail UID=39483 because: > fts indexing > doveadm(mrm): Debug: Mailbox LISTS/vse-l: Opened mail UID=39484 because: > fts indexing > doveadm(mrm): Debug: Mailbox LISTS/vse-l: Opened mail UID=39485 because: > fts indexing > doveadm(mrm): Debug: Mailbox LISTS/vse-l: Opened mail UID=39486 because: > fts indexing > Assertion failed: (numDocsInStore*8 == directory->fileLength( > (docStoreSegment + "." + IndexFileNames::FIELDS_INDEX_EXTENSION).c_str() > )), function closeDocStore, file > src/core/CLucene/index/DocumentsWriter.cpp, line 210. > > Program received signal SIGABRT, Aborted. > 0x00000008014e6f7a in thr_kill () from /lib/libc.so.7 > (gdb) bt full > #0 0x00000008014e6f7a in thr_kill () from /lib/libc.so.7 > No symbol table info available. > #1 0x00000008014e6f66 in raise () from /lib/libc.so.7 > No symbol table info available. > #2 0x00000008014e6ee9 in abort () from /lib/libc.so.7 > No symbol table info available. > #3 0x000000080154dee1 in __assert () from /lib/libc.so.7 > No symbol table info available. > #4 0x0000000803ea1762 in lucene::index::DocumentsWriter::closeDocStore() () > from /usr/local/lib/libclucene-core.so.1 > No symbol table info available. > #5 0x0000000803ea3d89 in lucene::index::DocumentsWriter::flush(bool) () > from /usr/local/lib/libclucene-core.so.1 > No symbol table info available. > #6 0x0000000803ed26bb in lucene::index::IndexWriter::doFlush(bool) () > from /usr/local/lib/libclucene-core.so.1 > No symbol table info available. > #7 0x0000000803ece25e in lucene::index::IndexWriter::flush(bool, bool) () > from /usr/local/lib/libclucene-core.so.1 > No symbol table info available. 
> #8 0x0000000803ececbe in > lucene::index::IndexWriter::addDocument(lucene::document::Document*, > lucene::analysis::Analyzer*) () > from /usr/local/lib/libclucene-core.so.1 > No symbol table info available. > ---Type to continue, or q to quit--- > #9 0x0000000803b8cd55 in lucene_index_build_flush (index=0x801c1b640) > at lucene-wrapper.cc:552 > analyzer = 0x801c251c0 > ret = 0 > err = @0x801cd90d0: { > _awhat = 0x801cd9108 "Return-Path: lerctr.org at gmail.com>\nDelivered-To: mrm at lerctr.org\n", > _twhat = 0x58 , > error_number = 30249224} > #10 0x0000000803b8c42e in lucene_index_build_more (index=0x801c1b640, > uid=39486, part_idx=0, > data=0x806041000 "", > size=45, > hdr_name=0x801c1a520 "Return-Path") at lucene-wrapper.cc:572 > id = > L"\x1cc8c40\b\xffffd970\x7fff\xffffd960\x7fff\x1190eba\b\x1cc0c00\b\x1191739\b\x1cc8c40\b\x1190eba\b\xffffd990\x7fff-\000\000\001-" > namesize = 34378158489 > datasize = 140737488345424 > dest = 0x801190eba > L"\x45880124\xff458aff\xb60f0124\xc48348c0\xfc35d10\x4855001f\x8348e589\x8d4840ec\x8948e075\x8b48f07d\x8b48f07d\x8948107f\x8b48e87d\x8b48e87d\x140bf\x458b4800\x888b48e8\510\x48f92948\x1488889\x8b480000\xc748e845\x14080" > dest_free = 0x7fffffffd920 > L"\xffffd950\x7fff\x1191199\b\x1cc8c40\b\xffffd970\x7fff\xffffd960\x7fff\x1190eba\b\x1cc0c00\b\x1191739\b\x1cc8c40\b\x1190eba\b\xffffd990\x7fff-" > ---Type to continue, or q to quit--- > token_flag = 0 > #11 0x0000000803b8a420 in fts_backend_lucene_update_build_more ( > _ctx=0x801c21240, > data=0x806041000 "", > size=45) > at fts-backend-lucene.c:432 > _data_stack_cur_id = 6 > ctx = 0x801c21240 > backend = 0x801c3a200 > ret = 8 > #12 0x000000080220e035 in fts_backend_update_build_more (ctx=0x801c21240, > data=0x806041000 "", > size=45) > at fts-api.c:193 > No locals. 
> #13 0x000000080221015b in fts_build_full_words (ctx=0x7fffffffdc98, > data=0x806041000 "", > size=45, > last=true) at fts-build-mail.c:402 > i = 45 > #14 0x000000080220fd45 in fts_build_data (ctx=0x7fffffffdc98, > data=0x806041000 "", > size=45, > last=true) at fts-build-mail.c:423 > No locals. > #15 0x000000080221067d in fts_build_unstructured_header (ctx=0x7fffffffdc98, > hdr=0x801ccf118) at fts-build-mail.c:104 > data = 0x806041000 "" > ---Type to continue, or q to quit--- > buf = 0x0 > i = 45 > ret = 18164334 > #16 0x000000080220fa54 in fts_build_mail_header (ctx=0x7fffffffdc98, > block=0x7fffffffdc40) at fts-build-mail.c:179 > hdr = 0x801ccf118 > key = {uid = 39486, type = FTS_BACKEND_BUILD_KEY_HDR, > part = 0x801c09c58, hdr_name = 0x801c4ba20 "Return-Path", > body_content_type = 0x0, body_content_disposition = 0x0} > ret = 32767 > #17 0x000000080220f292 in fts_build_mail_real (update_ctx=0x801c21240, > mail=0x801c63040) at fts-build-mail.c:548 > ctx = {mail = 0x801c63040, update_ctx = 0x801c21240, > content_type = 0x0, content_disposition = 0x0, body_parser = 0x0, > word_buf = 0x0, pending_input = 0x0, cur_user_lang = 0x0} > input = 0x801cc9030 > parser = 0x801c2f040 > decoder = 0x801ccf100 > raw_block = {part = 0x801c09c58, hdr = 0x801c53900, data = 0x0, > size = 0} > block = {part = 0x801c09c58, hdr = 0x801ccf118, > data = 0x7fffffffdc90 "0\220\314\001\b", size = 0} > prev_part = 0x801c09c58 > parts = 0x4ffffdca8 > ---Type to continue, or q to quit--- > skip_body = false > body_part = false > body_added = false > binary_body = 255 > error = 0x801cc88c0 "\200\212\314\001\b" > ret = 1 > #18 0x000000080220ee72 in fts_build_mail (update_ctx=0x801c21240, > mail=0x801c63040) at fts-build-mail.c:594 > _data_stack_cur_id = 5 > ret = 8 > #19 0x000000080221a626 in fts_mail_index (_mail=0x801c63040) > at fts-storage.c:503 > ft = 0x801c196e0 > flist = 0x801c5dbd8 > #20 0x0000000802217d40 in fts_mail_precache (_mail=0x801c63040) > at fts-storage.c:522 > 
_data_stack_cur_id = 4 > mail = 0x801c63040 > fmail = 0x801c634f0 > ft = 0x801c196e0 > #21 0x0000000800d3d992 in mail_precache (mail=0x801c63040) at mail.c:420 > _data_stack_cur_id = 3 > p = 0x801c63040 > #22 0x0000000000433b59 in cmd_index_box_precache (box=0x8074edc40) > ---Type to continue, or q to quit--- > at doveadm-mail-index.c:75 > status = {messages = 5342, recent = 0, unseen = 0, > uidvalidity = 1362362144, uidnext = 43009, first_unseen_seq = 0, > first_recent_uid = 43007, last_cached_seq = 0, highest_modseq = 0, > highest_pvt_modseq = 0, keywords = 0x0, permanent_flags = 0, > permanent_keywords = 0, allow_new_keywords = 0, > nonpermanent_modseqs = 0, no_modseq_tracking = 0, have_guids = 1, > have_save_guids = 0, have_only_guid128 = 0} > trans = 0x801c3a800 > search_args = 0x0 > ctx = 0x801c1c040 > mail = 0x801c63040 > metadata = {guid = '\000' , virtual_size = 0, > physical_size = 0, first_save_date = 0, cache_fields = 0x0, > precache_fields = (MAIL_FETCH_STREAM_HEADER | > MAIL_FETCH_STREAM_BODY | MAIL_FETCH_RECEIVED_DATE | MAIL_FETCH_SAVE_DATE | > MAIL_FETCH_PHYSICAL_SIZE | MAIL_FETCH_VIRTUAL_SIZE | > MAIL_FETCH_UIDL_BACKEND | MAIL_FETCH_GUID | MAIL_FETCH_POP3_ORDER), > backend_ns_prefix = 0x0, backend_ns_type = (unknown: 0)} > seq = 1 > counter = 1819 > max = 5342 > ret = 0 > #23 0x0000000000433907 in cmd_index_box (ctx=0x801c2ac40, info=0x801c5f0c0) > at doveadm-mail-index.c:130 > ---Type to continue, or q to quit--- > box = 0x8074edc40 > status = {messages = 4294958944, recent = 32767, unseen = 14577888, > uidvalidity = 8, uidnext = 4294958944, first_unseen_seq = > 16809983, > first_recent_uid = 29749440, last_cached_seq = 8, > highest_modseq = 34389277760, highest_pvt_modseq = > 140737488346996, > keywords = 0x7fffffffdf90, permanent_flags = 18334301, > permanent_keywords = 0, allow_new_keywords = 0, > nonpermanent_modseqs = 0, no_modseq_tracking = 1, have_guids = 0, > have_save_guids = 0, have_only_guid128 = 0} > ret = 0 > #24 0x00000000004335ee in 
cmd_index_run (_ctx=0x801c2ac40, user=0x801c45040) > at doveadm-mail-index.c:201 > _data_stack_cur_id = 2 > ctx = 0x801c2ac40 > iter_flags = (MAILBOX_LIST_ITER_NO_AUTO_BOXES | > MAILBOX_LIST_ITER_STAR_WITHIN_NS | MAILBOX_LIST_ITER_RETURN_NO_FLAGS) > ns_mask = (MAIL_NAMESPACE_TYPE_PRIVATE | MAIL_NAMESPACE_TYPE_SHARED > | MAIL_NAMESPACE_TYPE_PUBLIC) > iter = 0x801c2bc40 > info = 0x801c5f0c0 > i = 32767 > ret = 0 > #25 0x000000000042b90a in doveadm_mail_next_user (ctx=0x801c2ac40, > cctx=0x7fffffffe350, error_r=0x7fffffffe0f8) at doveadm-mail.c:404 > ---Type to continue, or q to quit--- > input = {module = 0x0, service = 0x484aa6 "doveadm", > username = 0x7fffffffef58 "mrm", session_id = 0x0, > session_id_prefix = 0x0, session_create_time = 0, local_ip = { > family = 0, u = {ip6 = {__u6_addr = { > __u6_addr8 = '\000' , __u6_addr16 = {0, > 0, > 0, 0, 0, 0, 0, 0}, __u6_addr32 = {0, 0, 0, 0}}}, ip4 = { > s_addr = 0}}}, remote_ip = {family = 0, u = {ip6 = { > __u6_addr = {__u6_addr8 = '\000' , > __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 0}, __u6_addr32 = {0, > 0, > 0, 0}}}, ip4 = {s_addr = 0}}}, local_port = 0, > remote_port = 0, userdb_fields = 0x0, > flags_override_add = (unknown: 0), > flags_override_remove = (unknown: 0), no_userdb_lookup = 0, > debug = 0} > error = 0x7fffffffe420 "\200\347\377\377\377\177" > ip = 0x8011deee3 "" > ret = 0 > #26 0x000000000042b5bc in doveadm_mail_single_user (ctx=0x801c2ac40, > cctx=0x7fffffffe350, error_r=0x7fffffffe0f8) at doveadm-mail.c:435 > No locals. 
> #27 0x000000000042d50a in doveadm_mail_cmd_exec (ctx=0x801c2ac40, > cctx=0x7fffffffe350, wildcard_user=0x0) at doveadm-mail.c:596 > ret = 32767 > error = 0x801c2ae18 "P\256\302\001\b" > ---Type to continue, or q to quit--- > #28 0x000000000042d0a5 in doveadm_cmd_ver2_to_mail_cmd_wrapper ( > cctx=0x7fffffffe350) at doveadm-mail.c:1061 > mctx = 0x801c2ac40 > wildcard_user = 0x0 > fieldstr = 0x7fffffffe1e0 "\300\342\377\377\377\177" > pargv = {arr = {buffer = 0x801c2ae98, element_size = 8}, > v = 0x801c2ae98, v_modifiable = 0x801c2ae98} > full_args = {arr = {buffer = 0x801c2ae18, element_size = 8}, > v = 0x801c2ae18, v_modifiable = 0x801c2ae18} > i = 7 > mail_cmd = {alloc = 0x433210 , > name = 0x48da32 "index", > usage_args = 0x488030 "[-u |-A] [-S ] [-q] [-n > ] "} > args_pos = 0 > #29 0x0000000000443cfe in doveadm_cmd_run_ver2 (argc=2, argv=0x7fffffffe438, > cctx=0x7fffffffe350) at doveadm-cmd.c:523 > param = 0x801c06ce0 > pargv = {arr = {buffer = 0x801c06a38, element_size = 104}, > v = 0x801c06a38, v_modifiable = 0x801c06a38} > opts = {arr = {buffer = 0x801c06800, element_size = 32}, > v = 0x801c06800, v_modifiable = 0x801c06800} > pargc = 7 > c = -1 > ---Type to continue, or q to quit--- > li = 32767 > pool = 0x801c06768 > optbuf = 0x801c06780 > #30 0x00000000004437f4 in doveadm_cmd_try_run_ver2 ( > cmd_name=0x7fffffffe7a3 "index", argc=2, argv=0x7fffffffe438, > cctx=0x7fffffffe350) at doveadm-cmd.c:446 > cmd = 0x801c4db98 > #31 0x0000000000447f51 in main (argc=2, argv=0x7fffffffe438) at > doveadm.c:379 > service_flags = (MASTER_SERVICE_FLAG_STANDALONE | > MASTER_SERVICE_FLAG_KEEP_CONFIG_OPEN) > cctx = {cmd = 0x801c4db98, argc = 7, argv = 0x801c06a70, > username = 0x7fffffffef58 "mrm", cli = true, tcp_server = false, > local_ip = {family = 0, u = {ip6 = {__u6_addr = { > __u6_addr8 = '\000' , __u6_addr16 = {0, > 0, > 0, 0, 0, 0, 0, 0}, __u6_addr32 = {0, 0, 0, 0}}}, ip4 = { > s_addr = 0}}}, remote_ip = {family = 0, u = {ip6 = { > __u6_addr = {__u6_addr8 = '\000' , 
> __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 0}, __u6_addr32 = {0, > 0, > 0, 0}}}, ip4 = {s_addr = 0}}}, local_port = 0, > remote_port = 0, conn = 0x0} > cmd_name = 0x7fffffffe7a3 "index" > i = 6 > quick_init = false > c = -1 > (gdb) > > On Mon, Oct 24, 2016 at 4:34 AM, Aki Tuomi wrote: > >> Hi! >> >> We found some problems with those patches, and ended up doing slightly >> different fix: >> >> https://github.com/dovecot/core/compare/3e41b3d%5E...cca98b.patch >> >> Aki >> >> On 24.10.2016 10:17, Aki Tuomi wrote: >>> Hi! >>> >>> Can you try these two patches? >>> >>> Aki >>> >>> >>> On 24.10.2016 08:48, Aki Tuomi wrote: >>>> Ok so that timeval makes no sense. We'll look into it. >>>> >>>> Aki >>>> >>>>> On October 24, 2016 at 12:22 AM Larry Rosenman >> wrote: >>>>> >>>>> doveadm(mrm): Debug: http-client: conn 127.0.0.1:9998 [1]: Got 200 >> response >>>>> for request [Req38: PUT http://localhost:9998/tika/] (took 296 ms + 8 >> ms in >>>>> queue) >>>>> doveadm(mrm): Panic: kevent(): Invalid argument >>>>> >>>>> Program received signal SIGABRT, Aborted. 
>>>>> 0x00000008014e6f7a in thr_kill () from /lib/libc.so.7 >>>>> (gdb) fr 6 >>>>> #6 0x00000008011a3e49 in io_loop_handler_run_internal >> (ioloop=0x801c214e0) >>>>> at ioloop-kqueue.c:131 >>>>> 131 i_panic("kevent(): %m"); >>>>> (gdb) p ts >>>>> $1 = {tv_sec = 34389923520, tv_nsec = 140737488345872000} >>>>> (gdb) p errno >>>>> $2 = 22 >>>>> (gdb) p ret >>>>> $3 = -1 >>>>> (gdb) p *ioloop >>>>> $4 = {prev = 0x801c21080, cur_ctx = 0x0, io_files = 0x801c4f980, >>>>> next_io_file = 0x0, timeouts = 0x801d17540, timeouts_new = {arr = { >>>>> buffer = 0x801cd9700, element_size = 8}, v = 0x801cd9700, >>>>> v_modifiable = 0x801cd9700}, handler_context = 0x801d17560, >>>>> notify_handler_context = 0x0, max_fd_count = 0, >>>>> time_moved_callback = 0x800d53bb0 , >>>>> next_max_time = 1477257580, ioloop_wait_usecs = 27148, >> io_pending_count = >>>>> 1, >>>>> running = 1, iolooping = 1} >>>>> (gdb) p* ctx >>>>> $5 = {kq = 21, deleted_count = 0, events = {arr = {buffer = >> 0x801cd9740, >>>>> element_size = 32}, v = 0x801cd9740, v_modifiable = 0x801cd9740}} >>>>> (gdb) p *events >>>>> $6 = {ident = 22, filter = -1, flags = 0, fflags = 0, data = 8, >>>>> udata = 0x801c4f980} >>>>> (gdb) >>>>> >>>>> thebighonker.lerctr.org ~ $ ps auxw|grep doveadm >>>>> mrm 46965 0.0 0.2 108516 55264 0 I+ 4:19PM 0:02.28 >> gdb >>>>> /usr/local/bin/doveadm (gdb7111) >>>>> mrm 46985 0.0 0.0 81236 15432 0 TX 4:19PM 0:03.51 >>>>> /usr/local/bin/doveadm -D -vvvvvvv index * >>>>> ler 47221 0.0 0.0 18856 2360 1 S+ 4:21PM 0:00.00 >> grep >>>>> doveadm >>>>> thebighonker.lerctr.org ~ $ sudo lsof -p 46985 >>>>> Password: >>>>> COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE >> NAME >>>>> doveadm 46985 mrm cwd VDIR 22,2669215774 152 4 >>>>> /home/mrm >>>>> doveadm 46985 mrm rtd VDIR 19,766509061 28 4 / >>>>> doveadm 46985 mrm txt VREG 119,3584295129 1714125 182952 >>>>> /usr/local/bin/doveadm >>>>> doveadm 46985 mrm txt VREG 19,766509061 132272 14382 >>>>> /libexec/ld-elf.so.1 >>>>> doveadm 46985 mrm txt 
VREG 22,2669215774 6920 10680 >>>>> /home/mrm/mail/TRAVEL/.imap/hawaiian.airlines/dovecot.index.log >>>>> doveadm 46985 mrm txt VREG 22,2669215774 7224 10716 >>>>> /home/mrm/mail/TRAVEL/.imap/priceline/dovecot.index.log >>>>> doveadm 46985 mrm txt VREG 22,2669215774 11080 10650 >>>>> /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index.log >>>>> doveadm 46985 mrm txt VREG 22,2669215774 2968 10679 >>>>> /home/mrm/mail/TRAVEL/.imap/hawaiian.airlines/dovecot.index.cache >>>>> doveadm 46985 mrm txt VREG 22,2669215774 3108 10715 >>>>> /home/mrm/mail/TRAVEL/.imap/priceline/dovecot.index.cache >>>>> doveadm 46985 mrm txt VREG 22,2669215774 6520 139902 >>>>> /home/mrm/mail/.imap/Sent/dovecot.index.log >>>>> doveadm 46985 mrm txt VREG 22,2669215774 9236 10648 >>>>> /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index.cache >>>>> doveadm 46985 mrm txt VREG 22,2669215774 174892 143343 >>>>> /home/mrm/mail/.imap/Sent/dovecot.index.cache >>>>> doveadm 46985 mrm txt VREG 22,2669215774 32656 143058 >>>>> /home/mrm/mail/.imap/INBOX/dovecot.index.log >>>>> doveadm 46985 mrm txt VREG 19,766509061 720 30627 >>>>> /usr/share/i18n/csmapper/CP/CP1251%UCS.mps >>>>> doveadm 46985 mrm txt VREG 19,766509061 720 30630 >>>>> /usr/share/i18n/csmapper/CP/CP1252%UCS.mps >>>>> doveadm 46985 mrm txt VREG 19,766509061 89576 6846 >>>>> /lib/libz.so.6 >>>>> doveadm 46985 mrm txt VREG 19,766509061 62008 5994 >>>>> /lib/libcrypt.so.5 >>>>> doveadm 46985 mrm txt VREG 119,3584295129 6725689 183611 >>>>> /usr/local/lib/dovecot/libdovecot-storage.so.0.0.0 >>>>> doveadm 46985 mrm txt VREG 119,3584295129 3162259 183615 >>>>> /usr/local/lib/dovecot/libdovecot.so.0.0.0 >>>>> doveadm 46985 mrm txt VREG 19,766509061 1649944 4782 >>>>> /lib/libc.so.7 >>>>> doveadm 46985 mrm txt VREG 119,3584295129 80142 183550 >>>>> /usr/local/lib/dovecot/lib15_notify_plugin.so >>>>> doveadm 46985 mrm txt VREG 119,3584295129 652615 183556 >>>>> /usr/local/lib/dovecot/lib20_fts_plugin.so >>>>> doveadm 46985 mrm txt VREG 119,3584295129 
2730888 268825 >>>>> /usr/local/lib/libicui18n.so.57.1 >>>>> doveadm 46985 mrm txt VREG 119,3584295129 1753976 268849 >>>>> /usr/local/lib/libicuuc.so.57.1 >>>>> doveadm 46985 mrm txt VREG 119,3584295129 1704 268821 >>>>> /usr/local/lib/libicudata.so.57.1 >>>>> doveadm 46985 mrm txt VREG 19,766509061 102560 6745 >>>>> /lib/libthr.so.3 >>>>> doveadm 46985 mrm txt VREG 19,766509061 184712 5795 >>>>> /lib/libm.so.5 >>>>> doveadm 46985 mrm txt VREG 19,766509061 774000 5642 >>>>> /usr/lib/libc++.so.1 >>>>> doveadm 46985 mrm txt VREG 19,766509061 103304 5742 >>>>> /lib/libcxxrt.so.1 >>>>> doveadm 46985 mrm txt VREG 19,766509061 56344 7436 >>>>> /lib/libgcc_s.so.1 >>>>> doveadm 46985 mrm txt VREG 119,3584295129 349981 183782 >>>>> /usr/local/lib/dovecot/lib21_fts_lucene_plugin.so >>>>> doveadm 46985 mrm txt VREG 119,3584295129 1969384 113258 >>>>> /usr/local/lib/libclucene-core.so.2.3.3.4 >>>>> doveadm 46985 mrm txt VREG 119,3584295129 128992 113261 >>>>> /usr/local/lib/libclucene-shared.so.2.3.3.4 >>>>> doveadm 46985 mrm txt VREG 119,3584295129 143141 183578 >>>>> /usr/local/lib/dovecot/lib90_stats_plugin.so >>>>> doveadm 46985 mrm txt VREG 119,3584295129 37368 151926 >>>>> /usr/local/lib/dovecot/doveadm/lib10_doveadm_sieve_plugin.so >>>>> doveadm 46985 mrm txt VREG 119,3584295129 693808 151924 >>>>> /usr/local/lib/dovecot-2.2-pigeonhole/libdovecot-sieve.so.0.0.0 >>>>> doveadm 46985 mrm txt VREG 119,3584295129 146477 183599 >>>>> /usr/local/lib/dovecot/libdovecot-lda.so.0.0.0 >>>>> doveadm 46985 mrm txt VREG 119,3584295129 13823 183780 >>>>> /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_lucene_plugin.so >>>>> doveadm 46985 mrm txt VREG 119,3584295129 88081 183527 >>>>> /usr/local/lib/dovecot/doveadm/lib20_doveadm_fts_plugin.so >>>>> doveadm 46985 mrm txt VREG 19,766509061 8304 6330 >>>>> /usr/lib/i18n/libiconv_std.so.4 >>>>> doveadm 46985 mrm txt VREG 19,766509061 6744 6318 >>>>> /usr/lib/i18n/libUTF8.so.4 >>>>> doveadm 46985 mrm txt VREG 19,766509061 4384 6336 >>>>> 
/usr/lib/i18n/libmapper_none.so.4 >>>>> doveadm 46985 mrm txt VREG 19,766509061 7584 6345 >>>>> /usr/lib/i18n/libmapper_std.so.4 >>>>> doveadm 46985 mrm 0u VCHR 0,188 0t390889 188 >>>>> /dev/pts/0 >>>>> doveadm 46985 mrm 1u VCHR 0,188 0t390889 188 >>>>> /dev/pts/0 >>>>> doveadm 46985 mrm 2u VCHR 0,188 0t390889 188 >>>>> /dev/pts/0 >>>>> doveadm 46985 mrm 3u PIPE 0xfffff806fdf505d0 16384 >>>>> ->0xfffff806fdf50730 >>>>> doveadm 46985 mrm 4u PIPE 0xfffff806fdf50730 0 >>>>> ->0xfffff806fdf505d0 >>>>> doveadm 46985 mrm 5u KQUEUE 0xfffff806350b0600 >>>>> count=0, state=0 >>>>> doveadm 46985 mrm 6w FIFO 163,709754999 0t0 29707 >>>>> /var/run/dovecot/stats-mail >>>>> doveadm 46985 mrm 7u VREG 22,2669215774 11080 10650 >>>>> /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index.log >>>>> doveadm 46985 mrm 8u VREG 22,2669215774 536 137895 >>>>> /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index >>>>> doveadm 46985 mrm 9u VREG 22,2669215774 6920 10680 >>>>> /home/mrm/mail/TRAVEL/.imap/hawaiian.airlines/dovecot.index.log >>>>> doveadm 46985 mrm 10u VREG 22,2669215774 2968 10679 >>>>> /home/mrm/mail/TRAVEL/.imap/hawaiian.airlines/dovecot.index.cache >>>>> doveadm 46985 mrm 11u VREG 22,2669215774 6520 139902 >>>>> /home/mrm/mail/.imap/Sent/dovecot.index.log >>>>> doveadm 46985 mrm 12u VREG 22,2669215774 9288 139905 >>>>> /home/mrm/mail/.imap/Sent/dovecot.index >>>>> doveadm 46985 mrm 13u VREG 22,2669215774 7224 10716 >>>>> /home/mrm/mail/TRAVEL/.imap/priceline/dovecot.index.log >>>>> doveadm 46985 mrm 14u VREG 22,2669215774 3108 10715 >>>>> /home/mrm/mail/TRAVEL/.imap/priceline/dovecot.index.cache >>>>> doveadm 46985 mrm 15u VREG 22,2669215774 9236 10648 >>>>> /home/mrm/mail/TRAVEL/.imap/alamo/dovecot.index.cache >>>>> doveadm 46985 mrm 16u VREG 22,2669215774 174892 143343 >>>>> /home/mrm/mail/.imap/Sent/dovecot.index.cache >>>>> doveadm 46985 mrm 17u VREG 22,2669215774 32656 143058 >>>>> /home/mrm/mail/.imap/INBOX/dovecot.index.log >>>>> doveadm 46985 mrm 18u VREG 22,2669215774 0 
135848 >>>>> /home/mrm (zroot/home/mrm) >>>>> doveadm 46985 mrm 19u VREG 22,2669215774 35656 135336 >>>>> /home/mrm/mail/.imap/INBOX/dovecot.index >>>>> doveadm 46985 mrm 20u VREG 22,2669215774 0 135849 >>>>> /home/mrm (zroot/home/mrm) >>>>> doveadm 46985 mrm 21u KQUEUE 0xfffff80163b1ba00 >>>>> count=1, state=0 >>>>> doveadm 46985 mrm 22u IPv4 0xfffff805ea69a000 0t0 TCP >>>>> localhost:44730->localhost:9998 (ESTABLISHED) >>>>> doveadm 46985 mrm 25uR VREG 22,2669215774 32997612 4151 >>>>> /home/mrm/mail/Sent >>>>> thebighonker.lerctr.org >>>>> >>>>> >>>>> >>>>> On Sun, Oct 23, 2016 at 12:20 PM, Aki Tuomi >> wrote: >>>>>> According to the man page, the only way it can return EINVAL (22) is >> either >>>>>> a bad filter or a bad timeout. I can't see how the filter would be bad, >> so I'm >>>>>> guessing ts must be bad. Unfortunately I forgot to ask for it, so I am >>>>>> going to have to ask you to run it again and run >>>>>> >>>>>> p ts >>>>>> >>>>>> if that's valid, then the only thing that can be bad is the file >>>>>> descriptor 23.
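The `p ts` dump above ($1 = {tv_sec = 34389923520, tv_nsec = 140737488345872000}) points at an uninitialized timeout: kevent(2) rejects a timespec whose tv_nsec is outside [0, 999999999] with EINVAL. A rough sketch of that sanity check (the helper below is hypothetical, neither Dovecot nor kernel code):

```python
# Hypothetical re-creation of the timespec range check behind the
# EINVAL (errno 22) seen in the backtrace above -- illustrative only.
NSEC_PER_SEC = 1_000_000_000

def timespec_valid(tv_sec: int, tv_nsec: int) -> bool:
    # A relative kevent() timeout must be non-negative and have
    # tv_nsec strictly below one second.
    return tv_sec >= 0 and 0 <= tv_nsec < NSEC_PER_SEC

# Values from the `p ts` output above: clearly stack garbage.
print(timespec_valid(34389923520, 140737488345872000))  # False -> EINVAL
print(timespec_valid(0, 500_000_000))                   # True: valid 0.5s
```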
>>>>>> >>>>>> Aki >>>>>> >>>>>>> On October 23, 2016 at 7:42 PM Larry Rosenman >>>>>> wrote: >>>>>>> ok, gdb7 works: >>>>>>> (gdb) fr 6 >>>>>>> #6 0x00000008011a3e49 in io_loop_handler_run_internal >>>>>> (ioloop=0x801c214e0) >>>>>>> at ioloop-kqueue.c:131 >>>>>>> 131 i_panic("kevent(): %m"); >>>>>>> (gdb) p errno >>>>>>> $1 = 22 >>>>>>> (gdb) p ret >>>>>>> $2 = -1 >>>>>>> (gdb) p *ioloop >>>>>>> $3 = {prev = 0x801c21080, cur_ctx = 0x0, io_files = 0x801c4f980, >>>>>>> next_io_file = 0x0, timeouts = 0x801c19e60, timeouts_new = {arr = >>>>>> {buffer = >>>>>>> 0x801c5ac80, element_size = 8}, v = 0x801c5ac80, >>>>>>> v_modifiable = 0x801c5ac80}, handler_context = 0x801c19e80, >>>>>>> notify_handler_context = 0x0, max_fd_count = 0, time_moved_callback = >>>>>>> 0x800d53bb0 , >>>>>>> next_max_time = 1477240784, ioloop_wait_usecs = 29863, >>>>>> io_pending_count = >>>>>>> 1, running = 1, iolooping = 1} >>>>>>> (gdb) p *ctx >>>>>>> $4 = {kq = 22, deleted_count = 0, events = {arr = {buffer = >> 0x801c5acc0, >>>>>>> element_size = 32}, v = 0x801c5acc0, v_modifiable = 0x801c5acc0}} >>>>>>> (gdb) p *events >>>>>>> $5 = {ident = 23, filter = -1, flags = 0, fflags = 0, data = 8, >> udata = >>>>>>> 0x801c4f980} >>>>>>> (gdb) >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Sun, Oct 23, 2016 at 11:27 AM, Larry Rosenman >>>>> wrote: >>>>>>>> grrr. >>>>>>>> >>>>>>>> /home/mrm $ gdb /usr/local/bin/doveadm >>>>>>>> GNU gdb 6.1.1 [FreeBSD] >>>>>>>> Copyright 2004 Free Software Foundation, Inc. >>>>>>>> GDB is free software, covered by the GNU General Public License, and >>>>>> you >>>>>>>> are >>>>>>>> welcome to change it and/or distribute copies of it under certain >>>>>>>> conditions. >>>>>>>> Type "show copying" to see the conditions. >>>>>>>> There is absolutely no warranty for GDB. Type "show warranty" for >>>>>> details. >>>>>>>> This GDB was configured as "amd64-marcel-freebsd"... 
>>>>>>>> (gdb) run -D -vvvvvv index * >>>>>>>> Starting program: /usr/local/bin/doveadm -D -vvvvvv index * >>>>>>>> >>>>>>>> Program received signal SIGTRAP, Trace/breakpoint trap. >>>>>>>> Cannot remove breakpoints because program is no longer writable. >>>>>>>> It might be running in another process. >>>>>>>> Further execution is probably impossible. >>>>>>>> 0x0000000800624490 in ?? () >>>>>>>> (gdb) >>>>>>>> >>>>>>>> Ideas? >>>>>>>> >>>>>>>> >>>>>>>> On Sun, Oct 23, 2016 at 11:14 AM, Aki Tuomi >>>>>> wrote: >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> can you run doveadm in gdb, wait for it to crash, and then go to >>>>>> frame 6 >>>>>>>>> ( io_loop_handler_run_internal) and run >>>>>>>>> >>>>>>>>> p errno >>>>>>>>> p ret >>>>>>>>> p *ioloop >>>>>>>>> p *ctx >>>>>>>>> p *events >>>>>>>>> >>>>>>>>> Sorry but the crash doesn't make enough sense yet to me, we need to >>>>>>>>> determine what the invalid parameter is. >>>>>>>>> >>>>>>>>>> Larry Rosenman http://www.lerctr.org/~ler >>>>>>>>>> Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com >>>>>>>>>> US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 >>>>>>>> -- >>>>>>>> Larry Rosenman http://www.lerctr.org/~ler >>>>>>>> Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com >>>>>>>> US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 >>>>>>>> >>>>>>> -- >>>>>>> Larry Rosenman http://www.lerctr.org/~ler >>>>>>> Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com >>>>>>> US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 >>>>> -- >>>>> Larry Rosenman http://www.lerctr.org/~ler >>>>> Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com >>>>> US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 > > From skdovecot at smail.inf.fh-brs.de Tue Oct 25 10:19:08 2016 From: skdovecot at smail.inf.fh-brs.de (Steffen Kaiser) Date: Tue, 25 Oct 2016 12:19:08 +0200 (CEST) Subject: Problem to configure dovecot-ldap.conf.ext In-Reply-To: <1760129.UVaFhdmSfi@techz> References: <1760129.UVaFhdmSfi@techz> Message-ID: 
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Tue, 25 Oct 2016, G?nther J. Niederwimmer wrote: > I setup ldap (FreeIPA) to have a user for dovecot that can (read search > compare) all attributes that I need for dovecot. > > I must also have mailAlternateAddress > > When I make a ldapsearch with this user, I found all I need to configure > dovecot. > > doveadm auth test office > and > doveadm auth test office at examle.com > > with success authentication > > but when I make a > doveadm auth test info at example.co (mailAlternateAddress) I guess the missing 'm' in .co is a typo? Do you find doveadm user -u office doveadm user -u office at examle.com doveadm user -u info at example.co > I have a broken authentication > Can any give me a hint what is wrong, or is this not possible ? Show us your LDAP record of this user. > # Distinguished Name - the username used to login to the LDAP server. > # Leave it commented out to bind anonymously (useful with auth_bind=yes). > dn = uid=system,cn=sysaccounts,cn=etc,dc=example,dc=com > > # Password for LDAP server, if dn is specified. > dnpass = 'XXXXXXXXXXXXXX' > > # Use SASL binding instead of the simple binding. Note that this changes > # ldap_version automatically to be 3 if it's lower. Also note that SASL binds > # and auth_bind=yes don't work together. > sasl_bind = yes > # SASL mechanism name to use. > sasl_mech = gssapi > # SASL realm to use. > sasl_realm = EXAMPLE.COM > # SASL authorization ID, ie. the dnpass is for this "master user", but the > # dn is still the logged in user. Normally you want to keep this empty. > sasl_authz_id = imap/mx01.example.com at EXAMPLE.COM Dunno with SASL and Co. > # Use authentication binding for verifying password's validity. This works by > # logging into LDAP server using the username and password given by client. > # The pass_filter is used to find the DN for the user. Note that the pass_attrs > # is still used, only the password field is ignored in it. 
Before doing any > # search, the binding is switched back to the default DN. > auth_bind = yes > > # If authentication binding is used, you can save one LDAP request per login > # if users' DN can be specified with a common template. The template can use > # the standard %variables (see user_filter). Note that you can't > # use any pass_attrs if you use this setting. > # > # If you use this setting, it's a good idea to use a different > # dovecot-ldap.conf.ext for userdb (it can even be a symlink, just as long as > # the filename is different in userdb's args). That way one connection is used > # only for LDAP binds and another connection is used for user lookups. > # Otherwise the binding is changed to the default DN before each user lookup. > # > # For example: > # auth_bind_userdn = cn=%u,ou=people,o=org > # > auth_bind_userdn = uid=%n,cn=users,cn=accounts,dc=example,dc=com That one looks strange, you really have an account (uid=office at examle.com) ? > # Search scope: base, onelevel, subtree > scope = subtree > #scope = onelevel > > # User attributes are given in LDAP-name=dovecot-internal-name list. The > # internal names are: > # uid - System UID > # gid - System GID > # home - Home directory > # mail - Mail location > # > # There are also other special fields which can be returned, see > # http://wiki2.dovecot.org/UserDatabase/ExtraFields > #user_attrs = homeDirectory=home,uidNumber=uid,gidNumber=gid > user_attrs = uid=user,uid=home=/srv/vmail/%$,=uid=10000,=gid=10000 > > # Filter for user lookup. Some variables can be used (see > # http://wiki2.dovecot.org/Variables for full list): > # %u - username > # %n - user part in user at domain, same as %u if there's no domain > # %d - domain part in user at domain, empty if user there's no domain > user_filter = (&(objectClass=mailrecipient)(|(uid=%Ln)(mail=%Lu) > (mailAlternateAddress=%Lu))) If doveadm user -u info at example.co returns your entry, this filter is OK. 
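In that user_filter, %Ln expands to the lowercased user part of the login name and %Lu to the whole lowercased login name. A toy expander (a deliberate simplification -- real Dovecot supports many more variables and modifiers) shows the filter the LDAP server actually receives:

```python
# Toy expansion of Dovecot's %Ln / %Lu config variables (simplified
# sketch, not Dovecot's real variable engine).
def expand(template: str, user: str) -> str:
    local_part = user.split("@", 1)[0]
    return (template
            .replace("%Ln", local_part.lower())
            .replace("%Lu", user.lower()))

tmpl = ("(&(objectClass=mailrecipient)"
        "(|(uid=%Ln)(mail=%Lu)(mailAlternateAddress=%Lu)))")
print(expand(tmpl, "Info@Example.com"))
# (&(objectClass=mailrecipient)(|(uid=info)(mail=info@example.com)(mailAlternateAddress=info@example.com)))
```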
> # Password checking attributes: > # user: Virtual user name (user at domain), if you wish to change the > # user-given username to something else > # password: Password, may optionally start with {type}, eg. {crypt} > # There are also other special fields which can be returned, see > # http://wiki2.dovecot.org/PasswordDatabase/ExtraFields > pass_attrs = uid=user,userPassword=password,mailAlternateAddress=user You cannot return two values for user; I guess you'd like to have "uid", so pass_attrs = uid=user,userPassword=password > # Filter for password lookups > #pass_filter = (&(objectClass=posixAccount)(uid=%u)) > pass_filter = (&(objectClass=mailrecipient)(|(uid=%Ln)(mail=%Lu) > (mailAlternateAddress=%Lu))) Looks good, if doveadm user -u info at example.co returns something sensible, because the user filter is the same. > # Attributes and filter to get a list of all users > iterate_attrs = uid=user, mailAlternateAddress=user Same as pass_attrs. > iterate_filter = (objectClass=posixAccount) Looks strange, should be iterate_filter = (objectClass=mailrecipient) > # Default password scheme. "{scheme}" before password overrides this.
> # List of supported schemes is in: http://wiki2.dovecot.org/Authentication > #default_pass_scheme = CRYPT > > > - -- Steffen Kaiser -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEVAwUBWA8xnHz1H7kL/d9rAQKjlQf/VyK1ipVnt3B+NGwWlIc29MERp7Zy1DFI 8x7GKRFSwJ9pKRalreVL/D+3hI/mKzoqQOiaWG6QSNlX+zj1uu6FkpsiJrAmuJP2 uOObVjyS9DSw8zmU9wNJmqxUvWNTb857udnwAazsMbKge+ApKa4w8GmLUIyZXBZt oBziQZjbASlReaIGv8q+R8z5B0wUx9FRfqFuEY4N2mSudZMdf6kBsUXnFPTxWlEY kpIFpOFhfCi0dFRYduVQXhP9qR8BMOBwjm1NizZGTFgGSHgY2sgr4ouOKtoXHePh 28EvYzRY/FHvSKGDv3R8KVqnf6BJ03SkJ5+L0Smbr9XUg+1UuaQqkg== =0e2c -----END PGP SIGNATURE----- From gjn at gjn.priv.at Tue Oct 25 13:25:36 2016 From: gjn at gjn.priv.at (=?ISO-8859-1?Q?G=FCnther_J=2E?= Niederwimmer) Date: Tue, 25 Oct 2016 15:25:36 +0200 Subject: Problem to configure dovecot-ldap.conf.ext In-Reply-To: References: <1760129.UVaFhdmSfi@techz> Message-ID: <1890788.z5i5kZVShI@techz> Hello Steffen and List, Thanks for the answer and help, I mean I found the biggest problem it is "auth_bind_userdn = " please read the rest ;-) Am Dienstag, 25. Oktober 2016, 12:19:08 schrieb Steffen Kaiser: > On Tue, 25 Oct 2016, G?nther J. Niederwimmer wrote: > > I setup ldap (FreeIPA) to have a user for dovecot that can (read search > > compare) all attributes that I need for dovecot. > > > > I must also have mailAlternateAddress > > > > When I make a ldapsearch with this user, I found all I need to configure > > dovecot. > > > > doveadm auth test office > > and > > doveadm auth test office at examle.com > > > > with success authentication > > > > but when I make a > > doveadm auth test info at example.co (mailAlternateAddress) > > I guess the missing 'm' in .co is a typo? ;-) Yes > Do you find > doveadm user -u office > doveadm user -u office at examle.com > doveadm user -u info at example.com yes this is working with all user ? 
doveadm user -u office userdb: office user : office home : /srv/vmail/office uid : 10000 gid : 10000 doveadm user -u info at example.com userdb: info at example.com user : office home : /srv/vmail/office uid : 10000 gid : 10000 > > I have a broken authentication > > > > Can any give me a hint what is wrong, or is this not possible ? > > Show us your LDAP record of this user. this is a result from ldapsearch with dovecots special user, from the dovecot system! ldapsearch -w 'XXXXXXXXXXX' -h ipa.example.com -D 'uid=system,cn=sysaccounts,cn=etc,dc=example,dc=com' -s sub -b 'dc=example,dc=com' 'mail=office at example.com' I can also search for 'mailAlternateAddress=info at example.com' with the same result. # extended LDIF # # LDAPv3 # base with scope subtree # filter: mail=office at example.com # requesting: ALL # # office, users, accounts, example.com dn: uid=office,cn=users,cn=accounts,dc=example,dc=com st: AUSTRIA l: Salzburg postalCode: 5020 krbPasswordExpiration: 20380101000000Z krbLastPwdChange: 20160929133721Z memberOf: cn=ipausers,cn=groups,cn=accounts,dc=example,dc=com memberOf: cn=mailusers,cn=groups,cn=accounts,dc=example,dc=com mailAlternateAddress: info at example.com displayName:: R8O8bnRoZXIgSi4gTmllZGVyd2ltbWVy uid: office objectClass: ipaobject objectClass: person objectClass: top objectClass: ipasshuser objectClass: inetorgperson objectClass: mailrecipient objectClass: organizationalperson objectClass: krbticketpolicyaux objectClass: krbprincipalaux objectClass: inetuser objectClass: posixaccount objectClass: ipaSshGroupOfPubKeys objectClass: mepOriginEntry loginShell: /bin/bash initials: GN gecos:: R8O8bnRoZXIgSi4gTmllZGVyd2ltbWVy sn: Niederwimmer homeDirectory: /home/office mail: office at example.com krbPrincipalName: office at example.COM givenName:: R8O8bnRoZXIgSi4= cn:: R8O8bnRoZXIgSi4gTmllZGVyd2ltbWVy ipaUniqueID: 3a6e2256-8648-11e6-b45d-5254002cd3fc uidNumber: 1507800005 gidNumber: 1507800005 # search result search: 2 result: 0 Success # 
numResponses: 2 # numEntries: 1 > > # Distinguished Name - the username used to login to the LDAP server. > > # Leave it commented out to bind anonymously (useful with auth_bind=yes). > > dn = uid=system,cn=sysaccounts,cn=etc,dc=example,dc=com > > > > # Password for LDAP server, if dn is specified. > > dnpass = 'XXXXXXXXXXXXXX' > > > > # Use SASL binding instead of the simple binding. Note that this changes > > # ldap_version automatically to be 3 if it's lower. Also note that SASL > > binds # and auth_bind=yes don't work together. > > sasl_bind = yes > > # SASL mechanism name to use. > > sasl_mech = gssapi > > # SASL realm to use. > > sasl_realm = EXAMPLE.COM > > # SASL authorization ID, ie. the dnpass is for this "master user", but the > > # dn is still the logged in user. Normally you want to keep this empty. > > sasl_authz_id = imap/mx01.example.com at EXAMPLE.COM > > Dunno with SASL and Co. OK, OK this was a Test and I reverting this ;-). Now I have #sals_bind = yes This is my next Problem, to find out is this correct working on my system ;-). > > # Use authentication binding for verifying password's validity. This works > > by # logging into LDAP server using the username and password given by > > client. # The pass_filter is used to find the DN for the user. Note that > > the pass_attrs # is still used, only the password field is ignored in it. > > Before doing any # search, the binding is switched back to the default > > DN. > > auth_bind = yes > > > > # If authentication binding is used, you can save one LDAP request per > > login # if users' DN can be specified with a common template. The > > template can use # the standard %variables (see user_filter). Note that > > you can't > > # use any pass_attrs if you use this setting. > > # > > # If you use this setting, it's a good idea to use a different > > # dovecot-ldap.conf.ext for userdb (it can even be a symlink, just as long > > as # the filename is different in userdb's args). 
That way one connection > > is used # only for LDAP binds and another connection is used for user > > lookups. # Otherwise the binding is changed to the default DN before each > > user lookup. # > > # For example: > > # auth_bind_userdn = cn=%u,ou=people,o=org > > # > > auth_bind_userdn = uid=%n,cn=users,cn=accounts,dc=example,dc=com > > That one looks strange, you really have an account (uid=office at examle.com) > ? I mean I don't understand this in the Moment (?), but I can comment out this ? I make now also Tests with commented out "#auth_bind_userdn = uid=%n...." now the tests are WORKING !!! now I have to find out the correct syntax for auth_bind_userdn !!! when it is possible ? > > # Search scope: base, onelevel, subtree > > scope = subtree > > #scope = onelevel > > > > # User attributes are given in LDAP-name=dovecot-internal-name list. The > > # internal names are: > > # uid - System UID > > # gid - System GID > > # home - Home directory > > # mail - Mail location > > # > > # There are also other special fields which can be returned, see > > # http://wiki2.dovecot.org/UserDatabase/ExtraFields > > #user_attrs = homeDirectory=home,uidNumber=uid,gidNumber=gid > > user_attrs = uid=user,uid=home=/srv/vmail/%$,=uid=10000,=gid=10000 > > > > # Filter for user lookup. Some variables can be used (see > > # http://wiki2.dovecot.org/Variables for full list): > > # %u - username > > # %n - user part in user at domain, same as %u if there's no domain > > # %d - domain part in user at domain, empty if user there's no domain > > user_filter = (&(objectClass=mailrecipient)(|(uid=%Ln)(mail=%Lu) > > (mailAlternateAddress=%Lu))) > > If doveadm user -u info at example.co > returns your entry, this filter is OK. Yes, this filter is OK ;-) > > # Password checking attributes: > > # user: Virtual user name (user at domain), if you wish to change the > > # user-given username to something else > > # password: Password, may optionally start with {type}, eg. 
{crypt} > > # There are also other special fields which can be returned, see > > # http://wiki2.dovecot.org/PasswordDatabase/ExtraFields > > pass_attrs = uid=user,userPassword=password,mailAlternateAddress=user > > You cannot return two values for user; I guess you'd like to have "uid", so > > pass_attrs = uid=user,userPassword=password OK, I'll change it back; these are only tests to find the correct setup for dovecot. > > # Filter for password lookups > > #pass_filter = (&(objectClass=posixAccount)(uid=%u)) > > pass_filter = (&(objectClass=mailrecipient)(|(uid=%Ln)(mail=%Lu) > > (mailAlternateAddress=%Lu))) > > Looks good, if doveadm user -u info at example.co returns something sensible, > because the user filter is the same. :-) > > # Attributes and filter to get a list of all users > > iterate_attrs = uid=user, mailAlternateAddress=user > > Same as pass_attrs. > > > iterate_filter = (objectClass=posixAccount) > > Looks strange, should be > > iterate_filter = (objectClass=mailrecipient) Changed to your parameters. > > # Default password scheme. "{scheme}" before password overrides this. > > # List of supported schemes is in: http://wiki2.dovecot.org/Authentication > > #default_pass_scheme = CRYPT As I said before: with "auth_bind_userdn" commented out, the authentication is now also working with "mailAlternateAddress= xxxxxxxxx". Many thanks for the hint ;-) -- mit freundlichen Grüßen / best regards, Günther J. Niederwimmer From chead at chead.ca Tue Oct 25 06:10:35 2016 From: chead at chead.ca (Christopher Head) Date: Mon, 24 Oct 2016 23:10:35 -0700 Subject: ssl_options missing no_ticket documentation in example config Message-ID: <20161024231035.27a4b056@amdahl.home.chead.ca> Hello! I have a very minor bug to report. The ssl_options configuration directive takes a space-separated list of options, each of which must be in the set {"no_compression", "no_ticket"}, according to the 2.2.25 source code.
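For reference, enabling both options named above in conf.d/10-ssl.conf would look like this (a sketch based on the option names in the report; check it against the 10-ssl.conf shipped with your Dovecot version):

```
# conf.d/10-ssl.conf -- disable TLS compression and session tickets
ssl_options = no_compression no_ticket
```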
However, the file doc/example-config/conf.d/10-ssl.conf shipped in the tarball only mentions the "no_compression" option; it makes no mention of the "no_ticket" option. Oh, and by the way, the changelog linked from the download page is missing information for anything newer than 2.2.21. Please include me in replies; I am not subscribed to the list. -- Christopher Head -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 630 bytes Desc: OpenPGP digital signature URL: From skdovecot at smail.inf.fh-brs.de Wed Oct 26 05:58:27 2016 From: skdovecot at smail.inf.fh-brs.de (Steffen Kaiser) Date: Wed, 26 Oct 2016 07:58:27 +0200 (CEST) Subject: Problem to configure dovecot-ldap.conf.ext In-Reply-To: <1890788.z5i5kZVShI@techz> References: <1760129.UVaFhdmSfi@techz> <1890788.z5i5kZVShI@techz> Message-ID:
> > doveadm user -u office > userdb: office > user : office > home : /srv/vmail/office > uid : 10000 > gid : 10000 > > doveadm user -u info at example.com > userdb: info at example.com > user : office > home : /srv/vmail/office > uid : 10000 > gid : 10000 > > >>> I have a broken authentication >>> >>> Can any give me a hint what is wrong, or is this not possible ? >> >> Show us your LDAP record of this user. > this is a result from ldapsearch with dovecots special user, from the dovecot > system! > > ldapsearch -w 'XXXXXXXXXXX' -h ipa.example.com -D > 'uid=system,cn=sysaccounts,cn=etc,dc=example,dc=com' -s sub -b > 'dc=example,dc=com' 'mail=office at example.com' > > I can also search for 'mailAlternateAddress=info at example.com' with the same > result. > > # extended LDIF > # > # LDAPv3 > # base with scope subtree > # filter: mail=office at example.com > # requesting: ALL > # > > # office, users, accounts, example.com > dn: uid=office,cn=users,cn=accounts,dc=example,dc=com > st: AUSTRIA > l: Salzburg > postalCode: 5020 > krbPasswordExpiration: 20380101000000Z > krbLastPwdChange: 20160929133721Z > memberOf: cn=ipausers,cn=groups,cn=accounts,dc=example,dc=com > memberOf: cn=mailusers,cn=groups,cn=accounts,dc=example,dc=com > mailAlternateAddress: info at example.com > displayName:: R8O8bnRoZXIgSi4gTmllZGVyd2ltbWVy > uid: office > objectClass: ipaobject > objectClass: person > objectClass: top > objectClass: ipasshuser > objectClass: inetorgperson > objectClass: mailrecipient > objectClass: organizationalperson > objectClass: krbticketpolicyaux > objectClass: krbprincipalaux > objectClass: inetuser > objectClass: posixaccount > objectClass: ipaSshGroupOfPubKeys > objectClass: mepOriginEntry > loginShell: /bin/bash > initials: GN > gecos:: R8O8bnRoZXIgSi4gTmllZGVyd2ltbWVy > sn: Niederwimmer > homeDirectory: /home/office > mail: office at example.com > krbPrincipalName: office at example.COM > givenName:: R8O8bnRoZXIgSi4= > cn:: R8O8bnRoZXIgSi4gTmllZGVyd2ltbWVy > 
ipaUniqueID: 3a6e2256-8648-11e6-b45d-5254002cd3fc > uidNumber: 1507800005 > gidNumber: 1507800005 > > # search result > search: 2 > result: 0 Success > > # numResponses: 2 > # numEntries: 1 > >>> # For example: >>> # auth_bind_userdn = cn=%u,ou=people,o=org >>> # >>> auth_bind_userdn = uid=%n,cn=users,cn=accounts,dc=example,dc=com >> >> That one looks strange, you really have an account (uid=office at examle.com) >> ? > > I mean I don't understand this in the Moment (?), but I can comment out this ? Well, you must comment this setting, because: http://wiki2.dovecot.org/AuthDatabase/LDAP/AuthBinds?highlight=%28auth_bind_userdn%29 "If you're using DN template, pass_attrs and pass_filter settings are completely ignored." That is: Only if *all* your users log in using their "uid" attribute and are located at a single predictable hierarchie level, you can use this in order to avoid the LDAP query with passdb_filter to locate the user's DN. > I make now also Tests with commented out "#auth_bind_userdn = uid=%n...." > > now the tests are WORKING !!! > > now I have to find out the correct syntax for auth_bind_userdn !!! when it is > possible ? - -- Steffen Kaiser -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEVAwUBWBBGA3z1H7kL/d9rAQKsEgf8C0xuesf4YJYD6sHF1eMMhAbQew3I9gP1 TxSVkRJP2VYZM4mkIfPEnyK0GOGU1uri8yT65gQLSxZCg+R77UZjIls9pUsZ3Zqq Ko/jBWbXzphglHlppLQ6EiLnaRfiLPT5dO7EynQm7RiFWiwhc4mL9Gc8w0X6Gye8 copDqauC3hm9LHtxfcQe28K82A0WuJHHxyz7AchT38N4EzzkAp5jOeNvt4fV4L+s C9Juxz2uVE5/qhHE1/w3BWY0dpy+1SRdVoXHX8iix4Lz3STUcVDSuiYptNhLjKPv 2KEF/7gPRONCz7b6wDqIfVDoYrBYcueACASdtg3re/xrVjbh7fsG/Q== =wO5h -----END PGP SIGNATURE----- From gandalf.corvotempesta at gmail.com Wed Oct 26 06:27:46 2016 From: gandalf.corvotempesta at gmail.com (Gandalf Corvotempesta) Date: Wed, 26 Oct 2016 08:27:46 +0200 Subject: Server migration In-Reply-To: References: Message-ID: Il 24 ott 2016 5:11 PM, "Michael Seevogel" ha scritto: > I meant your old server. 
With "old" I was expecting something like Debian Sarge or SuSE Linux 9.3. That would have been really old, but since you are on Debian Squeeze, I would definitely choose the route of an upgraded Dovecot version and its replication service. Is 2.1 from squeeze-backports enough to start the replication over to a newer server with dovecot 2.2? Is this supported, or must both servers run the same version? I've looked around, but the replication system is still not clear to me. Is there any howto explaining this in detail? From aki.tuomi at dovecot.fi Wed Oct 26 06:30:02 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Wed, 26 Oct 2016 09:30:02 +0300 Subject: Server migration In-Reply-To: References: Message-ID: <1879ea04-a29c-bf04-197c-4f8ffc0bf9bc@dovecot.fi> On 26.10.2016 09:27, Gandalf Corvotempesta wrote: > On 24 Oct 2016 at 5:11 PM, "Michael Seevogel" wrote: >> I meant your old server. With "old" I was expecting something like Debian > Sarge or SuSE Linux 9.3. That would have been really old, but since you are > on Debian Squeeze, I would definitely choose the route of an upgraded > Dovecot version and its replication service. > > Is 2.1 from squeeze-backports enough to start the replication over to a newer > server with dovecot 2.2? Is this supported, or must both servers run the same > version? > > I've looked around, but the replication system is still not clear to me. > Is there any howto explaining this in detail? Hi! I would recommend using the same major release with replication. If you are using maildir++ format, it should be enough to copy all the maildir files over and start dovecot on the new server.
Aki Tuomi Dovecot Oy From gandalf.corvotempesta at gmail.com Wed Oct 26 06:32:00 2016 From: gandalf.corvotempesta at gmail.com (Gandalf Corvotempesta) Date: Wed, 26 Oct 2016 08:32:00 +0200 Subject: Shared storage for dovecot cluster Message-ID: As I'm planning some server migrations and a new mail architecture, I would like to create an HA cluster. Any advice on which kind of shared storage I should use? Is Gluster's performance with small files good enough for Dovecot? Any other solution? It's mandatory to avoid any split-brains or similar, thus the replication must be done on at least 3 storage servers. From gandalf.corvotempesta at gmail.com Wed Oct 26 06:38:03 2016 From: gandalf.corvotempesta at gmail.com (Gandalf Corvotempesta) Date: Wed, 26 Oct 2016 08:38:03 +0200 Subject: Server migration In-Reply-To: <1879ea04-a29c-bf04-197c-4f8ffc0bf9bc@dovecot.fi> References: <1879ea04-a29c-bf04-197c-4f8ffc0bf9bc@dovecot.fi> Message-ID: On 26 Oct 2016 at 8:30 AM, "Aki Tuomi" wrote: > I would recommend using the same major release with replication. > > If you are using maildir++ format, it should be enough to copy all the > maildir files over and start dovecot on the new server. > This is much easier than dovecot replication, as I can start immediately with no need to upgrade the old server. My only question is: how do I manage the email received on the new server during the last rsync phase? As I wrote previously, I have some huge maildirs where rsync takes hours to scan all files. I can't keep the server down for hours or customers won't receive any new emails, so after the initial sync I have to move the mailbox to the new server (only for deliveries). In this way I'll not lose any emails, but the new server has newer data than the old server. When doing rsync with --delete, the new mails would be removed. A solution could be to disable customer access to the new server and put the "new" directory in the rsync excludes.
Doing this won't delete the newly received emails as the "new" directory isn't synced. and no one osd able to move from new to cur as users are blocked for login. From aki.tuomi at dovecot.fi Wed Oct 26 06:57:35 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Wed, 26 Oct 2016 09:57:35 +0300 Subject: Server migration In-Reply-To: References: <1879ea04-a29c-bf04-197c-4f8ffc0bf9bc@dovecot.fi> Message-ID: <2ca2e088-5c17-33c0-5136-951aebeb2c70@dovecot.fi> On 26.10.2016 09:38, Gandalf Corvotempesta wrote: > Il 26 ott 2016 8:30 AM, "Aki Tuomi" ha scritto: >> I would recommend using same major release with replication. >> >> If you are using maildir++ format, it should be enough to copy all the >> maildir files over and start dovecot on new server. >> > This is much easier than dovecot replication as i can start immedialy with > no need to upgrade the old server > > my only question is: how to manage the email received on the new server > during the last rsync phase? > As i wrote previously, i have some huge maildirs where rsync take hours to > scan all files > i can't keep the server down for hours or customers won't receive any new > emails, so, after the initial sync i have to move the mailbox on the new > server (only for deliveries) . In this way I'll not loose any emails but > the new servers as newer data than the old server. > When doing rsync with --delete, the news mails would be removed > > A solution could be to disable customer access to the new server and put > "new" directory in rsync exclude. Doing this won't delete the newly > received emails as the "new" directory isn't synced. > and no one osd able to move from new to cur as users are blocked for login. If you are moving from 1.x to 2.x, I think you should make some trials first, and preferably move the user one at a time, blocking access to old server/new server during move. It is very forklift upgrade, much danger. 
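[Editor's note] The cutover idea in this exchange — a final `rsync --delete` pass that excludes the Maildir `new/` directory, so mail delivered on the new server during the sync is protected from deletion — can be sketched locally. Local temp directories stand in for the old and new server paths; all filenames are hypothetical. Note that with `--delete`, excluded paths on the receiving side are protected from deletion unless `--delete-excluded` is given, which is exactly the behavior relied on here.

```shell
#!/bin/sh
# Sketch of the final cutover sync: old mail is copied over, but the
# excluded new/ directory on the destination survives --delete.
set -eu
command -v rsync >/dev/null 2>&1 || { echo "rsync not available; skipping sketch"; exit 0; }
old=$(mktemp -d); cur=$(mktemp -d)
mkdir -p "$old/cur" "$old/new" "$cur/cur" "$cur/new"
echo m1 > "$old/cur/1477468800.M1.host"   # old mail to migrate
echo m2 > "$cur/new/1477469000.M2.host"   # delivered on the new server meanwhile
rsync -a --delete --exclude='new/' "$old/" "$cur/"
# Old mail arrived, and the freshly delivered message was NOT deleted:
[ -f "$cur/cur/1477468800.M1.host" ]
[ -f "$cur/new/1477469000.M2.host" ]
echo "cutover sync ok"
```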
Aki From gandalf.corvotempesta at gmail.com Wed Oct 26 07:29:11 2016 From: gandalf.corvotempesta at gmail.com (Gandalf Corvotempesta) Date: Wed, 26 Oct 2016 09:29:11 +0200 Subject: Server migration In-Reply-To: <2ca2e088-5c17-33c0-5136-951aebeb2c70@dovecot.fi> References: <1879ea04-a29c-bf04-197c-4f8ffc0bf9bc@dovecot.fi> <2ca2e088-5c17-33c0-5136-951aebeb2c70@dovecot.fi> Message-ID: 2016-10-26 8:57 GMT+02:00 Aki Tuomi : > If you are moving from 1.x to 2.x, I think you should make some trials > first, and preferably move the user one at a time, blocking access to > old server/new server during move. It is very forklift upgrade, much danger. Yes, I'll do some test migration before moving the whole server. Maildir structure isn't changed between 1.x and 2.x, thus all emails should be safe. I have to test the new 2.2 configuration to see if existing users are able to log-in but how can I test if existing client would be able to preserve the mail ids without downloading everything again? From forondarena at gmail.com Wed Oct 26 08:14:00 2016 From: forondarena at gmail.com (Luis Ugalde) Date: Wed, 26 Oct 2016 10:14:00 +0200 Subject: Too many references: cannot splice In-Reply-To: References: Message-ID: Hi, Could you please have a look at https://lkml.org/lkml/2016/2/2/538 and see if this makes any sense to you? I've been checking kernel changes between linux_3.16.7 and linux_3.16.36, and this has popped out. Could this be the reason for the "too many references" errors? Regards, Luis Ugalde. On Thu, Oct 13, 2016 at 3:47 PM, Luis Ugalde wrote: > Hi, > > > A while ago I sent an email regarding these "*ETOOMANYREFS* Too many > references: cannot splice." that we've seen since Debian updated the Jessie > kernel to > > 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt20-1+deb8u3 (2016-01-17) x86_64 > > while older kernels, like 3.16.0-4-amd64 #1 SMP Debian > 3.16.7-ckt11-1+deb8u6 (2015-11-09) x86_64 showed no errors at all. 
> > I was wondering if no one else is getting these errors, or if you know any > workarounds that might probe useful, apart from downgrading the kernel. > > > I would say that the infrastructure we're running is quite standard, with > directors balancing users to NFS backed dovecot servers. > > > Best regards, > > Luis Ugalde. > > > > From dezillium at dezillium.com Wed Oct 26 10:56:36 2016 From: dezillium at dezillium.com (deZillium) Date: Wed, 26 Oct 2016 13:56:36 +0300 Subject: Replication with SSL Message-ID: <79c1c17b-cc85-e36a-ef58-0b0c2ec5c395@dezillium.com> Hello, - Set up a pair of servers according to http://wiki2.dovecot.org/Replication -Enabled SSL for both servers - Dovecot version: 2.2.13 (Debian 8.6) I couldn't find an option to specify the certificate that doveadm should use when connecting to the other server. Both servers have hostnames that are different, as verified by dovecot --hostdomain(as per the instructions) but use a common certificate when emailclients connect to them (high availability setup). Yes, single server login works as expected, been working for the past few years :-). Setting up a custom ssl_client_ca_file doesn't work, since doveadm doesn't know which certificate it should send when connecting to the other doveadm. Setting the ssl_client_ca_dir tothe directory with the global CAsdoesn't work either, since doveadm doesn't use the hostname that dovecot actually uses. The custom self-signed CA works when used outside dovecot(mysql for example). Is there any configuration thatneeds to be changed in order for doveadm to use a custom self signed certificate? 
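[Editor's note] For readers landing on this replication-over-SSL question: the wiki's replication setup, with TLS on the doveadm connection, is roughly the fragment below. This is a sketch along the lines of the 2.2 documentation, not a verified answer to the client-certificate question raised above — the hostnames, port, and paths are placeholders, and it only covers server-side TLS plus CA validation on the connecting side via `ssl_client_ca_file`/`ssl_client_ca_dir`.

```
# On each replica (values are hypothetical):
doveadm_port = 12345
doveadm_password = some-shared-secret

plugin {
  # tcps: = dsync replication over TLS to the peer
  mail_replica = tcps:other-replica.example.com
}

service doveadm {
  inet_listener {
    port = 12345
    ssl = yes          # served with the global ssl_cert / ssl_key
  }
}

# CA used to validate the peer's certificate on outgoing connections
ssl_client_ca_file = /etc/dovecot/replication-ca.crt
```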
Thank you From tss at iki.fi Wed Oct 26 10:59:49 2016 From: tss at iki.fi (Timo Sirainen) Date: Wed, 26 Oct 2016 13:59:49 +0300 Subject: Too many references: cannot splice In-Reply-To: References: Message-ID: <47C2075B-41BD-4929-ADC6-F4A3BC8AB420@iki.fi> On 26 Oct 2016, at 11:14, Luis Ugalde wrote: > > Hi, > > Could you please have a look at https://lkml.org/lkml/2016/2/2/538 and see > if this makes any sense to you? I've been checking kernel changes > between linux_3.16.7 and linux_3.16.36, and this has popped out. Could this > be the reason for the "too many references" errors? Does the attached patch help? -------------- next part -------------- A non-text attachment was scrubbed... Name: diff Type: application/octet-stream Size: 869 bytes Desc: not available URL: -------------- next part -------------- > > Regards, > > Luis Ugalde. > > On Thu, Oct 13, 2016 at 3:47 PM, Luis Ugalde wrote: > >> Hi, >> >> >> A while ago I sent an email regarding these "*ETOOMANYREFS* Too many >> references: cannot splice." that we've seen since Debian updated the Jessie >> kernel to >> >> 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt20-1+deb8u3 (2016-01-17) x86_64 >> >> while older kernels, like 3.16.0-4-amd64 #1 SMP Debian >> 3.16.7-ckt11-1+deb8u6 (2015-11-09) x86_64 showed no errors at all. >> >> I was wondering if no one else is getting these errors, or if you know any >> workarounds that might probe useful, apart from downgrading the kernel. >> >> >> I would say that the infrastructure we're running is quite standard, with >> directors balancing users to NFS backed dovecot servers. >> >> >> Best regards, >> >> Luis Ugalde. >> >> >> >> From arekm at maven.pl Wed Oct 26 11:26:00 2016 From: arekm at maven.pl (Arkadiusz =?utf-8?q?Mi=C5=9Bkiewicz?=) Date: Wed, 26 Oct 2016 13:26:00 +0200 Subject: multiple SSL certificates story Message-ID: <201610261326.00636.arekm@maven.pl> Hi. Little story :-) I'm playing with dovecot 2.2.25 and multiple SSL certificates. 
~7000 certificates which are loaded twice, so my dovecot has ~14 000 certificate pairs (14k key + 14k cert) in config. 14 000 local_name entries. Like these: local_name imap.example.com { ssl_cert = References: Message-ID: <22e28706-ea4a-0ce7-b7ce-d3ba16b1ad5d@dovecot.fi> On 26.10.2016 13:48, Christian Ehrhardt wrote: > Hi, > I was wondering about a crash when building dovecot 2.2.25 on latest > Ubuntu. > I wondered as I've had the same source building on Debian just fine. > > Some debugging led me to this weird behavior: > Using this gdb command file called autoreportissue in my case: > break dcrypt_initialize > commands > p dcrypt_vfs > p &dcrypt_vfs > watch dcrypt_vfs > c > end > break dcrypt_set_vfs > commands > p dcrypt_vfs > p &dcrypt_vfs > c > end > r > > Running test-crypto on Debian and Ubuntu reported those two behaviours: > gdb -d /root/dovecot-2.2.25/src/ -x autoreportissue ./test-crypto > > Good: > Breakpoint 1, dcrypt_initialize (backend=0x555555587c02 "openssl", > set=0x0, error_r=0x0) at dcrypt.c:15 > 15 if (dcrypt_vfs != NULL) { > $1 = (struct dcrypt_vfs *) 0x0 > $2 = (struct dcrypt_vfs **) 0x555555796370 > Hardware watchpoint 3: dcrypt_vfs > Breakpoint 2, dcrypt_set_vfs (vfs=0x7ffff7835020 ) > at dcrypt.c:56 > 56 dcrypt_vfs = vfs; > $3 = (struct dcrypt_vfs *) 0x0 > $4 = (struct dcrypt_vfs **) 0x555555796370 > Hardware watchpoint 3: dcrypt_vfs > Old value = (struct dcrypt_vfs *) 0x0 > New value = (struct dcrypt_vfs *) 0x7ffff7835020 > dcrypt_set_vfs (vfs=0x7ffff7835020 ) at dcrypt.c:57 > 57 } > > Bad: > Breakpoint 1, dcrypt_initialize (backend=0x555555589f02 "openssl", > set=0x0, error_r=0x0) at dcrypt.c:11 > 11 { > $1 = (struct dcrypt_vfs *) 0x0 > $2 = (struct dcrypt_vfs **) 0x555555798370 > Hardware watchpoint 3: dcrypt_vfs > Breakpoint 2, dcrypt_set_vfs (vfs=0x7ffff780a020 ) > at dcrypt.c:56 > 56 dcrypt_vfs = vfs; > $3 = (struct dcrypt_vfs *) 0x0 > $4 = (struct dcrypt_vfs **) 0x7ffff780a890 > Panic: file dcrypt.c: line 34 (dcrypt_initialize): 
assertion failed: > (dcrypt_vfs != NULL) > Error: Raw backtrace: > /root/dovecot-2.2.25/src/lib-dcrypt/test-crypto(+0x15f7c) > [0x555555569f7c] -> > /root/dovecot-2.2.25/src/lib-dcrypt/test-crypto(default_error_handler+0) > [0x55555556a030] -> > /root/dovecot-2.2.25/src/lib-dcrypt/test-crypto(i_fatal+0) > [0x55555556a2ff] -> > /root/dovecot-2.2.25/src/lib-dcrypt/test-crypto(dcrypt_initialize+0x140) > [0x55555555f030] -> > /root/dovecot-2.2.25/src/lib-dcrypt/test-crypto(main+0x23) > [0x55555556706d] -> > /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1) > [0x7ffff782d3f1] -> > /root/dovecot-2.2.25/src/lib-dcrypt/test-crypto(_start+0x2a) > [0x55555555edea] > Program received signal SIGABRT, Aborted. > > One can see that in the bad case the pointer of dcrypt_vfs is pointing > to something of the scope of the .libs/libdcrypt_openssl.so file and > not the dcrypt_initialize of test-crypto. > > That made me wonder even more - where would this issue of variable > scope come from. After more debugging I found that the linker flag > "-Bsymbolic-functions" is the reason. This is default on recent > Ubuntu, but not on Debian (?yet?). > > Eventually what happens is that the dcrypt_vfs becomes part of the > .libs/libdcrypt_openssl.so. So the call from there to dcrypt_set_vfs > ends up setting not the expected variable. > I was unable to come up with a reasonable fix since I'm not enough > into your sublib structure. > > For now I assume I'm gonna build the package stripping this flag in > Ubuntu. > But long term I think dovecot should fix it to work with that compiler > flag. > Therefore the report to make you aware. > > > P.S. thanks to the dovecot community for having unit tests that find > this at build time! > > -- > Christian Ehrhardt > Software Engineer, Ubuntu Server > Canonical Ltd Hi! Thank you for reporting this, we'll look into it. 
Aki Tuomi Dovecot oy From arekm at maven.pl Wed Oct 26 12:30:18 2016 From: arekm at maven.pl (Arkadiusz =?utf-8?q?Mi=C5=9Bkiewicz?=) Date: Wed, 26 Oct 2016 14:30:18 +0200 Subject: multiple SSL certificates story In-Reply-To: <201610261326.00636.arekm@maven.pl> References: <201610261326.00636.arekm@maven.pl> Message-ID: <201610261430.18909.arekm@maven.pl> On Wednesday 26 of October 2016, Arkadiusz Mi?kiewicz wrote: > What can be done to make it work and how? Don't know internals - but could dovecot do similar job as exim. I mean keep big config, store things as strings just like now: local_name imap.example.com { ssl_cert = References: <201610261326.00636.arekm@maven.pl> <201610261430.18909.arekm@maven.pl> Message-ID: <5c33f178-5302-1b02-8563-8c01aed790d5@dovecot.fi> On 26.10.2016 15:30, Arkadiusz Mi?kiewicz wrote: > On Wednesday 26 of October 2016, Arkadiusz Mi?kiewicz wrote: > >> What can be done to make it work and how? > Don't know internals - but could dovecot do similar job as exim. I mean keep > big config, store things as strings just like now: > > local_name imap.example.com { > ssl_cert = ssl_key = } > > but defer actual certificate loading to a moment when client connects and we > know it's TLS SNI name? > It is non-trivial change, but we'll take note and see if it could be implemented. OpenSSL supports this via SSL_CTX_set_tlsext_servername_callback(), but doing it is another thing. 
Aki From jkamp at amazon.nl Wed Oct 26 14:52:59 2016 From: jkamp at amazon.nl (=?UTF-8?Q?John_van_der_Kamp?=) Date: Wed, 26 Oct 2016 14:52:59 +0000 Subject: Subscription not immediately reflected In-Reply-To: <010001579fdf22e0-5a61f58e-71ee-4634-a302-828c62fd5453-000000@email.amazonses.com> References: <010001579fdf22e0-5a61f58e-71ee-4634-a302-828c62fd5453-000000@email.amazonses.com> Message-ID: <01000158017aded5-77d3ee45-3b1b-4c5c-bef3-023ff676e37d-000000@email.amazonses.com> I was able to find some time to debug this more, and I found the change that breaks it was introduced in 2.2.25: If I revert 18856082d632ac60996637547098688148826b5a from release-2.2.25 branch, the test works again. John -----Original Message----- From: dovecot [mailto:dovecot-bounces at dovecot.org] On Behalf Of John van der Kamp Sent: Friday, 7 October, 2016 18:00 To: dovecot at dovecot.org Subject: Subscription not immediately reflected Hello, ? I noticed that somewhere between 2.2.22 and 2.2.25 the workings of subscriptions seem to have changed. In version 2.2.25, when a client subscribes to a folder, and then does an LSUB command, it will not see that subscribed folder. If you retry the LSUB command, the change is there. Same with unsubscribes. In version 2.2.22 I did not see this weird behavior. ? 
John From aki.tuomi at dovecot.fi Thu Oct 27 09:47:08 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Thu, 27 Oct 2016 12:47:08 +0300 Subject: Subscription not immediately reflected In-Reply-To: <01000158017aded5-77d3ee45-3b1b-4c5c-bef3-023ff676e37d-000000@email.amazonses.com> References: <010001579fdf22e0-5a61f58e-71ee-4634-a302-828c62fd5453-000000@email.amazonses.com> <01000158017aded5-77d3ee45-3b1b-4c5c-bef3-023ff676e37d-000000@email.amazonses.com> Message-ID: On 26.10.2016 17:52, John van der Kamp wrote: > I was able to find some time to debug this more, and I found the change that breaks it was introduced in 2.2.25: > If I revert 18856082d632ac60996637547098688148826b5a from release-2.2.25 branch, the test works again. > > John > > -----Original Message----- > From: dovecot [mailto:dovecot-bounces at dovecot.org] On Behalf Of John van der Kamp > Sent: Friday, 7 October, 2016 18:00 > To: dovecot at dovecot.org > Subject: Subscription not immediately reflected > > Hello, > > > I noticed that somewhere between 2.2.22 and 2.2.25 the workings of subscriptions seem to have changed. > > In version 2.2.25, when a client subscribes to a folder, and then does an LSUB command, it will not see that subscribed folder. > > If you retry the LSUB command, the change is there. > > Same with unsubscribes. > > In version 2.2.22 I did not see this weird behavior. > > > John > Does it work if you issue NOOP in the middle? Aki From tss at iki.fi Thu Oct 27 09:55:14 2016 From: tss at iki.fi (Timo Sirainen) Date: Thu, 27 Oct 2016 12:55:14 +0300 Subject: Subscription not immediately reflected In-Reply-To: <01000158017aded5-77d3ee45-3b1b-4c5c-bef3-023ff676e37d-000000@email.amazonses.com> References: <010001579fdf22e0-5a61f58e-71ee-4634-a302-828c62fd5453-000000@email.amazonses.com> <01000158017aded5-77d3ee45-3b1b-4c5c-bef3-023ff676e37d-000000@email.amazonses.com> Message-ID: <602C8BF8-7ECA-45B4-8AD9-E54C6476DAFF@iki.fi> I can't reproduce this. 
Can you send your doveconf -n output and also an example IMAP session showing what goes wrong? I tested with Maildir and mdbox, and with and without mailbox_list_index=yes: x lsub "" * * LSUB () "/" INBOX x OK Lsub completed (0.000 + 0.000 secs). x subscribe Trash x OK Subscribe completed (0.000 + 0.000 secs). x lsub "" * * LSUB () "/" INBOX * LSUB (\Trash) "/" Trash x OK Lsub completed (0.000 + 0.000 secs). > On 26 Oct 2016, at 17:52, John van der Kamp wrote: > > I was able to find some time to debug this more, and I found the change that breaks it was introduced in 2.2.25: > If I revert 18856082d632ac60996637547098688148826b5a from release-2.2.25 branch, the test works again. > > John > > -----Original Message----- > From: dovecot [mailto:dovecot-bounces at dovecot.org] On Behalf Of John van der Kamp > Sent: Friday, 7 October, 2016 18:00 > To: dovecot at dovecot.org > Subject: Subscription not immediately reflected > > Hello, > > > I noticed that somewhere between 2.2.22 and 2.2.25 the workings of subscriptions seem to have changed. > > In version 2.2.25, when a client subscribes to a folder, and then does an LSUB command, it will not see that subscribed folder. > > If you retry the LSUB command, the change is there. > > Same with unsubscribes. > > In version 2.2.22 I did not see this weird behavior. > > > John > From tss at iki.fi Thu Oct 27 10:07:05 2016 From: tss at iki.fi (Timo Sirainen) Date: Thu, 27 Oct 2016 13:07:05 +0300 Subject: Subscription not immediately reflected In-Reply-To: <602C8BF8-7ECA-45B4-8AD9-E54C6476DAFF@iki.fi> References: <010001579fdf22e0-5a61f58e-71ee-4634-a302-828c62fd5453-000000@email.amazonses.com> <01000158017aded5-77d3ee45-3b1b-4c5c-bef3-023ff676e37d-000000@email.amazonses.com> <602C8BF8-7ECA-45B4-8AD9-E54C6476DAFF@iki.fi> Message-ID: <8E3C6DC1-531B-4B33-97A9-0ADEEBA18003@iki.fi> > On 27 Oct 2016, at 12:55, Timo Sirainen wrote: > > I can't reproduce this. 
Can you send your doveconf -n output and also an example IMAP session showing what goes wrong? I tested with Maildir and mdbox, and with and without mailbox_list_index=yes: > > x lsub "" * > * LSUB () "/" INBOX > x OK Lsub completed (0.000 + 0.000 secs). > x subscribe Trash > x OK Subscribe completed (0.000 + 0.000 secs). > x lsub "" * > * LSUB () "/" INBOX > * LSUB (\Trash) "/" Trash > x OK Lsub completed (0.000 + 0.000 secs). Although you could try if the attached patch happens to help? I think you'd then have to be using NFS or some other remote storage where time is >1 seconds different from Dovecot server's time. -------------- next part -------------- A non-text attachment was scrubbed... Name: diff Type: application/octet-stream Size: 2046 bytes Desc: not available URL: -------------- next part -------------- > >> On 26 Oct 2016, at 17:52, John van der Kamp wrote: >> >> I was able to find some time to debug this more, and I found the change that breaks it was introduced in 2.2.25: >> If I revert 18856082d632ac60996637547098688148826b5a from release-2.2.25 branch, the test works again. >> >> John >> >> -----Original Message----- >> From: dovecot [mailto:dovecot-bounces at dovecot.org] On Behalf Of John van der Kamp >> Sent: Friday, 7 October, 2016 18:00 >> To: dovecot at dovecot.org >> Subject: Subscription not immediately reflected >> >> Hello, >> >> >> I noticed that somewhere between 2.2.22 and 2.2.25 the workings of subscriptions seem to have changed. >> >> In version 2.2.25, when a client subscribes to a folder, and then does an LSUB command, it will not see that subscribed folder. >> >> If you retry the LSUB command, the change is there. >> >> Same with unsubscribes. >> >> In version 2.2.22 I did not see this weird behavior. 
>> >> >> John >> From jkamp at amazon.nl Thu Oct 27 11:21:33 2016 From: jkamp at amazon.nl (=?UTF-8?Q?John_van_der_Kamp?=) Date: Thu, 27 Oct 2016 11:21:33 +0000 Subject: Subscription not immediately reflected In-Reply-To: <8E3C6DC1-531B-4B33-97A9-0ADEEBA18003@iki.fi> References: <010001579fdf22e0-5a61f58e-71ee-4634-a302-828c62fd5453-000000@email.amazonses.com> Message-ID: <0100015805dfa823-41033e19-2b98-4ac7-bce0-16d7cff8a300-000000@email.amazonses.com> The mail isn't on NFS, so I don't think that?s the problem. Probably related to my setup, because I use the imapc proxy settings: mail_location = imapc:~/imapc imapc_host = 192.168.1.2 imapc_port = 143 passdb { driver = imap args = host=192.168.1.2 port=143 default_fields = userdb_imapc_user=%u userdb_imapc_password=%w } userdb { driver = prefetch } The subscribe commands, when directly executed to the proxied IMAP server work as expected. Output of my test: 18:01.48 > IMFE1 LOGIN "username" "password" 18:01.72 < * CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS THREAD=ORDEREDSUBJECT MULTIAPPEND URL-PARTIAL CATENATE UNSELECT CHILDREN NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE QRESYNC ESEARCH ESORT SEARCHRES WITHIN CONTEXT=SEARCH LIST-STATUS BINARY MOVE 18:01.72 < IMFE1 OK Logged in 18:01.72 > IMFE2 LIST "" "" 18:01.72 < * LIST (\Noselect) "/" "" 18:01.72 < IMFE2 OK List completed (0.000 + 0.000 secs). 18:01.72 > IMFE3 CREATE subway 18:01.76 < IMFE3 OK Create completed (0.000 + 0.000 + 0.041 secs). 18:01.76 > IMFE4 LSUB "" * 18:01.81 < * LSUB () "/" INBOX 18:01.81 < * LSUB () "/" Outbox 18:01.81 < * LSUB () "/" "Deleted Items" 18:01.81 < * LSUB () "/" "Sent Items" 18:01.81 < * LSUB () "/" Drafts 18:01.81 < * LSUB () "/" "Junk E-mail" 18:01.81 < IMFE4 OK Lsub completed (0.000 + 0.000 + 0.049 secs). 18:01.81 > IMFE5 SUBSCRIBE subway 18:01.88 < IMFE5 OK Subscribe completed (0.000 + 0.000 + 0.072 secs). 
18:01.88 > IMFE6 LSUB "" * 18:01.88 < * LSUB () "/" INBOX 18:01.88 < * LSUB () "/" Outbox 18:01.88 < * LSUB () "/" "Deleted Items" 18:01.88 < * LSUB () "/" "Sent Items" 18:01.88 < * LSUB () "/" Drafts 18:01.88 < * LSUB () "/" "Junk E-mail" 18:01.88 < IMFE6 OK Lsub completed (0.000 + 0.000 secs). John -----Original Message----- From: dovecot [mailto:dovecot-bounces at dovecot.org] On Behalf Of Timo Sirainen Sent: Thursday, 27 October, 2016 12:07 To: Dovecot Mailing List Cc: John van der Kamp Subject: Re: Subscription not immediately reflected > On 27 Oct 2016, at 12:55, Timo Sirainen wrote: > > I can't reproduce this. Can you send your doveconf -n output and also an example IMAP session showing what goes wrong? I tested with Maildir and mdbox, and with and without mailbox_list_index=yes: > > x lsub "" * > * LSUB () "/" INBOX > x OK Lsub completed (0.000 + 0.000 secs). > x subscribe Trash > x OK Subscribe completed (0.000 + 0.000 secs). > x lsub "" * > * LSUB () "/" INBOX > * LSUB (\Trash) "/" Trash > x OK Lsub completed (0.000 + 0.000 secs). Although you could try if the attached patch happens to help? I think you'd then have to be using NFS or some other remote storage where time is >1 seconds different from Dovecot server's time. > >> On 26 Oct 2016, at 17:52, John van der Kamp wrote: >> >> I was able to find some time to debug this more, and I found the change that breaks it was introduced in 2.2.25: >> If I revert 18856082d632ac60996637547098688148826b5a from release-2.2.25 branch, the test works again. >> >> John >> >> -----Original Message----- >> From: dovecot [mailto:dovecot-bounces at dovecot.org] On Behalf Of John van der Kamp >> Sent: Friday, 7 October, 2016 18:00 >> To: dovecot at dovecot.org >> Subject: Subscription not immediately reflected >> >> Hello, >> >> >> I noticed that somewhere between 2.2.22 and 2.2.25 the workings of subscriptions seem to have changed. 
>> >> In version 2.2.25, when a client subscribes to a folder, and then does an LSUB command, it will not see that subscribed folder. >> >> If you retry the LSUB command, the change is there. >> >> Same with unsubscribes. >> >> In version 2.2.22 I did not see this weird behavior. >> >> >> John >> From tanstaafl at libertytrek.org Thu Oct 27 12:29:09 2016 From: tanstaafl at libertytrek.org (Tanstaafl) Date: Thu, 27 Oct 2016 08:29:09 -0400 Subject: Server migration In-Reply-To: References: <1879ea04-a29c-bf04-197c-4f8ffc0bf9bc@dovecot.fi> Message-ID: <7109c6da-c5be-9a95-736a-2a6c840285ed@libertytrek.org> On 10/26/2016 2:38 AM, Gandalf Corvotempesta wrote: > This is much easier than dovecot replication as i can start immedialy with > no need to upgrade the old server > > my only question is: how to manage the email received on the new server > during the last rsync phase? Use IMAPSync - much better than rsync for this. From tss at iki.fi Thu Oct 27 12:36:03 2016 From: tss at iki.fi (Timo Sirainen) Date: Thu, 27 Oct 2016 15:36:03 +0300 Subject: Server migration In-Reply-To: <7109c6da-c5be-9a95-736a-2a6c840285ed@libertytrek.org> References: <1879ea04-a29c-bf04-197c-4f8ffc0bf9bc@dovecot.fi> <7109c6da-c5be-9a95-736a-2a6c840285ed@libertytrek.org> Message-ID: On 27 Oct 2016, at 15:29, Tanstaafl wrote: > > On 10/26/2016 2:38 AM, Gandalf Corvotempesta > wrote: >> This is much easier than dovecot replication as i can start immedialy with >> no need to upgrade the old server >> >> my only question is: how to manage the email received on the new server >> during the last rsync phase? > > Use IMAPSync - much better than rsync for this. imapsync will change IMAP UIDs and cause clients to redownload all mails. http://wiki2.dovecot.org/Migration/Dsync should work though. 
From gandalf.corvotempesta at gmail.com Thu Oct 27 12:58:16 2016 From: gandalf.corvotempesta at gmail.com (Gandalf Corvotempesta) Date: Thu, 27 Oct 2016 14:58:16 +0200 Subject: Server migration In-Reply-To: References: <1879ea04-a29c-bf04-197c-4f8ffc0bf9bc@dovecot.fi> <7109c6da-c5be-9a95-736a-2a6c840285ed@libertytrek.org> Message-ID: 2016-10-27 14:36 GMT+02:00 Timo Sirainen : > imapsync will change IMAP UIDs and cause clients to redownload all mails. http://wiki2.dovecot.org/Migration/Dsync should work though. Just to be sure: dsync from the *new* node would connect via IMAP to the older node and transfer everything ? By running this: doveadm -o mail_fsync=never sync -1 -R -u user at domain imapc: should be OK if newer mails are arrived on the new server ? From tss at iki.fi Thu Oct 27 13:07:07 2016 From: tss at iki.fi (Timo Sirainen) Date: Thu, 27 Oct 2016 16:07:07 +0300 Subject: v2.2.26 released Message-ID: http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz.sig There were some changes since rc1: https://github.com/dovecot/core/commit/54d654098032d96975b70749b505fae538e97f7a Mainly there are quite a lot of director fixes and improvements. Here's the full list of changes: * master: Removed hardcoded 511 backlog limit for listen(). The kernel should limit this as needed. * doveadm import: Source user is now initialized the same as target user. Added -U parameter to override the source user. * Mailbox names are no longer limited to 16 hierarchy levels. We'll check another way to make sure mailbox names can't grow larger than 4096 bytes. + Added a concept of "alternative usernames" by returning user_* extra field(s) in passdb. doveadm proxy list shows these alt usernames in "doveadm proxy list" output. "doveadm director&proxy kick" adds -f parameter. The alt usernames don't have to be unique, so this allows creation of user groups and kicking them in one command. 
+ auth: passdb/userdb dict allows now %variables in key settings. + auth: If passdb returns noauthenticate=yes extra field, assume that it only set extra fields and authentication wasn't actually performed. + auth: passdb static now supports password={scheme} prefix. + auth, login_log_format_elements: Added %{local_name} variable, which expands to TLS SNI hostname if given. + imapc: Added imapc_max_line_length to limit maximum memory usage. + imap, pop3: Added rawlog_dir setting to store IMAP/POP3 traffic logs. This replaces at least partially the rawlog plugin. + dsync: Added dsync_features=empty-header-workaround setting. This makes incremental dsyncs work better for servers that randomly return empty headers for mails. When an empty header is seen for an existing mail, dsync assumes that it matches the local mail. + doveadm sync/backup: Added -I parameter to skip too large mails. + doveadm sync/backup: Fixed -t parameter and added -e for "end date". + doveadm mailbox metadata: Added -s parameter to allow accessing server metadata by using empty mailbox name. + Added "doveadm service status" and "doveadm process status" commands. + director: Added director_flush_socket. See http://wiki2.dovecot.org/Director#Flush_socket + doveadm director flush: Users are now moved only max 100 at a time to avoid load spikes. --max-parallel parameter overrides this. + Added FILE_LOCK_SLOW_WARNING_MSECS environment, which logs a warning if any lock is waited on or kept for this many milliseconds. - master process's listener socket was leaked to all child processes. This might have allowed untrusted processes to capture and prevent "doveadm service stop" comands from working. - login proxy: Fixed crash when outgoing SSL connections were hanging. - auth: userdb fields weren't passed to auth-workers, so %{userdb:*} from previous userdbs didn't work there. - auth: Each userdb lookup from cache reset its TTL. 
- auth: Fixed auth_bind=yes + sasl_bind=yes to work together - auth: Blocking userdb lookups reset extra fields set by previous userdbs. - auth: Cache keys didn't include %{passdb:*} and %{userdb:*} - auth-policy: Fixed crash due to using already-freed memory if policy lookup takes longer than auth request exists. - lib-auth: Unescape passdb/userdb extra fields. Mainly affected returning extra fields with LFs or TABs. - lmtp_user_concurrency_limit>0 setting was logging unnecessary anvil errors. - lmtp_user_concurrency_limit is now checked before quota check with lmtp_rcpt_check_quota=yes to avoid unnecessary quota work. - lmtp: %{userdb:*} variables didn't work in mail_log_prefix - autoexpunge settings for mailboxes with wildcards didn't work when namespace prefix was non-empty. - Fixed writing >2GB to iostream-temp files (used by fs-compress, fs-metawrap, doveadm-http) - director: Ignore duplicates in director_servers setting. - director: Many fixes related to connection handshaking, user moving and error handling. - director: Don't break with shutdown_clients=no - zlib, IMAP BINARY: Fixed internal caching when accessing multiple newly created mails. They all had UID=0 and the next mail could have wrongly used the previously cached mail. - doveadm stats reset wasn't reseting all the stats. - auth_stats=yes: Don't update num_logins, since it doubles them when using with mail stats. - quota count: Fixed deadlocks when updating vsize header. - dict-quota: Fixed crashes happening due to memory corruption. - dict proxy: Fixed various timeout-related bugs. - doveadm proxying: Fixed -A and -u wildcard handling. - doveadm proxying: Fixed hangs and bugs related to printing. - imap: Fixed wrongly triggering assert-crash in client_check_command_hangs. - imap proxy: Don't send ID command pipelined with nopipelining=yes - imap-hibernate: Don't execute quota_over_script or last_login after un-hibernation. 
- imap-hibernate: Don't un-hibernate if client sends DONE+IDLE in one IP packet. - imap-hibernate: Fixed various failures when un-hibernating. - fts: fts_autoindex=yes was broken in 2.2.25 unless fts_autoindex_exclude settings existed. - fts-solr: Fixed searching multiple mailboxes (patch by x16a0) - doveadm fetch body.snippet wasn't working in 2.2.25. Also fixed a crash with certain emails. - pop3-migration + dbox: Various fixes related to POP3 UIDL optimization in 2.2.25. - pop3-migration: Fixed "truncated email header" workaround. From tss at iki.fi Thu Oct 27 13:14:19 2016 From: tss at iki.fi (Timo Sirainen) Date: Thu, 27 Oct 2016 16:14:19 +0300 Subject: Server migration In-Reply-To: References: <1879ea04-a29c-bf04-197c-4f8ffc0bf9bc@dovecot.fi> <7109c6da-c5be-9a95-736a-2a6c840285ed@libertytrek.org> Message-ID: On 27 Oct 2016, at 15:58, Gandalf Corvotempesta wrote: > > 2016-10-27 14:36 GMT+02:00 Timo Sirainen : >> imapsync will change IMAP UIDs and cause clients to redownload all mails. http://wiki2.dovecot.org/Migration/Dsync should work though. > > Just to be sure: dsync from the *new* node would connect via IMAP to > the older node and transfer everything ? > By running this: > > doveadm -o mail_fsync=never sync -1 -R -u user at domain imapc: > > should be OK if newer mails are arrived on the new server ? Yes. From arekm at maven.pl Thu Oct 27 13:39:49 2016 From: arekm at maven.pl (Arkadiusz =?utf-8?q?Mi=C5=9Bkiewicz?=) Date: Thu, 27 Oct 2016 15:39:49 +0200 Subject: v2.2.26 released In-Reply-To: References: Message-ID: <201610271539.49714.arekm@maven.pl> On Thursday 27 of October 2016, Timo Sirainen wrote: > http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz > http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz.sig Please merge to 2.2 branch this fix. 
I'm hitting that problem on 2.2.25: From 6c969ac21a43cc10ee1f1a91a4f39e4864c886cb Mon Sep 17 00:00:00 2001 From: Aki Tuomi Date: Fri, 15 Jul 2016 11:31:25 +0300 Subject: [PATCH] auth: Remove i_assert for credentials scheme --- src/auth/auth-request.c | 2 -- 1 file changed, 2 deletions(-) -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org ) From aki.tuomi at dovecot.fi Thu Oct 27 15:24:16 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Thu, 27 Oct 2016 18:24:16 +0300 Subject: v2.2.26 released In-Reply-To: <201610271539.49714.arekm@maven.pl> References: <201610271539.49714.arekm@maven.pl> Message-ID: <00422fb7-76f8-0c5a-5edb-d259e1881db8@dovecot.fi> On 27.10.2016 16:39, Arkadiusz Miśkiewicz wrote: > On Thursday 27 of October 2016, Timo Sirainen wrote: >> http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz >> http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz.sig > Please merge to 2.2 branch this fix. I'm hitting that problem on 2.2.25: > > From 6c969ac21a43cc10ee1f1a91a4f39e4864c886cb Mon Sep 17 00:00:00 2001 > From: Aki Tuomi > Date: Fri, 15 Jul 2016 11:31:25 +0300 > Subject: [PATCH] auth: Remove i_assert for credentials scheme > > --- > src/auth/auth-request.c | 2 -- > 1 file changed, 2 deletions(-) > That fix is included in 2.2.26. Aki From larryrtx at gmail.com Thu Oct 27 15:38:22 2016 From: larryrtx at gmail.com (Larry Rosenman) Date: Thu, 27 Oct 2016 10:38:22 -0500 Subject: v2.2.26 released In-Reply-To: <00422fb7-76f8-0c5a-5edb-d259e1881db8@dovecot.fi> References: <201610271539.49714.arekm@maven.pl> <00422fb7-76f8-0c5a-5edb-d259e1881db8@dovecot.fi> Message-ID: was the fix for kqueue() EINVAL's in 2.2.26 as well? On Thu, Oct 27, 2016 at 10:24 AM, Aki Tuomi wrote: > > > On 27.10.2016 16:39, Arkadiusz Miśkiewicz wrote: > >> On Thursday 27 of October 2016, Timo Sirainen wrote: >> >>> http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz >>> http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz.sig >>> >> Please merge to 2.2 branch this fix.
I'm hitting that problem on 2.2.25: >> >> From 6c969ac21a43cc10ee1f1a91a4f39e4864c886cb Mon Sep 17 00:00:00 2001 >> From: Aki Tuomi >> Date: Fri, 15 Jul 2016 11:31:25 +0300 >> Subject: [PATCH] auth: Remove i_assert for credentials scheme >> >> --- >> src/auth/auth-request.c | 2 -- >> 1 file changed, 2 deletions(-) >> >> > That fix is included in 2.2.26. > > Aki > -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 214-642-9640 (c) E-Mail: larryrtx at gmail.com US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 From aki.tuomi at dovecot.fi Thu Oct 27 15:44:15 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Thu, 27 Oct 2016 18:44:15 +0300 Subject: v2.2.26 released In-Reply-To: References: <201610271539.49714.arekm@maven.pl> <00422fb7-76f8-0c5a-5edb-d259e1881db8@dovecot.fi> Message-ID: Yes. Aki On 27.10.2016 18:38, Larry Rosenman wrote: > was the fix for kqueue() EINVAL's in 2.2.26 as well? > > > On Thu, Oct 27, 2016 at 10:24 AM, Aki Tuomi wrote: > >> >> On 27.10.2016 16:39, Arkadiusz Mi?kiewicz wrote: >> >>> On Thursday 27 of October 2016, Timo Sirainen wrote: >>> >>>> http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz >>>> http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz.sig >>>> >>> Please merge to 2.2 branch this fix. I'm hitting that problem on 2.2.25: >>> >>> From 6c969ac21a43cc10ee1f1a91a4f39e4864c886cb Mon Sep 17 00:00:00 2001 >>> From: Aki Tuomi >>> Date: Fri, 15 Jul 2016 11:31:25 +0300 >>> Subject: [PATCH] auth: Remove i_assert for credentials scheme >>> >>> --- >>> src/auth/auth-request.c | 2 -- >>> 1 file changed, 2 deletions(-) >>> >>> >> That fix is included in 2.2.26. 
>> >> Aki >> > > From arekm at maven.pl Thu Oct 27 16:31:25 2016 From: arekm at maven.pl (Arkadiusz =?utf-8?q?Mi=C5=9Bkiewicz?=) Date: Thu, 27 Oct 2016 18:31:25 +0200 Subject: v2.2.26 released In-Reply-To: <00422fb7-76f8-0c5a-5edb-d259e1881db8@dovecot.fi> References: <201610271539.49714.arekm@maven.pl> <00422fb7-76f8-0c5a-5edb-d259e1881db8@dovecot.fi> Message-ID: <201610271831.26038.arekm@maven.pl> On Thursday 27 of October 2016, Aki Tuomi wrote: > On 27.10.2016 16:39, Arkadiusz Miśkiewicz wrote: > > On Thursday 27 of October 2016, Timo Sirainen wrote: > >> http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz > >> http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz.sig > > > > Please merge to 2.2 branch this fix. I'm hitting that problem on 2.2.25: > > From 6c969ac21a43cc10ee1f1a91a4f39e4864c886cb Mon Sep 17 00:00:00 2001 > > > > From: Aki Tuomi > > Date: Fri, 15 Jul 2016 11:31:25 +0300 > > Subject: [PATCH] auth: Remove i_assert for credentials scheme > > > > --- > > > > src/auth/auth-request.c | 2 -- > > 1 file changed, 2 deletions(-) > That fix is included in 2.2.26. Are you sure? I don't see it there. > Aki -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org ) From aki.tuomi at dovecot.fi Thu Oct 27 16:43:46 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Thu, 27 Oct 2016 19:43:46 +0300 Subject: v2.2.26 released In-Reply-To: <201610271831.26038.arekm@maven.pl> References: <201610271539.49714.arekm@maven.pl> <00422fb7-76f8-0c5a-5edb-d259e1881db8@dovecot.fi> <201610271831.26038.arekm@maven.pl> Message-ID: On 27.10.2016 19:31, Arkadiusz Miśkiewicz wrote: > On Thursday 27 of October 2016, Aki Tuomi wrote: >> On 27.10.2016 16:39, Arkadiusz Miśkiewicz wrote: >>> On Thursday 27 of October 2016, Timo Sirainen wrote: >>>> http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz >>>> http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz.sig >>> Please merge to 2.2 branch this fix.
I'm hitting that problem on 2.2.25: >>> From 6c969ac21a43cc10ee1f1a91a4f39e4864c886cb Mon Sep 17 00:00:00 2001 >>> >>> From: Aki Tuomi >>> Date: Fri, 15 Jul 2016 11:31:25 +0300 >>> Subject: [PATCH] auth: Remove i_assert for credentials scheme >>> >>> --- >>> >>> src/auth/auth-request.c | 2 -- >>> 1 file changed, 2 deletions(-) >> That fix is included in 2.2.26. > Are you sure? I don't see it there. > >> Aki You are right, it was supposed to be there. Unfortunately it isn't. We'll see what can be done. Aki From lists-dovecot at m.fago.me Thu Oct 27 19:55:20 2016 From: lists-dovecot at m.fago.me (Moritz Fago) Date: Thu, 27 Oct 2016 21:55:20 +0200 Subject: Bugreport: managesieve-login won't start without a ssl-key Message-ID: <4A87E65B-125B-43F7-A831-E152FB2477BB@m.fago.me> Hello, If you don't have a ssl_key and ssl_cert configured in your dovecot config managesieve-login will fail to start with the following error message: dovecot: managesieve-login: Fatal: Couldn't parse private ssl_key: error:0906D06C:PEM routines:PEM_read_bio:no start line: Expecting: ANY PRIVATE KEY, even if you haven't enabled ssl for managesieve-login.
Infos according to http://www.dovecot.org/bugreport.html: Filesystem: ext4 doveconf -n: # 2.2.13: /etc/dovecot/dovecot.conf # OS: Linux 3.16.0-4-amd64 x86_64 Debian 8.6 auth_default_realm = toppoint.de auth_mechanisms = plain login auth_username_format = %Ln mail_location = maildir:~/Maildir managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave namespace inbox { inbox = yes location = mailbox Drafts { special_use = \Drafts } mailbox Junk { special_use = \Junk } mailbox Sent { special_use = \Sent } mailbox "Sent Messages" { special_use = \Sent } mailbox Trash { special_use = \Trash } prefix = } passdb { args = dovecot driver = pam } plugin { sieve = ~/.sieve/dovecot.sieve sieve_dir = ~/.sieve } protocols = " imap lmtp sieve pop3" service auth { unix_listener /var/spool/postfix/private/auth { group = postfix mode = 0660 user = postfix } } service lmtp { unix_listener /var/spool/postfix/private/dovecot-lmtp { group = postfix mode = 0600 user = postfix } } service managesieve-login { inet_listener sieve { port = 4190 ssl = yes } } ssl = required ssl_cert = References: <4aad0d05-bc43-4fda-c4e7-544fc59557f4@shout.net> <20161013095531.00007012@seibercom.net> <1454388715.707.1476367654083@appsuite-dev.open-xchange.com> <20161013182334.f65847ce815588d05557bd94@domain007.com> <20161013185200.5aa3b7a5d485f24b2a036c84@domain007.com> <1040717331.825.1476374511077@appsuite-dev.open-xchange.com> <918e60ae-be12-6994-e397-eeb0ae11313a@shout.net> Message-ID: <3dd328d4-aac4-3686-d68e-50840b8d291c@shout.net> So after several days of more troubleshooting, I have some things to report to the list. 
First and foremost, I have discovered that the issue has nothing to do with SSL/TLS, which was my earlier suspicion because after doing some PCAPs I discovered that the transactions were negotiating TLS 1.2 on the new server, as opposed to 1.0 on the old. Also thank you for the rawlog suggestion: that helped a lot in determining what was happening on the IMAP level. That all said, this is what I've discovered: There is a very curious and reproducible four-second delay during the negotiation between server and client which is not present in Dovecot 2.1. This is what our customer is complaining about using Outlook 2010. During a plaintext TCP stream, I'm seeing this: 1. Client connects (SYN) to server. 2. Server ACKs and throws back CAPABILITIES. 3. User attempts to auth with DIGEST-MD5. 4. Server says, "no thanks." (Not sure why, but I don't believe this is relevant.) 5. User attempts to auth with plaintext. 6. Server says, "Yup. You are you. You're logged in." 7. Client sends the following: ID ("name" "Microsoft Outlook" "version" "14.0") 8. Server sends an ACK ... and then there's this very curious four-second delay. 9. Server then sends out new CAPABILITIES, and everything proceeds thereafter as normal and zippy and fast. Does this shed any light on the subject? On 10/13/16 11:21 AM, Bryan Holloway wrote: > On 10/13/16 11:01 AM, Aki Tuomi wrote: >> >>> On October 13, 2016 at 6:52 PM Konstantin Khomoutov >>> wrote: >>> >>> >>> On Thu, 13 Oct 2016 10:35:14 -0500 >>> Bryan Holloway wrote: >>> >>>>> [...] >>>>>> Is there a way to see the IMAP commands coming from the client? >>>>>> I've tried looking at PCAPs, but of course they're encrypted so I >>>>>> can't see the actual dialog going on between the server and >>>>>> client. I didn't see an obvious way to do this in the docs. 
>>>>> >>>>> If you have access to the SSL/TLS key (IOW, the private part of the >>>>> cert) the server uses to secure IMAP connections you can dump the >>>>> IMAP traffic using the `ssldump` utility (which builds on >>>>> `tcpdump`). >>>> >>>> I do, but the client is using a DH key exchange so I only have the >>>> server-side private key. >>>> >>>> Tried that using Wireshark's decoder features and ran into this >>>> problem. I'm assuming I'd run into the same using ssldump, but I'll >>>> give it a shot! >>> >>> I think DH is not the culprit: just to be able to actually decode SSL >>> traffic, you must have the server private key when you're decoding the >>> SSL handshake phase -- to be able to recover the session keys, which >>> you then use to decode the actual tunneled data. >> >> You can also enable only non DH algorithms in ssl settings if rawlog >> isn't working for you. >> >> Aki >> > > Ah -- interesting tip. I hadn't thought of that. Thank you! I'll report > my findings to the list. From mpeters at domblogger.net Fri Oct 28 01:49:36 2016 From: mpeters at domblogger.net (Michael A. Peters) Date: Thu, 27 Oct 2016 18:49:36 -0700 Subject: v2.2.26 released In-Reply-To: References: <201610271539.49714.arekm@maven.pl> <00422fb7-76f8-0c5a-5edb-d259e1881db8@dovecot.fi> <201610271831.26038.arekm@maven.pl> Message-ID: On 10/27/2016 09:43 AM, Aki Tuomi wrote: > > *snip* >> >>> Aki > > You are right, it was supposed to be there. Unfortunately it isn't. > > We'll see what can be done. > > Aki I maintain an RPM of the 2.2.x branch. Should I wait with pushing the update? 
From stephan at rename-it.nl Fri Oct 28 07:18:17 2016 From: stephan at rename-it.nl (Stephan Bosch) Date: Fri, 28 Oct 2016 09:18:17 +0200 Subject: Bugreport: managesieve-login won't start without a ssl-key In-Reply-To: <4A87E65B-125B-43F7-A831-E152FB2477BB@m.fago.me> References: <4A87E65B-125B-43F7-A831-E152FB2477BB@m.fago.me> Message-ID: Op 10/27/2016 om 9:55 PM schreef Moritz Fago: > Hello, > > If you don?t have a ssl_key and ssl_cert configured in your dovecot config managesieve-login will fail to start with the following error message: dovecot: managesieve-login: Fatal: Couldn't parse private ssl_key: error:0906D06C:PEM routines:PEM_read_bio:no start line: Expecting: ANY PRIVATE KEY, even if you haven?t enabled ssl for managesieve-login. I must say I don't really know what that error means. I see a few things though: > Infos according to http://www.dovecot.org/bugreport.html: > > Filesystem: ext4 > doveconf -n: > # 2.2.13: /etc/dovecot/dovecot.conf > # OS: Linux 3.16.0-4-amd64 x86_64 Debian 8.6 > auth_default_realm = toppoint.de > auth_mechanisms = plain login > auth_username_format = %Ln > mail_location = maildir:~/Maildir > managesieve_notify_capability = mailto > managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave > namespace inbox { > inbox = yes > location = > mailbox Drafts { > special_use = \Drafts > } > mailbox Junk { > special_use = \Junk > } > mailbox Sent { > special_use = \Sent > } > mailbox "Sent Messages" { > special_use = \Sent > } > mailbox Trash { > special_use = \Trash > } > prefix = > } > passdb { > args = dovecot > driver = pam > } > plugin { > sieve = ~/.sieve/dovecot.sieve > sieve_dir = ~/.sieve > } > protocols = " imap lmtp sieve pop3" > service auth { > unix_listener /var/spool/postfix/private/auth { > group = postfix > mode = 0660 > user = postfix > } > } > service lmtp { > 
unix_listener /var/spool/postfix/private/dovecot-lmtp { > group = postfix > mode = 0600 > user = postfix > } > } > service managesieve-login { > inet_listener sieve { > port = 4190 > ssl = yes > } This means that you're making a 'sieves' protocol, i.e. ManageSieve with TLS from the start. It doesn't exist by the standard. ManageSieve only uses the STARTTLS command. Leave out the ssl=yes here. > } > ssl = required > ssl_cert = ssl_cipher_list = HIGH::!aNULL:!eNULL:!kRSA:!kPSK:!kSRP:!aDSS:!kECDH:!kDH:!MD5:!SHA1:!RC2:!RC4:!SEED:!IDEA:!DES:!3DES > ssl_dh_parameters_length = 2048 > ssl_key = ssl_prefer_server_ciphers = yes > ssl_protocols = !SSLv3 !SSLv2 > userdb { > driver = passwd > } > protocol lmtp { > mail_plugins = sieve > } > protocol imap { > ssl_cert = ssl_key = } > protocol pop3 { > ssl_cert = ssl_key = } I see you have these set for imap and pop3, but not for "protocol sieve". Is that intentional? Regards, Stephan. From aki.tuomi at dovecot.fi Fri Oct 28 07:28:11 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Fri, 28 Oct 2016 10:28:11 +0300 Subject: Bugreport: managesieve-login won't start without a ssl-key In-Reply-To: References: <4A87E65B-125B-43F7-A831-E152FB2477BB@m.fago.me> Message-ID: <3daa6aa7-6a1f-7af0-66a0-1dc7847171a8@dovecot.fi> On 28.10.2016 10:18, Stephan Bosch wrote: > Op 10/27/2016 om 9:55 PM schreef Moritz Fago: >> Hello, >> >> If you don?t have a ssl_key and ssl_cert configured in your dovecot config managesieve-login will fail to start with the following error message: dovecot: managesieve-login: Fatal: Couldn't parse private ssl_key: error:0906D06C:PEM routines:PEM_read_bio:no start line: Expecting: ANY PRIVATE KEY, even if you haven?t enabled ssl for managesieve-login. > I must say I don't really know what that error means. 
I see a few things > though: > >> Infos according to http://www.dovecot.org/bugreport.html: >> >> Filesystem: ext4 >> doveconf -n: >> # 2.2.13: /etc/dovecot/dovecot.conf >> # OS: Linux 3.16.0-4-amd64 x86_64 Debian 8.6 >> auth_default_realm = toppoint.de >> auth_mechanisms = plain login >> auth_username_format = %Ln >> mail_location = maildir:~/Maildir >> managesieve_notify_capability = mailto >> managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave >> namespace inbox { >> inbox = yes >> location = >> mailbox Drafts { >> special_use = \Drafts >> } >> mailbox Junk { >> special_use = \Junk >> } >> mailbox Sent { >> special_use = \Sent >> } >> mailbox "Sent Messages" { >> special_use = \Sent >> } >> mailbox Trash { >> special_use = \Trash >> } >> prefix = >> } >> passdb { >> args = dovecot >> driver = pam >> } >> plugin { >> sieve = ~/.sieve/dovecot.sieve >> sieve_dir = ~/.sieve >> } >> protocols = " imap lmtp sieve pop3" >> service auth { >> unix_listener /var/spool/postfix/private/auth { >> group = postfix >> mode = 0660 >> user = postfix >> } >> } >> service lmtp { >> unix_listener /var/spool/postfix/private/dovecot-lmtp { >> group = postfix >> mode = 0600 >> user = postfix >> } >> } >> service managesieve-login { >> inet_listener sieve { >> port = 4190 >> ssl = yes >> } > This means that you're making a 'sieves' protocol, i.e. ManageSieve with > TLS from the start. It doesn't exist by the standard. ManageSieve only > uses the STARTTLS command. Leave out the ssl=yes here. 
> >> } >> ssl = required >> ssl_cert = > ssl_cipher_list = HIGH::!aNULL:!eNULL:!kRSA:!kPSK:!kSRP:!aDSS:!kECDH:!kDH:!MD5:!SHA1:!RC2:!RC4:!SEED:!IDEA:!DES:!3DES >> ssl_dh_parameters_length = 2048 >> ssl_key = > ssl_prefer_server_ciphers = yes >> ssl_protocols = !SSLv3 !SSLv2 >> userdb { >> driver = passwd >> } >> protocol lmtp { >> mail_plugins = sieve >> } >> protocol imap { >> ssl_cert = > ssl_key = > } >> protocol pop3 { >> ssl_cert = > ssl_key = > } > I see you have these set for imap and pop3, but not for "protocol > sieve". Is that intentional? > > Regards, > > Stephan. I can also see that imap.toppoint.de.crt is specified in main config body and inside imap protocol as well. Aki From aki.tuomi at dovecot.fi Fri Oct 28 08:47:37 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Fri, 28 Oct 2016 11:47:37 +0300 Subject: v2.2.26 released In-Reply-To: <201610271539.49714.arekm@maven.pl> References: <201610271539.49714.arekm@maven.pl> Message-ID: <5efe029a-916e-c068-4981-7ef88c0d5206@dovecot.fi> 27.10.2016 16:39, Arkadiusz Mi?kiewicz wrote: > On Thursday 27 of October 2016, Timo Sirainen wrote: >> http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz >> http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz.sig > Please merge to 2.2 branch this fix. I'm hitting that problem on 2.2.25: > > From 6c969ac21a43cc10ee1f1a91a4f39e4864c886cb Mon Sep 17 00:00:00 2001 > From: Aki Tuomi > Date: Fri, 15 Jul 2016 11:31:25 +0300 > Subject: [PATCH] auth: Remove i_assert for credentials scheme > > --- > src/auth/auth-request.c | 2 -- > 1 file changed, 2 deletions(-) > Hi! Do you have some details how to reproduce this issue on your end? 
Aki From arekm at maven.pl Fri Oct 28 09:00:36 2016 From: arekm at maven.pl (Arkadiusz =?utf-8?q?Mi=C5=9Bkiewicz?=) Date: Fri, 28 Oct 2016 11:00:36 +0200 Subject: v2.2.26 released In-Reply-To: <5efe029a-916e-c068-4981-7ef88c0d5206@dovecot.fi> References: <201610271539.49714.arekm@maven.pl> <5efe029a-916e-c068-4981-7ef88c0d5206@dovecot.fi> Message-ID: <201610281100.36705.arekm@maven.pl> On Friday 28 of October 2016, Aki Tuomi wrote: > 27.10.2016 16:39, Arkadiusz Miśkiewicz wrote: > > On Thursday 27 of October 2016, Timo Sirainen wrote: > >> http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz > >> http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz.sig > > > > Please merge to 2.2 branch this fix. I'm hitting that problem on 2.2.25: > > > > From 6c969ac21a43cc10ee1f1a91a4f39e4864c886cb Mon Sep 17 00:00:00 2001 > > From: Aki Tuomi > > Date: Fri, 15 Jul 2016 11:31:25 +0300 > > Subject: [PATCH] auth: Remove i_assert for credentials scheme > > > > --- > > > > src/auth/auth-request.c | 2 -- > > 1 file changed, 2 deletions(-) > > Hi! > > Do you have some details how to reproduce this issue on your end? It seems to be related to "unknown user" always.
Oct 21 08:43:51 mbox dovecot: auth-worker(31838): sql(abc, 1.1.1.1,): unknown user Oct 21 08:43:51 mbox dovecot: auth: Panic: file auth-request.c: line 1053 (auth_request_lookup_credentials): assertion failed: (request->credentials_scheme == scheme) Oct 21 08:43:51 mbox dovecot: auth: Error: Raw backtrace: /usr/lib64/dovecot/libdovecot.so.0(+0x8d822) [0x7fb7d85f5822] -> /usr/lib64/dovecot/libdovecot.so.0(+0x8d90d) [0x7fb7d85f590d] -> /usr/lib64/dovecot/libdovecot.so.0(i_fatal+0) [0x7fb7d8593e51] -> dovecot/auth [4 wait, 0 passdb, 0 userdb](auth_request_lookup_credentials+0xd8) [0x4176b8] -> dovecot/auth [4 wait, 0 passdb, 0 userdb]() [0x4245b2] -> dovecot/auth [4 wait, 0 passdb, 0 userdb]() [0x4172cb] -> dovecot/auth [4 wait, 0 passdb, 0 userdb](auth_request_lookup_credentials_callback+0x68) [0x417388] -> dovecot/auth [4 wait, 0 passdb, 0 userdb](passdb_handle_credentials+0x92) [0x427ea2] -> dovecot/auth [4 wait, 0 passdb, 0 userdb]() [0x428686] -> dovecot/auth [4 wait, 0 passdb, 0 userdb]() [0x41de8a] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x4c) [0x7fb7d86096cc] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x101) [0x7fb7d860ab51] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x25) [0x7fb7d8609755] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7fb7d86098f8] -> /usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7fb7d859a263] -> dovecot/auth [4 wait, 0 passdb, 0 userdb](main+0x3af) [0x40d41f] -> /lib64/libc.so.6(__libc_start_main+0xf0) [0x7fb7d7728800] -> dovecot/auth [4 wait, 0 passdb, 0 userdb](_start+0x2a) [0x40d60a] Oct 21 08:43:51 mbox dovecot: pop3-login: Warning: Auth connection closed with 2 pending requests (max 1 secs, pid=31822, EOF) Oct 21 08:43:51 mbox dovecot: auth: Fatal: master: service(auth): child 31833 killed with signal 6 (core dumps disabled) But didn't try to reproduce it as "[PATCH] auth: Remove i_assert for credentials scheme" fixes it. 
> > Aki -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org ) From tss at iki.fi Fri Oct 28 09:30:13 2016 From: tss at iki.fi (Timo Sirainen) Date: Fri, 28 Oct 2016 12:30:13 +0300 Subject: v2.2.26 released In-Reply-To: <201610281100.36705.arekm@maven.pl> References: <201610271539.49714.arekm@maven.pl> <5efe029a-916e-c068-4981-7ef88c0d5206@dovecot.fi> <201610281100.36705.arekm@maven.pl> Message-ID: <46896E0B-8F11-4199-AA26-D89EF1BA3E91@iki.fi> On 28 Oct 2016, at 12:00, Arkadiusz Miśkiewicz wrote: > > On Friday 28 of October 2016, Aki Tuomi wrote: >> 27.10.2016 16:39, Arkadiusz Miśkiewicz wrote: >>> On Thursday 27 of October 2016, Timo Sirainen wrote: >>>> http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz >>>> http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz.sig >>> >>> Please merge to 2.2 branch this fix. I'm hitting that problem on 2.2.25: >>> >>> From 6c969ac21a43cc10ee1f1a91a4f39e4864c886cb Mon Sep 17 00:00:00 2001 >>> From: Aki Tuomi >>> Date: Fri, 15 Jul 2016 11:31:25 +0300 >>> Subject: [PATCH] auth: Remove i_assert for credentials scheme >>> >>> --- >>> >>> src/auth/auth-request.c | 2 -- >>> 1 file changed, 2 deletions(-) >> >> Hi! >> >> Do you have some details how to reproduce this issue on your end? > > It seems to be related to "unknown user" always. > > Oct 21 08:43:51 mbox dovecot: auth-worker(31838): sql(abc, 1.1.1.1,): unknown user > Oct 21 08:43:51 mbox dovecot: auth: Panic: file auth-request.c: line 1053 (auth_request_lookup_credentials): assertion failed: (request->credentials_scheme == scheme) Just for completeness: What auth mechanisms are you using? It looks to me like this assert would happen only with NTLM or SKEY mechanisms.
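For reference, a minimal multi-passdb configuration of the kind this assert involved might look like the following. This is purely illustrative (the file paths and drivers are invented, not the reporter's actual setup): the point is that a mechanism such as NTLM requests a specific credentials scheme, and the lookup falls through from one passdb to the next.

```conf
auth_mechanisms = plain ntlm

passdb {
  driver = passwd-file
  args = /etc/dovecot/ntlm.passwd        # hypothetical file
  result_failure = continue              # fall through to the next passdb
}

passdb {
  driver = sql
  args = /etc/dovecot/dovecot-sql.conf.ext   # hypothetical path
}
```

With a single passdb the credentials scheme is only requested once, which is presumably why the crash surfaced only with NTLM or SKEY combined with multiple passdbs.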
From gilles.chauvin at univ-rouen.fr Fri Oct 28 12:36:02 2016 From: gilles.chauvin at univ-rouen.fr (Gilles Chauvin) Date: Fri, 28 Oct 2016 14:36:02 +0200 Subject: Panic: file dsync-brain-mailbox.c: line 358 (dsync_brain_sync_mailbox_deinit): assertion failed: (brain->failed || brain->sync_type == DSYNC_BRAIN_SYNC_TYPE_CHANGED) Message-ID: <54a78ad2-5058-f71d-a7b8-033bd9d214f1@univ-rouen.fr> Hello, Here is a Panic that happened while doing some testing with two servers both running Dovecot v2.2.26 on CentOS 7. These are test servers owning 32 accounts whose data were copied from our production server. What I've done is: server01# doveadm force-resync -A '*' server01# doveadm replicator replicate -f '*' For 5 accounts I obtained the following crash: 2016-10-28T14:09:43.236946+02:00 server01 dovecot: dsync-server(someuser): Panic: file dsync-brain-mailbox.c: line 358 (dsync_brain_sync_mailbox_deinit): assertion failed: (brain->failed || brain->sync_type == DSYNC_BRAIN_SYNC_TYPE_CHANGED) 2016-10-28T14:09:43.237441+02:00 server01 dovecot: dsync-server(someuser): Error: Raw backtrace: /usr/local/lib/dovecot/libdovecot.so.0(+0x8f7e0) [0x7f3d9318d7e0] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x8f8be) [0x7f3d9318d8be] -> /usr/local/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7f3d9312b9be] -> dovecot/doveadm-server [10.0.0.2 someuser slave_recv_mailbox](dsync_brain_sync_mailbox_deinit+0x163) [0x438243] -> dovecot/doveadm-server [10.0.0.2 someuser slave_recv_mailbox](dsync_brain_slave_recv_mailbox+0x277) [0x438da7] -> dovecot/doveadm-server [10.0.0.2 someuser slave_recv_mailbox](dsync_brain_run+0x5fe) [0x4368be] -> dovecot/doveadm-server [10.0.0.2 someuser slave_recv_mailbox]() [0x436c71] -> dovecot/doveadm-server [10.0.0.2 someuser slave_recv_mailbox]() [0x44becf] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x4c) [0x7f3d931a0c3c] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0xe7) [0x7f3d931a1fd7] -> 
/usr/local/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x25) [0x7f3d931a0cc5] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7f3d931a0e78] -> dovecot/doveadm-server [10.0.0.2 someuser slave_recv_mailbox]() [0x41fc7e] -> dovecot/doveadm-server [10.0.0.2 someuser slave_recv_mailbox]() [0x421256] -> dovecot/doveadm-server [10.0.0.2 someuser slave_recv_mailbox]() [0x433654] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x4c) [0x7f3d931a0c3c] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0xe7) [0x7f3d931a1fd7] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x25) [0x7f3d931a0cc5] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7f3d931a0e78] -> /usr/local/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f3d93131a23] -> dovecot/doveadm-server [10.0.0.2 someuser slave_recv_mailbox](main+0x197) [0x413c87] -> /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f3d92d5db15] -> dovecot/doveadm-server [10.0.0.2 someuser slave_recv_mailbox]() [0x413d25] 2016-10-28T14:09:43.238013+02:00 server01 dovecot: dsync-server(someuser): Fatal: master: service(doveadm): child 96390 killed with signal 6 (core dumps disabled) 2016-10-28T14:09:43.505098+02:00 server01 dovecot: dsync-server(someuser): Error: read(server02.localdomain) failed: read(size=5807) failed: Connection reset by peer (last sent=mailbox_state, last recv=mailbox_state) Regards, Gilles. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 3086 bytes Desc: S/MIME Cryptographic Signature URL: From ricardomachini at gmail.com Fri Oct 28 13:39:12 2016 From: ricardomachini at gmail.com (Ricardo Machini Barbosa) Date: Fri, 28 Oct 2016 11:39:12 -0200 Subject: Dovecot POP3 - Enable POP for mail that arrives from now on Message-ID: <047101d23120$ada32370$08e96a50$@gmail.com> Hello, Someone knows some method for implement feature like Gmail and Zimbra, about download POP3 messages only that "arrives from now" ? Regards, Ricardo Machini From leo at strike.wu.ac.at Fri Oct 28 13:41:52 2016 From: leo at strike.wu.ac.at (Alexander 'Leo' Bergolth) Date: Fri, 28 Oct 2016 15:41:52 +0200 Subject: use a second userdb that only returns extra fields Message-ID: <581355A0.5050902@strike.wu.ac.at> Hi! Is it possible to get all basic userdb information from the passwd userdb and add a second userdb of type checkpassword that only sets some additional extra fields like namespaces? I tried the following setup: -------------------- 8< -------------------- userdb { driver = passwd result_success = continue-ok } userdb { driver = checkpassword args = /usr/local/sbin/dovecot-userdb.py skip = never } -------------------- 8< -------------------- ... but it seems that as soon as the second userdb is active, dovecot doesn't take settings like uid and gid from the first userdb anymore. (Even if I don't set userdb_uid and userdb_gid in checkpassword.) On the other hand there are no environment variables that pass the settings from the previous lookup to the checkpassword script. 
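A sketch of what such a fields-only checkpassword script might look like, assuming the commonly documented checkpassword conventions (the script is invoked with the reply binary as its first argument, a userdb lookup is signalled by AUTHORIZED=1 in the environment, success is reported back by setting AUTHORIZED=2, and extra fields are exported as environment variables named in EXTRA). The field name and path below are invented for illustration, and, as described above, this does not by itself make the first userdb's uid/gid survive:

```python
#!/usr/bin/env python
# Sketch of a checkpassword userdb script that returns only extra fields.
# Protocol details (AUTHORIZED, EXTRA) follow the commonly documented
# checkpassword conventions; verify them against your Dovecot version.
import os
import sys

def build_reply_env(base_env, extra_fields):
    """Build the environment for the checkpassword reply binary:
    mark the userdb lookup as successful and attach extra fields."""
    env = dict(base_env)
    env["AUTHORIZED"] = "2"              # 1 = lookup requested, 2 = success
    env["EXTRA"] = " ".join(sorted(extra_fields))
    env.update(extra_fields)
    return env

if __name__ == "__main__" and len(sys.argv) > 1:
    if os.environ.get("AUTHORIZED") != "1":
        sys.exit(1)                      # not invoked as a userdb lookup
    user = os.environ.get("USER", "")
    # Deliberately no userdb_uid/userdb_gid here -- only extra fields
    # (the field name and location are hypothetical).
    extras = {"userdb_mail": "maildir:/srv/extra/%s/Maildir" % user}
    os.execve(sys.argv[1], [sys.argv[1]], build_reply_env(os.environ, extras))
```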
Cheers, --leo -- e-mail ::: Leo.Bergolth (at) wu.ac.at fax ::: +43-1-31336-906050 location ::: IT-Services | Vienna University of Economics | Austria From tanstaafl at libertytrek.org Fri Oct 28 13:54:38 2016 From: tanstaafl at libertytrek.org (Tanstaafl) Date: Fri, 28 Oct 2016 09:54:38 -0400 Subject: Server migration In-Reply-To: References: <1879ea04-a29c-bf04-197c-4f8ffc0bf9bc@dovecot.fi> <7109c6da-c5be-9a95-736a-2a6c840285ed@libertytrek.org> Message-ID: <32af9379-7a4f-b61e-ec41-5c63e795a6dc@libertytrek.org> On 10/27/2016 8:36 AM, Timo Sirainen wrote: > On 27 Oct 2016, at 15:29, Tanstaafl wrote: >> On 10/26/2016 2:38 AM, Gandalf Corvotempesta >>> my only question is: how to manage the email received on the new server >>> during the last rsync phase? >> >> Use IMAPSync - much better than rsync for this. > imapsync will change IMAP UIDs and cause clients to redownload all > mails. http://wiki2.dovecot.org/Migration/Dsync should work though. Oh... I thought the --useuid option eliminated this problem? https://imapsync.lamiral.info/FAQ.d/FAQ.Duplicates.txt From tss at iki.fi Fri Oct 28 14:28:58 2016 From: tss at iki.fi (Timo Sirainen) Date: Fri, 28 Oct 2016 17:28:58 +0300 Subject: Panic: file dsync-brain-mailbox.c: line 358 (dsync_brain_sync_mailbox_deinit): assertion failed: (brain->failed || brain->sync_type == DSYNC_BRAIN_SYNC_TYPE_CHANGED) In-Reply-To: <54a78ad2-5058-f71d-a7b8-033bd9d214f1@univ-rouen.fr> References: <54a78ad2-5058-f71d-a7b8-033bd9d214f1@univ-rouen.fr> Message-ID: <70C508BB-1CC2-4D6A-B103-483ECF677139@iki.fi> On 28 Oct 2016, at 15:36, Gilles Chauvin wrote: > > Hello, > > Here is a Panic that happened while doing some testing with two servers both running Dovecot v2.2.26 on CentOS 7. > > These are test servers owning 32 accounts whose data were copied from our production server. 
> > > What I've done is: > > server01# doveadm force-resync -A '*' > server01# doveadm replicator replicate -f '*' > > > For 5 accounts I obtained the following crash: > > 2016-10-28T14:09:43.236946+02:00 server01 dovecot: dsync-server(someuser): Panic: file dsync-brain-mailbox.c: line 358 (dsync_brain_sync_mailbox_deinit): assertion failed: (brain->failed || brain->sync_type == DSYNC_BRAIN_SYNC_TYPE_CHANGED) This code hasn't changed for quite a long time. So I don't think this is a new bug in 2.2.26. Can you try reproduce it easily? If yes, could you try if the attached patch fixes it? -------------- next part -------------- A non-text attachment was scrubbed... Name: diff Type: application/octet-stream Size: 594 bytes Desc: not available URL: -------------- next part -------------- From gilles.chauvin at univ-rouen.fr Fri Oct 28 15:28:09 2016 From: gilles.chauvin at univ-rouen.fr (Gilles Chauvin) Date: Fri, 28 Oct 2016 17:28:09 +0200 Subject: Panic: file dsync-brain-mailbox.c: line 358 (dsync_brain_sync_mailbox_deinit): assertion failed: (brain->failed || brain->sync_type == DSYNC_BRAIN_SYNC_TYPE_CHANGED) In-Reply-To: <70C508BB-1CC2-4D6A-B103-483ECF677139@iki.fi> References: <54a78ad2-5058-f71d-a7b8-033bd9d214f1@univ-rouen.fr> <70C508BB-1CC2-4D6A-B103-483ECF677139@iki.fi> Message-ID: <0bbdce92-ebc0-798a-49fc-ff973ca0df8e@univ-rouen.fr> Hi Timo, On 28/10/2016 16:28, Timo Sirainen wrote: > This code hasn't changed for quite a long time. So I don't think this is a new bug in 2.2.26. Can you try reproduce it easily? If yes, could you try if the attached patch fixes it? > The last time we played with Dovecot's replication was during the v2.1 era and we ended avoiding its use due to numerous bugs and serious issues. Now, we are planning on migrating our Dovecot 2.2.18 VM to two physical servers running the latest release and we thought it would be a good idea to run some new tests, 4 years later, to see how it goes now! 
We started our new tests some days ago with v2.2.25, which explains why, if this problem isn't new, I wasn't able to report it sooner.

Back on topic: after running the same commands as before, the problem no longer seems to reproduce with your patch applied. I'll let you know if it shows up again.

Thanks, Regards, Gilles.

-------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3086 bytes Desc: S/MIME Cryptographic Signature URL:

From tss at iki.fi Fri Oct 28 16:51:43 2016 From: tss at iki.fi (Timo Sirainen) Date: Fri, 28 Oct 2016 19:51:43 +0300 Subject: v2.2.26.0 released Message-ID: <6CB829CF-3BC6-4089-BB35-01BAA0F99EF7@iki.fi>

http://dovecot.org/releases/2.2/dovecot-2.2.26.0.tar.gz
http://dovecot.org/releases/2.2/dovecot-2.2.26.0.tar.gz.sig

v2.2.26 had a couple of nasty bugs left in it, so here's a fixup release. The version number is also a little bit weird, but it had to be done this way (although 2.2.26.0.1 could have been another possibility).

- Fixed some compiling issues.
- auth: Fixed assert-crash when using NTLM or SKEY mechanisms and multiple passdbs.
- auth: Fixed crash when exporting to auth-worker passdb extra fields that had empty values.
- dsync: Fixed assert-crash in dsync_brain_sync_mailbox_deinit

From jtam.home at gmail.com Fri Oct 28 20:49:54 2016 From: jtam.home at gmail.com (Joseph Tam) Date: Fri, 28 Oct 2016 13:49:54 -0700 (PDT) Subject: iPhone/iPad IMAP connection bursts causes user+IP exceeded In-Reply-To: References: Message-ID:

I frequently see this from my iPhone/iPad IMAP users:

Oct 24 21:30:55 server dovecot: imap-login: Login: user=, ... [... repeated 10 times ...]
Oct 24 21:32:54 server dovecot: imap-login: Maximum number of connections from user+IP exceeded (mail_max_userip_connections=12): user=
Oct 24 21:32:54 server dovecot: imap(user): Logged out ... [... repeated 11 times ...]

These bursts of logins/max/logouts would cycle on for a few minutes.
Googling this problem turns up lots of similar complaints about iOS mail clients, e.g. https://discussions.apple.com/thread/2547839?tstart=0 iOS mail readers do not limit their connections the way other mail readers can. I could increase mail_max_userip_connections, but that just moves the goal posts.

Using the new rawlog feature in 2.2.26 (thanks Dovecot team!), I was able to see that these connection bursts are caused by clients doing global searches. The rawlogs show each mailbox being SELECT'd and searched (e.g. From header string):

1477369968.730450 2 ID ("name" "iPad Mail" "version" "13G36" "os" "iOS" "os-version" "9.3.5 (13G36)")
1477369968.781932 3 SELECT {mailbox}
1477369968.961636 4 UID SEARCH RETURN (COUNT) 1:* NOT DELETED
1477369969.006087 5 UID SEARCH RETURN (ALL) 1:* NOT DELETED
1477369969.052701 6 UID SEARCH RETURN (ALL) {search-term} NOT DELETED
1477369974.624153 7 LOGOUT

Questions:

1) How does this affect the user? I heard from one user that it makes global searches unusable because his reader just spins its wheel. I'm not sure whether this is impatience or whether it results in failed searches.
2) Is there a client-side fix (e.g. connection limiting)? Apple appears to be intransigent on addressing this.
3) Will maintaining search indices (e.g. solr) help with this? Maybe the searches are taking too long and the connections pile up waiting for previous searches to finish.

Thanks, Joseph Tam

From gandalf.corvotempesta at gmail.com Sat Oct 29 14:17:46 2016 From: gandalf.corvotempesta at gmail.com (Gandalf Corvotempesta) Date: Sat, 29 Oct 2016 16:17:46 +0200 Subject: Dovecot Proxy and Director Message-ID:

Hi, just a simple question: by using a director and a proxy, would I be able to completely hide the pop3/imap server IP addresses from the outside? I'm asking this because I would like to hide the real server IPs for security reasons (DDoS and so on).
The proxies would be placed on servers with high bandwidth, while the pop3/imap Dovecot servers are placed in a small datacenter that would go down easily in case of an attack.

From aki.tuomi at dovecot.fi Sat Oct 29 15:02:20 2016 From: aki.tuomi at dovecot.fi (Aki Tuomi) Date: Sat, 29 Oct 2016 18:02:20 +0300 (EEST) Subject: Dovecot Proxy and Director In-Reply-To: References: Message-ID: <645308118.281.1477753341534@appsuite-dev.open-xchange.com>

> On October 29, 2016 at 5:17 PM Gandalf Corvotempesta wrote: > > > Hi, > just a simple question: by using a director and a proxy, would I be > able to completely hide the pop3/imap server IP addresses from the outside? > I'm asking this because I would like to hide the real server IPs for > security reasons (DDoS and so on). > > The proxies would be placed on servers with high bandwidth while the > pop3/imap dovecot servers are placed in a small datacenter that would > go down easily in case of an attack

You could use private IP addresses for the backends, so you don't even need to expose them to the internet at all.

Aki

From gandalf.corvotempesta at gmail.com Sat Oct 29 15:08:22 2016 From: gandalf.corvotempesta at gmail.com (Gandalf Corvotempesta) Date: Sat, 29 Oct 2016 17:08:22 +0200 Subject: Dovecot Proxy and Director In-Reply-To: <645308118.281.1477753341534@appsuite-dev.open-xchange.com> References: <645308118.281.1477753341534@appsuite-dev.open-xchange.com> Message-ID:

2016-10-29 17:02 GMT+02:00 Aki Tuomi : > You could use private IP addresses for the backends, so you don't even need to expose them to the internet at all.

This means creating a VPN between my local DC with the Dovecot servers and the cloud service provider with the proxies.
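Aki's suggestion can be sketched in configuration terms. This is only an illustration: the director ring, the backend addresses, and the static passdb below are placeholders, not taken from this thread.

```
# Public-facing proxy/director front-end (sketch). Clients connect
# here; the backends listen only on a private network and are never
# exposed to the internet.
service director {
  inet_listener {
    port = 9090
  }
}
director_servers = 192.0.2.10 192.0.2.11      # the public proxies
director_mail_servers = 10.0.0.10 10.0.0.12   # private backend IPs

# Simplest proxying passdb, assuming authentication happens on the
# backends themselves:
passdb {
  driver = static
  args = proxy=y nopassword=y
}
```

With this layout only the proxies need public addresses, so attack traffic never reaches the small datacenter directly.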
From sami.ketola at dovecot.fi Sun Oct 30 09:32:23 2016 From: sami.ketola at dovecot.fi (Sami Ketola) Date: Sun, 30 Oct 2016 11:32:23 +0200 Subject: Server migration In-Reply-To: <32af9379-7a4f-b61e-ec41-5c63e795a6dc@libertytrek.org> References: <1879ea04-a29c-bf04-197c-4f8ffc0bf9bc@dovecot.fi> <7109c6da-c5be-9a95-736a-2a6c840285ed@libertytrek.org> <32af9379-7a4f-b61e-ec41-5c63e795a6dc@libertytrek.org> Message-ID:

> On 28 Oct 2016, at 16.54, Tanstaafl wrote: > > On 10/27/2016 8:36 AM, Timo Sirainen wrote: >> On 27 Oct 2016, at 15:29, Tanstaafl wrote: >>> On 10/26/2016 2:38 AM, Gandalf Corvotempesta >>>> my only question is: how to manage the email received on the new server >>>> during the last rsync phase? >>> >>> Use IMAPSync - much better than rsync for this. > >> imapsync will change IMAP UIDs and cause clients to redownload all >> mails. http://wiki2.dovecot.org/Migration/Dsync should work though. > > Oh... I thought the --useuid option eliminated this problem? > > https://imapsync.lamiral.info/FAQ.d/FAQ.Duplicates.txt

It does not. There is no option at the IMAP level to set the UID.

In this case --useuid seems to keep track of source:uid -> dest:uid pairs across multiple syncs, and uses UID numbers instead of headers to avoid syncing mails as duplicates.

Sami

From jules at ispire.me Sun Oct 30 10:04:22 2016 From: jules at ispire.me (Julian Sternberg) Date: Sun, 30 Oct 2016 11:04:22 +0100 Subject: Defining INDEX target to other location than maildir seems to have no effect. Message-ID:

Dovecot Version: 2.2.13
Linux Distribution: Debian Jessie
CPU Architecture: x64
Filesystem: GlusterFS/NFS; XFS for base system/index files.
Two Dovecot/Postfix nodes accessing the same GlusterFS/NFS Maildir.
Regardless of what I choose in mail_location (:INDEX=MEMORY or :INDEX=/var/indexes/%d/%n), all mailbox index files still get created within the user's mail_location maildir: ~/Maildir. My mail_location Maildir directory is shared on a GlusterFS mount, so I need to get the index files away from this share because of its locking mechanism and for faster caching.

The weird thing is: if I set INDEX to /var/indexes, the index files sometimes get created there, but they are not updated frequently and exist in parallel in the Maildir, where they are mostly newer than the ones in /var/indexes. If you delete the dovecot.index* files from the Maildir, they get recreated immediately on IMAP access, but in the Maildir again, not in the alternative INDEX location.

Here is the doveconf -n output:

# 2.2.13: /etc/dovecot/dovecot.conf
# OS: Linux 3.16.0-4-amd64 x86_64 Debian 8.6
auth_mechanisms = plain login cram-md5
disable_plaintext_auth = no
first_valid_uid = 2000
hostname = censored.hostname.com
last_valid_uid = 2000
lda_mailbox_autocreate = yes
lda_mailbox_autosubscribe = yes
listen = *
lock_method = dotlock
mail_fsync = always
mail_gid = 2000
mail_home = /storage/vmail/%d/%n
mail_location = maildir:~/Maildir:LAYOUT=fs:INDEX=MEMORY
mail_nfs_storage = yes
mail_privileged_group = vmail
mail_temp_dir = /var/tmp
mail_uid = 2000
maildir_very_dirty_syncs = yes
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave
mmap_disable = yes
namespace inbox {
  inbox = yes
  location =
  mailbox Archive { auto = no special_use = \Archive }
  mailbox Archives { auto = no special_use = \Archive }
  mailbox "Deleted Items" { auto = no special_use = \Trash }
  mailbox "Deleted Messages" { auto = no special_use = \Trash }
  mailbox Drafts { auto = no special_use = \Drafts }
  mailbox Sent { auto = subscribe special_use = \Sent }
  mailbox "Sent Items" { auto = no special_use = \Sent }
  mailbox "Sent Messages" { auto = no special_use = \Sent }
  mailbox Spam { auto = create special_use = \Junk }
  mailbox Trash { auto = subscribe special_use = \Trash }
  mailbox virtual/All { auto = no special_use = \All }
  prefix =
  separator = /
  type = private
}
passdb { args = /etc/dovecot/dovecot-sql.conf.ext driver = sql }
plugin {
  quota = maildir:User quota
  quota_rule = *:storage=1G
  quota_rule2 = Trash:storage=+100M
  quota_rule3 = Sent:storage=+100M
  quota_warning = storage=95%% quota-warning 95 %u
  quota_warning2 = storage=80%% quota-warning 80 %u
  sieve = /storage/vmail/%d/%n/sieve/dovecot.sieve
  sieve_before = /storage/vmail/sieve/dovecot.sieve
  sieve_dir = /storage/vmail/%d/%n/sieve
  sieve_global = /storage/vmail/sieve
}
postmaster_address = postmaster at censored.hostname.com
protocols = " imap lmtp sieve pop3"
quota_full_tempfail = yes
service auth {
  unix_listener /var/spool/postfix/private/auth { group = postfix mode = 0660 user = postfix }
  unix_listener auth-userdb { group = vmail mode = 0666 user = vmail }
}
service imap-login { inet_listener imaps { port = 993 ssl = yes } service_count = 0 }
service lmtp { unix_listener /var/spool/postfix/private/dovecot-lmtp { group = postfix mode = 0666 user = postfix } }
service managesieve-login { inet_listener sieve { port = 4190 } service_count = 1 }
service pop3-login { inet_listener pop3 { port = 110 } inet_listener pop3s { port = 995 ssl = yes } }
service quota-warning { executable = script /usr/local/bin/quota-warning.sh unix_listener quota-warning { user = vmail } user = vmail }
ssl = required
ssl_ca =

Hello Dovecot users,

Here's the definitive 0.4.16 release. There were no changes since the release candidate. The reported replication issues are still open, since we haven't been able to reproduce them so far.

Changelog v0.4.16:

* Part of the Sieve extprograms implementation was moved to Dovecot, which means that this release depends on Dovecot v2.2.26+.
* ManageSieve: The PUTSCRIPT command now allows uploading empty Sieve scripts. There was really no good reason to disallow doing that.
+ Sieve vnd.dovecot.report extension:
  + Added a Dovecot-Reporting-User field to the report body, which contains the e-mail address of the user sending the report.
  + Added support for configuring the "From:" address used in the report.
+ LDA sieve plugin: Implemented support for a "discard script" that is run when the message is going to be discarded. This allows doing something other than throwing the message away for good.
+ Sieve vnd.dovecot.environment extension: Added vnd.dovecot.config.* environment items. These environment items map to sieve_env_* settings from the plugin {} section in the configuration. Such values can of course also be returned from userdb.
+ Sieve vacation extension: Use the Microsoft X-Auto-Response-Suppress header to prevent unwanted responses from and to (older) Microsoft products.
+ ManageSieve: Added rawlog_dir setting to store ManageSieve traffic logs. This replaces at least partially the rawlog plugin (mimics the similar IMAP/POP3 change).
- doveadm sieve plugin: synchronization: Prevent setting file timestamps to unix epoch time. This occurred when Dovecot passed the timestamp as 'unknown' during synchronization.
- Sieve extprograms plugin: Fixed spurious '+' sometimes returned at the end of socket-based program output.
- imapsieve plugin: Fixed crash occurring in specific situations.
- Performed various fixes based on static analysis and Clang warnings.

The release is available as follows:

http://pigeonhole.dovecot.org/releases/2.2/dovecot-2.2-pigeonhole-0.4.16.tar.gz
http://pigeonhole.dovecot.org/releases/2.2/dovecot-2.2-pigeonhole-0.4.16.tar.gz.sig

Refer to http://pigeonhole.dovecot.org and the Dovecot v2.x wiki for more information.

Have fun testing this release and don't hesitate to notify me when there are any problems.
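The "discard script" item above can be sketched as follows. This is an illustration only: it assumes the sieve_discard plugin setting this feature introduces, and the script path and folder name are example values.

```
# conf.d/90-sieve.conf (sketch):
#   plugin {
#     sieve_discard = /etc/dovecot/sieve/discard.sieve
#   }
#
# /etc/dovecot/sieve/discard.sieve: instead of dropping the discarded
# message for good, file it into an "Expunged" folder (example name).
require "fileinto";
require "mailbox";
fileinto :create "Expunged";
```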
Regards,

-- Stephan Bosch stephan at rename-it.nl

From sven_roellig at yahoo.de Mon Oct 31 09:17:34 2016 From: sven_roellig at yahoo.de (Sven Roellig) Date: Mon, 31 Oct 2016 09:17:34 +0000 (UTC) Subject: doveadm stats top crash References: <1972723913.1850886.1477905454215.ref@mail.yahoo.com> Message-ID: <1972723913.1850886.1477905454215@mail.yahoo.com>

Hi, when I start doveadm stats top it crashes:

Panic: key not found from hash
Error: Raw backtrace: /usr/lib/dovecot/libdovecot.so.0(+0x9560e) [0x7fe0e5e5c60e] -> /usr/lib/dovecot/libdovecot.so.0(+0x95688) [0x7fe0e5e5c688] -> /usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7fe0e5df496e] -> doveadm(+0x25290) [0x7fe0e6abb290] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handle_timeouts+0xfb) [0x7fe0e5e7104b] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0xc3) [0x7fe0e5e72643] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x25) [0x7fe0e5e71265] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x30) [0x7fe0e5e71400] -> doveadm(+0x256a1) [0x7fe0e6abb6a1] -> doveadm(doveadm_cmd_ver2_to_cmd_wrapper+0x23a) [0x7fe0e6acf8aa] -> doveadm(doveadm_cmd_run_ver2+0x560) [0x7fe0e6ad0600] -> doveadm(doveadm_cmd_try_run_ver2+0x37) [0x7fe0e6ad0657] -> doveadm(main+0x1e4) [0x7fe0e6ab07c4] -> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7fe0e5a3db45] -> doveadm(+0x1aba8) [0x7fe0e6ab0ba8]
Aborted

Sven

From mrobti at insiberia.net Mon Oct 31 09:19:43 2016 From: mrobti at insiberia.net (mrobti at insiberia.net) Date: Mon, 31 Oct 2016 02:19:43 -0700 Subject: Safe to run migration on live/production server? Message-ID:

I have an old mbox-formatted INBOX and folders that need to be converted to a Maildir. Can I run the dsync command while the server is live and Dovecot LMTP is delivering new mails to the same user? Will it be able to preserve UIDs and not confuse any indexes? I started the job with deliveries disabled so the conversion could complete, but it's taking too long.
I want to know if I can re-enable mail deliveries without causing problems. I see it building a massive number of files in the user's Maildir/tmp folder, which I presume is normal.

By the way, what's the size multiplier for mbox to maildir with many small emails of only a couple of KB each? I'm finding maildir takes a lot more disk space in this case.

From mail at tomsommer.dk Mon Oct 31 10:01:41 2016 From: mail at tomsommer.dk (Tom Sommer) Date: Mon, 31 Oct 2016 11:01:41 +0100 Subject: Errors with count:User quota and NFS Message-ID: <3c824d592f58b8922de0e810c168f508@tomsommer.dk>

I upgraded to 2.2.26.0 and enabled count as the quota backend, expecting the recent fixes to let me use it. However, I get the following errors:

Oct 31 10:52:13 imap(xxxx at xxxx.xx): Error: Transaction log file /mnt/nfs/xxxx.xx/xxx/indexes/dovecot.list.index.log: marked corrupted
Oct 31 10:52:15 imap(xxx at xxxx.xx): Error: Transaction log file /mnt/nfs/xxxx.xx/xxx/indexes/dovecot.list.index.log: marked corrupted
Oct 31 10:52:37 imap(xxx at xxxx.xx): Warning: Locking transaction log file /mnt/nfs/xxx.xx/xxx/indexes/dovecot.list.index.log took 31 seconds (syncing)
Oct 31 10:52:37 imap(xxx at xxx.xx): Warning: Locking transaction log file /mnt/nfs/xxx.xx/xxx/indexes/dovecot.list.index.log took 31 seconds (syncing)
Oct 31 10:52:43 imap(xxx at xxx.xx): Error: Transaction log file /mnt/nfs/xxx.xx/xx/indexes/dovecot.list.index.log: marked corrupted
Oct 31 10:52:52 imap(xxx at xxx.xx): Warning: Locking transaction log file /mnt/nfs/xxx.xx/xxx/indexes/dovecot.list.index.log took 31 seconds (syncing)
Oct 31 10:53:04 imap(xxx at xxx.x): Warning: Locking transaction log file /mnt/nfs/xxx.xx/xxx/indexes/dovecot.list.index.log took 60 seconds (syncing)
Oct 31 10:53:06 imap(xxx at xxx.xx): Warning: Locking transaction log file /mnt/nfs/xxx.xxx/xxx/indexes/dovecot.list.index.log took 31 seconds (syncing)

(all different accounts)

If I disable count as the backend, there are no errors.
I'm running my mail storage on NFS, so I suspect the errors are due to locking? So is there no way to run count as the quota backend with NFS?

Thanks.

-- Tom

From mail at tomsommer.dk Mon Oct 31 10:45:55 2016 From: mail at tomsommer.dk (Tom Sommer) Date: Mon, 31 Oct 2016 11:45:55 +0100 Subject: Errors with count:User quota and NFS In-Reply-To: <3c824d592f58b8922de0e810c168f508@tomsommer.dk> References: <3c824d592f58b8922de0e810c168f508@tomsommer.dk> Message-ID:

On 2016-10-31 11:01, Tom Sommer wrote: > I upgraded to 2.2.26.0 and enabled count as quota backend, expecting > the recent fixes to allow me to use the backend, however I get the > following errors:

It just occurred to me that the reason for the locking/errors may be that these are big mailboxes being recalculated?

--- Tom

From tanstaafl at libertytrek.org Mon Oct 31 11:11:56 2016 From: tanstaafl at libertytrek.org (Tanstaafl) Date: Mon, 31 Oct 2016 07:11:56 -0400 Subject: Server migration In-Reply-To: References: <1879ea04-a29c-bf04-197c-4f8ffc0bf9bc@dovecot.fi> <7109c6da-c5be-9a95-736a-2a6c840285ed@libertytrek.org> <32af9379-7a4f-b61e-ec41-5c63e795a6dc@libertytrek.org> Message-ID: <4ab2b970-8c9f-5d44-c247-d44a89f10ab7@libertytrek.org>

On 10/30/2016 5:32 AM, Sami Ketola wrote: > On 28 Oct 2016, at 16.54, Tanstaafl wrote: >> Oh... I thought the --useuid option eliminated this problem? >> >> https://imapsync.lamiral.info/FAQ.d/FAQ.Duplicates.txt > It does not. There is no option at the IMAP level to set the UID. > > In this case --useuid seems to keep track of source:uid -> dest:uid > pairs across multiple syncs and uses UID numbers to avoid syncing mails > as duplicates instead of using headers to do that.

Ok, interesting. So... how does dsync do it? Or would it only work between two Dovecot servers? I'm interested in migrating from other servers (Office 365 in one case).
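The Migration/Dsync wiki page linked earlier in this thread covers this case: dsync can preserve UIDs because it writes to the destination's own mail store directly rather than appending over IMAP, and via the imapc backend it can read from any IMAP server, not just Dovecot. A command sketch, with the host, user, and password handling as placeholders:

```
# Sketch: migrate one user from a remote IMAP server (e.g. Office 365)
# into the local Dovecot store using the imapc backend.
doveadm -o imapc_host=outlook.office365.com \
        -o imapc_user=someuser@example.com \
        -o imapc_password='***' \
        -o imapc_ssl=imaps \
        backup -R -u someuser@example.com imapc:
```

See http://wiki2.dovecot.org/Migration/Dsync for the full recipe and the imapc settings your source server may additionally need.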
Thanks, Charles

From s.hanselman at datagatesystems.com Mon Oct 31 19:35:02 2016 From: s.hanselman at datagatesystems.com (Stephen Hanselman) Date: Mon, 31 Oct 2016 12:35:02 -0700 Subject: IP Addresses Message-ID: <067901d233ad$e00b2030$a0216090$@datagatesystems.com>

Good morning,

Can someone point me to the area in Dovecot that deals with incoming IP addresses? Specifically, I want to determine whether it is possible to "spoof" the address, or whether the address I see in the headers is the actual address that made the connection request (hopefully it is).

Thank you, Steve Hanselman

-------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5820 bytes Desc: not available URL:

From tss at iki.fi Mon Oct 31 21:04:36 2016 From: tss at iki.fi (Timo Sirainen) Date: Mon, 31 Oct 2016 23:04:36 +0200 Subject: Errors with count:User quota and NFS In-Reply-To: <3c824d592f58b8922de0e810c168f508@tomsommer.dk> References: <3c824d592f58b8922de0e810c168f508@tomsommer.dk> Message-ID:

On 31 Oct 2016, at 12:01, Tom Sommer wrote: > > I upgraded to 2.2.26.0 and enabled count as quota backend, expecting the recent fixes to allow me to use the backend, however I get the following errors: > > Oct 31 10:52:13 imap(xxxx at xxxx.xx): Error: Transaction log file /mnt/nfs/xxxx.xx/xxx/indexes/dovecot.list.index.log: marked corrupted > Oct 31 10:52:15 imap(xxx at xxxx.xx): Error: Transaction log file /mnt/nfs/xxxx.xx/xxx/indexes/dovecot.list.index.log: marked corrupted

These mean that something marked the index as corrupted. There really should be another error message logged about what did it and why.
> Oct 31 10:52:37 imap(xxx at xxxx.xx): Warning: Locking transaction log file /mnt/nfs/xxx.xx/xxx/indexes/dovecot.list.index.log took 31 seconds (syncing) > Oct 31 10:52:37 imap(xxx at xxx.xx): Warning: Locking transaction log file /mnt/nfs/xxx.xx/xxx/indexes/dovecot.list.index.log took 31 seconds (syncing)

This just means something is being slow. Not necessarily a problem, although it could also indicate a deadlock. Is this Maildir? Did you say you were using lock_method=dotlock?

From jean.francois.pion at free.fr Mon Oct 31 15:49:56 2016 From: jean.francois.pion at free.fr (jean francois pion) Date: Mon, 31 Oct 2016 16:49:56 +0100 Subject: mail relay for local network Message-ID: <2c887d44-ad59-ec7b-6a64-619b0b85cb0c@free.fr>

Hello, I'm quite a newbie to mail server installation and administration (you have to start one day!). I would like to use a Raspberry Pi to build a system that fetches the mail from my different mail accounts via POP, stores it on the Raspberry Pi, and lets me read it with my Thunderbird client from the different computers on my local network. No need to access it via the internet; it is for local use only. The purpose is to avoid my ISP mailbox getting full and losing mails (quite a lot of mail traffic). No need for a spam killer; I've already got what I want on the computers. No need for an SMTP relay; the computers can reach the ISP's SMTP server. Is Dovecot able to do that? Thank you, and sorry for the bad English.

--
JF Pion

*When I went to school, they asked me what I wanted to be when I grew up. I answered: "Happy."* They told me I hadn't understood the question; I answered that they hadn't understood life. John Lennon

Electronics projects for modelling: http://jean.francois.pion.free.fr
The electric flight site: http://electrofly.free.fr/

--- This email has been checked for viruses by Avast antivirus software. https://www.avast.com/antivirus
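For the setup described above: Dovecot itself stores and serves mail (IMAP/POP3), but it does not fetch mail from remote POP accounts; the usual pairing is a separate fetcher that hands each message to Dovecot's local delivery agent. A sketch with fetchmail, where the ISP host name, account names, and deliver path are placeholders, not verified values:

```
# ~/.fetchmailrc (sketch): pull mail from the ISP's POP server and
# hand each message to Dovecot's LDA for delivery into the local store
poll pop.example-isp.net protocol pop3
  user "remoteuser" password "secret" is localuser
  mda "/usr/lib/dovecot/deliver -d localuser"
```

Thunderbird on the LAN machines would then read the stored mail from the Raspberry Pi over IMAP, served by Dovecot.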