[Pdns-users] remote backend
Alexis Fidalgo
alzrck at gmail.com
Wed Nov 29 17:20:01 UTC 2023
Yes, I’m on it right now.
The data changes very often, MongoDB is on SSD, and I’m querying by _id, which is indexed; plus I’m using a MongoDB replica set (it’s supposed to be a benefit for reading).
But yes, I’m right now doing exactly what you suggested: writing an HTTP test harness in Go to put heavy load on the responder (a minimal sketch is below).
Let’s see what happens.
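For reference, this is roughly the load generator I have in mind. The /dnsapi/lookup/<qname>/<qtype> path follows the remote backend's HTTP connector convention; the base URL and query names are placeholders to adjust for the real setup:

package main

// A minimal load generator for the remote backend's HTTP endpoint.
// It hammers the lookup URL from many goroutines and logs failures
// and slow answers.

import (
	"fmt"
	"log"
	"net/http"
	"sync"
	"time"
)

func main() {
	const (
		workers  = 50
		requests = 2000
		base     = "http://127.0.0.1:8080/dnsapi/lookup" // placeholder backend address
	)
	qnames := []string{"example.com", "www.example.com"} // placeholders; use real names

	client := &http.Client{Timeout: 2 * time.Second}
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < requests; i++ {
				qname := qnames[i%len(qnames)]
				start := time.Now()
				resp, err := client.Get(fmt.Sprintf("%s/%s/ANY", base, qname))
				if err != nil {
					log.Printf("FAIL %s after %v: %v", qname, time.Since(start), err)
					continue
				}
				resp.Body.Close()
				if d := time.Since(start); d > 500*time.Millisecond {
					log.Printf("SLOW %s: %v", qname, d)
				}
			}
		}()
	}
	wg.Wait()
}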
Thanks again, guys, for your help and support on this.
Sent from a mobile device
> On 29 Nov 2023, at 11:53, Brian Candler <b.candler at pobox.com> wrote:
>
> On 29/11/2023 14:04, Alexis Fidalgo wrote:
>> So, by now, I don’t know why one query gets answered while another doesn’t (times out) and is then answered OK on a retry. (This is why I thought about speed and considered the Unix socket, but now I know it’s not that.)
>
> Put logging in your remote backend and show what queries it receives and how long it takes to respond to each one. Use these logs to check that the queries generated by PowerDNS are what you expect (it may make multiple requests for a single received DNS query).
>
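A wrapper like the following is probably enough to start with: a minimal sketch that logs every query the backend receives and how long it took to answer. The mux and path are stand-ins for whatever the backend already serves:

package main

// withTiming wraps the backend's existing HTTP handler so each
// request path and its latency end up in the log.

import (
	"log"
	"net/http"
	"time"
)

func withTiming(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		next.ServeHTTP(w, r)
		log.Printf("%s %s answered in %v", r.Method, r.URL.Path, time.Since(start))
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/dnsapi/", func(w http.ResponseWriter, r *http.Request) {
		// ... existing lookup logic goes here ...
	})
	log.Fatal(http.ListenAndServe(":8080", withTiming(mux)))
}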
> You can also take PowerDNS entirely out of the problem by making a set of suitable test HTTP calls directly to your backend, for the same set of queries that PowerDNS would generate. If you can prove that your backend is taking too long to answer them (on the first attempt at least), then you know where to investigate.
>
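One simple way to drive that from Go is to time the same lookup twice back-to-back, straight against the backend. If the first (cold) attempt is slow and the immediate retry is fast, something below the backend is warming a cache. The URL is a placeholder:

package main

// Time the same query on a first attempt and on an immediate retry
// to compare cold vs. warm latency.

import (
	"fmt"
	"net/http"
	"time"
)

func timedGet(client *http.Client, url string) (time.Duration, error) {
	start := time.Now()
	resp, err := client.Get(url)
	if err != nil {
		return time.Since(start), err
	}
	resp.Body.Close()
	return time.Since(start), nil
}

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	url := "http://127.0.0.1:8080/dnsapi/lookup/example.com/ANY" // placeholder
	for attempt := 1; attempt <= 2; attempt++ {
		d, err := timedGet(client, url)
		fmt.Printf("attempt %d: %v (err: %v)\n", attempt, d, err)
	}
}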
> For example, it might be that MongoDB is doing a lot of slow disk seeks (is it spinning rust or SSD?) but once it has the answer, everything it needs is cached in RAM so it's much quicker on the second attempt. Or maybe it's not indexed properly. You really need to drill down further to prove or disprove that idea.
>
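To separate MongoDB latency from the HTTP layer, a direct timed FindOne by _id is enough. This sketch assumes the official go.mongodb.org/mongo-driver (v1) API; the URI, database, collection and key names are placeholders:

package main

// Time a FindOne by _id straight against MongoDB, bypassing the
// backend's HTTP layer, to see where the latency actually lives.

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
	"go.mongodb.org/mongo-driver/mongo/readpref"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Read from a secondary, since the replica set is meant to help reads.
	opts := options.Client().
		ApplyURI("mongodb://localhost:27017"). // placeholder URI
		SetReadPreference(readpref.Secondary())
	client, err := mongo.Connect(ctx, opts)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	coll := client.Database("dns").Collection("records") // placeholders

	var doc bson.M
	start := time.Now()
	err = coll.FindOne(ctx, bson.M{"_id": "example.com"}).Decode(&doc)
	fmt.Printf("FindOne took %v (err: %v)\n", time.Since(start), err)
}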
> If you find that MongoDB is the bottleneck and can't be tuned, then there are other options. For example, if this database doesn't change very often, then you could write it out to a CDB file:
>
> https://cr.yp.to/cdb.html
> https://en.wikipedia.org/wiki/Cdb_(software)
>
> This is optimised for very fast lookups with minimal seeking, and can be built in a single pass - but it can't be modified, so you'd have to regenerate the whole file periodically. Also, it has a 4 GB size limit, which is probably an issue here (limiting you to an average of about 14 bytes per key/value pair), so you may need to split the data across multiple files.
>
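As a rough illustration of the splitting idea, here is a sketch that shards keys across several CDB files using a stdlib FNV hash. It assumes the github.com/colinmarc/cdb package (one of several Go CDB libraries), so verify that library's actual API before relying on this:

package main

// Regenerate the dataset as several CDB files to stay under the 4 GB
// per-file limit, picking the shard for each key with an FNV hash.
// Lookups would hash the queried name the same way to choose a file.

import (
	"fmt"
	"hash/fnv"
	"log"

	"github.com/colinmarc/cdb"
)

const shards = 8

func shardOf(key string) int {
	h := fnv.New32a()
	h.Write([]byte(key))
	return int(h.Sum32() % shards)
}

func main() {
	writers := make([]*cdb.Writer, shards)
	for i := range writers {
		w, err := cdb.Create(fmt.Sprintf("records-%d.cdb", i))
		if err != nil {
			log.Fatal(err)
		}
		writers[i] = w
	}

	// In reality the records would come from a full scan of MongoDB;
	// placeholder data here.
	records := map[string]string{"example.com": "192.0.2.1"}
	for k, v := range records {
		if err := writers[shardOf(k)].Put([]byte(k), []byte(v)); err != nil {
			log.Fatal(err)
		}
	}
	for _, w := range writers {
		if _, err := w.Freeze(); err != nil { // finalize the file
			log.Fatal(err)
		}
	}
}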
> A suitably-indexed Postgres table with 300 million entries is big but not impossible, and PowerDNS could query it directly.
>
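For completeness, a sketch of the kind of direct, indexed lookup that would hit such a table, using database/sql with the github.com/lib/pq driver. The table, column names and DSN are placeholders; a plain index on the name column (e.g. CREATE INDEX ON records (name);) is what keeps 300 million rows workable:

package main

// Time a single indexed lookup against Postgres, of the kind
// PowerDNS's generic SQL backend would issue.

import (
	"database/sql"
	"fmt"
	"log"
	"time"

	_ "github.com/lib/pq"
)

func main() {
	db, err := sql.Open("postgres", "postgres://user:pass@localhost/dns?sslmode=disable") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var content string
	start := time.Now()
	err = db.QueryRow("SELECT content FROM records WHERE name = $1", "example.com").Scan(&content)
	fmt.Printf("lookup took %v (content=%q, err=%v)\n", time.Since(start), content, err)
}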