[Pdns-users] PDNS 2.9.21 using LOTS of virtual memory...
Stephen Manchester
smanches at craftyspace.com
Thu Sep 20 22:28:48 UTC 2007
Here is the output of pmap. It shows the problem as a series of
10 MB allocations (there is a note on that repeating pattern after the
listing). I chopped out a big section that was completely redundant.
21210: /usr/local/pdns/sbin/pdns_server-instance --daemon --guardian=yes
Address Kbytes Mode Offset Device Mapping
00111000 36 r-x-- 0000000000000000 008:00003 libnss_files-2.3.4.so
0011a000 4 r---- 0000000000008000 008:00003 libnss_files-2.3.4.so
0011b000 4 rw--- 0000000000009000 008:00003 libnss_files-2.3.4.so
00348000 84 r-x-- 0000000000000000 008:00003 ld-2.3.4.so
0035d000 4 r---- 0000000000015000 008:00003 ld-2.3.4.so
0035e000 4 rw--- 0000000000016000 008:00003 ld-2.3.4.so
00366000 1172 r-x-- 0000000000000000 008:00003 libc-2.3.4.so
0048b000 8 r---- 0000000000124000 008:00003 libc-2.3.4.so
0048d000 8 rw--- 0000000000126000 008:00003 libc-2.3.4.so
0048f000 8 rw--- 000000000048f000 000:00000 [ anon ]
00493000 8 r-x-- 0000000000000000 008:00003 libdl-2.3.4.so
00495000 4 r---- 0000000000001000 008:00003 libdl-2.3.4.so
00496000 4 rw--- 0000000000002000 008:00003 libdl-2.3.4.so
00499000 132 r-x-- 0000000000000000 008:00003 libm-2.3.4.so
004ba000 4 r---- 0000000000020000 008:00003 libm-2.3.4.so
004bb000 4 rw--- 0000000000021000 008:00003 libm-2.3.4.so
004e5000 56 r-x-- 0000000000000000 008:00003 libpthread-2.3.4.so
004f3000 4 r---- 000000000000d000 008:00003 libpthread-2.3.4.so
004f4000 4 rw--- 000000000000e000 008:00003 libpthread-2.3.4.so
004f5000 8 rw--- 00000000004f5000 000:00000 [ anon ]
006b4000 72 r-x-- 0000000000000000 008:00003 libnsl-2.3.4.so
006c6000 4 r---- 0000000000011000 008:00003 libnsl-2.3.4.so
006c7000 4 rw--- 0000000000012000 008:00003 libnsl-2.3.4.so
006c8000 8 rw--- 00000000006c8000 000:00000 [ anon ]
0074f000 28 r-x-- 0000000000000000 008:00003 libgcc_s-3.4.6-20060404.so.1
00756000 4 rw--- 0000000000007000 008:00003 libgcc_s-3.4.6-20060404.so.1
0079b000 768 r-x-- 0000000000000000 008:00003 libstdc++.so.6.0.3
0085b000 20 rw--- 00000000000bf000 008:00003 libstdc++.so.6.0.3
00860000 24 rw--- 0000000000860000 000:00000 [ anon ]
00872000 20 r-x-- 0000000000000000 008:00003 libcrypt-2.3.4.so
00877000 4 r---- 0000000000004000 008:00003 libcrypt-2.3.4.so
00878000 4 rw--- 0000000000005000 008:00003 libcrypt-2.3.4.so
00879000 156 rw--- 0000000000879000 000:00000 [ anon ]
08048000 2048 r-x-- 0000000000000000 008:00003 pdns_server
08248000 1080 rw--- 00000000001ff000 008:00003 pdns_server
08356000 8 rw--- 0000000008356000 000:00000 [ anon ]
09bf3000 876 rw--- 0000000009bf3000 000:00000 [ anon ]
8f1cb000 4 ----- 000000008f1cb000 000:00000 [ anon ]
8f1cc000 10240 rw--- 000000008f1cc000 000:00000 [ anon ]
8fbcc000 4 ----- 000000008fbcc000 000:00000 [ anon ]
8fbcd000 10240 rw--- 000000008fbcd000 000:00000 [ anon ]
905cd000 4 ----- 00000000905cd000 000:00000 [ anon ]
...
b6b57000 4 ----- 00000000b6b57000 000:00000 [ anon ]
b6b58000 10240 rw--- 00000000b6b58000 000:00000 [ anon ]
b7558000 4 ----- 00000000b7558000 000:00000 [ anon ]
b7559000 10252 rw--- 00000000b7559000 000:00000 [ anon ]
bfeac000 1360 rw--- 00000000bfeac000 000:00000 [ stack ]
ffffe000 4 ----- 0000000000000000 000:00000 [ anon ]
mapped: 675972K writeable/private: 669716K shared: 0K
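A note on the repeating entries: each pair of a 4 KB '-----' (no-access)
page immediately followed by a 10240 KB rw anonymous region looks like the
layout glibc's NPTL uses for a thread stack plus its guard page, with 10 MB
matching the typical 'ulimit -s' of 10240 KB on a CentOS 4 box. If that
reading is right, each pair is one thread's stack reservation, which counts
toward virtual size even though only the stack pages actually touched
become resident. Purely as an illustration (my own sketch, not PDNS code),
this reproduces the pattern:

/* Illustration only (not PDNS code): a thread created with default
 * attributes reserves RLIMIT_STACK bytes of stack (10240 KB here) plus a
 * 4 KB PROT_NONE guard page -- the same 4 K + 10240 K pairs as in the
 * pmap listing above.  Virtual size grows by ~10 MB per thread, while
 * RES only grows as stack pages are actually touched. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg)
{
    (void)arg;
    pause();                     /* keep the thread and its stack alive */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    int i;

    /* Default attributes: each thread's stack reservation is RLIMIT_STACK,
     * visible in pmap as a 4 KB guard page plus a 10240 KB rw anonymous
     * region.  A smaller reservation could be requested per thread with
     * pthread_attr_setstacksize(). */
    for (i = 0; i < 4; i++)
        pthread_create(&tid, NULL, worker, NULL);

    printf("run 'pmap %d' to see the per-thread stack mappings\n",
           (int)getpid());
    pause();
    return 0;
}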
Current status page information from the server....
Uptime: 7.84 hours
Queries/second, 1, 5, 10 minute averages: 0.272, 0.199, 0.195. Max queries/second: 1.59
Cache hitrate, 1, 5, 10 minute averages: 30%, 31%, 35%
Backend query cache hitrate, 1, 5, 10 minute averages: 16%, 27%, 31%
Backend query load, 1, 5, 10 minute averages: 0.386, 0.294, 0.271. Max queries/second: 0.866
Total queries: 5639. Question/answer latency: 0ms
corrupt-packets 1 Number of corrupt packets received
deferred-cache-inserts 61 Amount of cache inserts that were deferred because of maintenance
deferred-cache-lookup 17 Amount of cache lookups that were deferred because of maintenance
latency 0 Average number of microseconds needed to answer a question
packetcache-hit 1850
packetcache-miss 3786
packetcache-size 5
qsize-q 1 Number of questions waiting for database attention
query-cache-hit 4443 Number of hits on the query cache
query-cache-miss 8066 Number of misses on the query cache
recursing-answers 0 Number of recursive answers sent out
recursing-questions 0 Number of questions sent to recursor
servfail-packets 0 Number of times a server-failed packet was sent out
tcp-answers 0 Number of answers sent out over TCP
tcp-queries 0 Number of TCP queries received
timedout-packets 78 Number of packets which weren't answered within timeout set
udp-answers 5505 Number of answers sent out over UDP
udp-queries 5639 Number of UDP queries received
udp4-answers 5505 Number of IPv4 answers sent out over UDP
udp4-queries 5638 Number of IPv4 UDP queries received
udp6-answers 0 Number of IPv6 answers sent out over UDP
udp6-queries 0 Number of IPv6 UDP queries received
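For reference, the cache settings described in the quoted message below
would, as far as I can tell, correspond to pdns.conf lines along these
lines (option names assumed from the standard authoritative-server
settings; values match "backend cache 5 minutes, packet cache 1 minute"):

# assumed pdns.conf excerpt, not copied from the actual config
query-cache-ttl=300    # backend/query cache entries kept for 5 minutes
cache-ttl=60           # packet cache entries kept for 1 minute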
On Sep 20, 2007, at 3:07 PM, Stephen Manchester wrote:
> I was noticing its memory usage in top. Resident memory is usually
> less than 10k. It is also showing the high VM usage in
> /proc/meminfo. We've been having some OOM issues lately, and this
> caught my eye as not being right. Not that VM usage would directly
> cause an OOM condition, but it does seem like a bug: why would it
> need so much? There might be a related bug that actually causes all
> the memory to go resident at some strange time, which would cause
> an OOM condition.
>
> I did set the backend cache to 5 minutes and the packet cache to 1
> minute. Most of our records never change; we just add to them, so
> this seemed a good way to reduce load. All other config options are
> pretty much default.
>
> What would/could it be allocating that much virtual memory for?
>
>
> On Sep 20, 2007, at 2:54 PM, bert hubert wrote:
>
>> On Thu, Sep 20, 2007 at 02:52:50PM -0700, Stephen Manchester wrote:
>>> I've been running PDNS server on CentOS 4.4 for about 3 months now.
>>> I've noticed that PDNS allocates a LOT of virtual memory as it's
>>> running, and never frees it up. It can do this at an incredibly fast
>>> rate at times, as much as 2.9GB within 24 hours. Has anyone seen
>>> this before and know why it might be happening?
>>
>> Stephen,
>>
>> How do you measure its actual usage?
>>
>> The trick is to look at the 'RES' column in 'top', for example.
>>
>> Other measurements can be a tad confusing, I think due to the static
>> linking we employ in our RPMs. The Kees Monshouwer RPMs might be
>> better in this respect: ftp://ftp.monshouwer.com/pub/linux/pdns-server
>>
>> Bert
>> --
>> http://www.PowerDNS.com      Open source, database driven DNS Software
>> http://netherlabs.nl Open and Closed source services
>
> _______________________________________________
> Pdns-users mailing list
> Pdns-users at mailman.powerdns.com
> http://mailman.powerdns.com/mailman/listinfo/pdns-users
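As a footnote to Bert's point about RES above, here is a minimal sketch
(plain Linux /proc reading, nothing PDNS-specific) that prints the two
numbers in question: VmSize, the virtual size that pmap and /proc/meminfo
reflect, and VmRSS, the resident figure top shows as RES. For a running
pdns_server one would read /proc/<pid>/status instead of /proc/self/status.

/* Sketch: print VmSize (virtual) and VmRSS (resident) for this process. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];

    if (!f)
        return 1;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "VmSize:", 7) == 0 ||
            strncmp(line, "VmRSS:", 6) == 0)
            fputs(line, stdout);    /* kernel-formatted "VmSize: ... kB" */
    }
    fclose(f);
    return 0;
}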