
Mailing List Archive: Varnish: Bugs

#897: sess_mem "leak" on hyper-threaded cpu

 

 



varnish-bugs at varnish-cache

Apr 8, 2011, 5:36 PM

Post #1 of 13 (1063 views)
#897: sess_mem "leak" on hyper-threaded cpu

#897: sess_mem "leak" on hyper-threaded cpu
-------------------------------------------------+--------------------------
Reporter: askalski | Type: defect
Status: new | Priority: normal
Milestone: | Component: build
Version: trunk | Severity: major
Keywords: sess_mem leak n_sess race condition |
-------------------------------------------------+--------------------------
There is a race condition on the n_sess statistic, which causes the
counter to drift upward to ridiculously high levels:

{{{
100000 . . N struct sess_mem
867438 . . N struct sess
}}}

Because SES_Delete() uses the n_sess counter to decide whether to
pre-allocate additional workspaces (sess_mem), this eventually leads
Varnish to allocate session_max of them (100000 by default), which
consumes an excessive amount of memory.

{{{
97a1d998 (Poul-Henning Kamp 2010-06-17 08:47:19 +0000 220)    VSC_main->n_sess++;  /* XXX: locking ? */
...
97a1d998 (Poul-Henning Kamp 2010-06-17 08:47:19 +0000 261)    VSC_main->n_sess--;  /* XXX: locking ? */
...
28e7319e (Poul-Henning Kamp 2010-01-26 21:58:30 +0000 285)    /* Try to precreate some ses-mem so the acceptor will not have to */
97a1d998 (Poul-Henning Kamp 2010-06-17 08:47:19 +0000 286)    if (VSC_main->n_sess_mem < VSC_main->n_sess + 10) {
28e7319e (Poul-Henning Kamp 2010-01-26 21:58:30 +0000 287)            sm = ses_sm_alloc();
28e7319e (Poul-Henning Kamp 2010-01-26 21:58:30 +0000 288)            if (sm != NULL) {
28e7319e (Poul-Henning Kamp 2010-01-26 21:58:30 +0000 289)                    ses_setup(sm);
28e7319e (Poul-Henning Kamp 2010-01-26 21:58:30 +0000 290)                    Lck_Lock(&ses_mem_mtx);
28e7319e (Poul-Henning Kamp 2010-01-26 21:58:30 +0000 291)                    VTAILQ_INSERT_HEAD(&ses_free_mem[1 - ses_qp], sm, list);
28e7319e (Poul-Henning Kamp 2010-01-26 21:58:30 +0000 292)                    Lck_Unlock(&ses_mem_mtx);
28e7319e (Poul-Henning Kamp 2010-01-26 21:58:30 +0000 293)            }
28e7319e (Poul-Henning Kamp 2010-01-26 21:58:30 +0000 294)    }
}}}

The bug only seems to manifest itself on machines with hyper-threaded
CPUs. I was able to reproduce the issue on my laptop (Core i7, 2 cores +
HT = 4 virtual cores) by hitting Varnish with heavy concurrency (ab
-c128).

{{{
# Test 1: All virtual cores active - Bug exists
$ egrep 'core id' /proc/cpuinfo
core id : 0
core id : 2
core id : 0
core id : 2

# Test 2: Two virtual cores disabled, HT disabled - No bug
$ egrep 'core id' /proc/cpuinfo
core id : 0
core id : 2

# Test 3: Two virtual cores disabled, HT enabled - Bug exists
$ egrep 'core id' /proc/cpuinfo
core id : 0
core id : 0
}}}

Locking stat_mtx solves the problem.

--
Ticket URL: <http://www.varnish-cache.org/trac/ticket/897>
Varnish <http://varnish-cache.org/>
The Varnish HTTP Accelerator

_______________________________________________
varnish-bugs mailing list
varnish-bugs [at] varnish-cache
http://www.varnish-cache.org/lists/mailman/listinfo/varnish-bugs


varnish-bugs at varnish-cache

Apr 9, 2011, 8:59 PM

Post #2 of 13 (1039 views)
Re: #897: sess_mem "leak" on hyper-threaded cpu [In reply to]

#897: sess_mem "leak" on hyper-threaded cpu
-------------------------------------------------+--------------------------
Reporter: askalski | Type: defect
Status: new | Priority: normal
Milestone: | Component: build
Version: trunk | Severity: major
Keywords: sess_mem leak n_sess race condition |
-------------------------------------------------+--------------------------

Comment(by askalski):

I wrote a short program to test the increment/decrement race condition,
and ran it on a dual-socket quad-core machine with HT. The program runs
through a series of 2-thread trials, with each trial pinning the thread
CPU affinities to a different pair (e.g. thread A runs on cpu0, thread B
runs on cpu1).

{{{
for (i = arg->iterations; i > 0; --i) {
++counter;
--counter;
}
}}}

Cpu0 and cpu8 share the same physical id and core id.

{{{
$ ./racetest
cpu0 vs cpu0 (1000000000 iterations)... counter drift = 2
cpu0 vs cpu1 (1000000000 iterations)... counter drift = 318
cpu0 vs cpu2 (1000000000 iterations)... counter drift = 709
cpu0 vs cpu3 (1000000000 iterations)... counter drift = 313
cpu0 vs cpu4 (1000000000 iterations)... counter drift = 690
cpu0 vs cpu5 (1000000000 iterations)... counter drift = 336
cpu0 vs cpu6 (1000000000 iterations)... counter drift = 578
cpu0 vs cpu7 (1000000000 iterations)... counter drift = 359
cpu0 vs cpu8 (1000000000 iterations)... counter drift = 13720798
cpu0 vs cpu9 (1000000000 iterations)... counter drift = 325
cpu0 vs cpu10 (1000000000 iterations)... counter drift = 581
cpu0 vs cpu11 (1000000000 iterations)... counter drift = 361
cpu0 vs cpu12 (1000000000 iterations)... counter drift = 685
cpu0 vs cpu13 (1000000000 iterations)... counter drift = 316
cpu0 vs cpu14 (1000000000 iterations)... counter drift = 637
cpu0 vs cpu15 (1000000000 iterations)... counter drift = 337
}}}

--
Ticket URL: <http://www.varnish-cache.org/trac/ticket/897#comment:1>


varnish-bugs at varnish-cache

Apr 11, 2011, 3:21 PM

Post #3 of 13 (1027 views)
Re: #897: sess_mem "leak" on hyper-threaded cpu [In reply to]

#897: sess_mem "leak" on hyper-threaded cpu
-------------------------------------------------+--------------------------
Reporter: askalski | Type: defect
Status: new | Priority: normal
Milestone: | Component: build
Version: trunk | Severity: major
Keywords: sess_mem leak n_sess race condition |
-------------------------------------------------+--------------------------

Comment(by askalski):

I did some testing to figure out how this bug relates to abnormally high
memory usage on production Varnish servers (the original issue that got me
looking into this). I generated synthetic load against a varnishd
(malloc,64M) running on my laptop (Ubuntu 10.10, kernel 2.6.35) until the
n_sess_mem counter maxed out at 100000. Varnish memory usage reached 2GB
resident.

{{{
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
nobody 23388 5.7 52.7 8369944 2032540 ? Sl 15:07 9:24
/usr/sbin/varnishd \
-P /var/run/varnishd.pid -a :6081 -T localhost:6082 \
-f /etc/varnish/default.vcl -S /etc/varnish/secret \
-s malloc,64M
}}}

I then ran a program which attached to the varnishd process via ptrace,
scanned its mapped memory for sess_mem objects (magic number 0x555859c5),
then tallied up the number of dirty pages belonging to each (scanning 17
pages per struct: 3288-byte sess_mem/http plus 65536-byte workspace). The
scanner found all 100000 of the structs. (Note: the 64MB of SMA-cached
data had all expired by the time I was able to scan the process memory;
I used short TTLs.)

{{{
sess_mem_count = 100000
dirty_pages = 504430
memory_used = 2017720kB (1970 MB)
}}}

The reason that so much session workspace memory was dirtied was not my
generated request/response load (the requests used relatively little of
the workspace). Rather, it was because the workspaces were allocated, by
malloc, at memory locations that had previously been dirtied by the SMA
stevedore (objects that had either expired or been LRU-evicted).
I have a few production machines where varnishd is using 6.5GB over
what SMA was configured to use. Unfortunately, I cannot perform the same
analysis on the memory of those processes, because ptrace() causes the
varnish child to exit on RHEL5/2.6.18 (the poll() in CLS_Poll returns
EINTR, which is interpreted as an error; kernel 2.6.24+ is able to
restart sys_poll() without userspace intervention).

Here's the post-mortem varnishstat:

{{{
client_conn 3846502 362.50 Client connections accepted
client_drop 96 0.01 Connection dropped, no sess/wrk
client_req 3813266 359.37 Client requests received
cache_hit 0 0.00 Cache hits
cache_hitpass 0 0.00 Cache hits for pass
cache_miss 185362 17.47 Cache misses
backend_conn 1833 0.17 Backend conn. success
backend_unhealthy 0 0.00 Backend conn. not attempted
backend_busy 0 0.00 Backend conn. too many
backend_fail 0 0.00 Backend conn. failures
backend_reuse 183098 17.26 Backend conn. reuses
backend_toolate 0 0.00 Backend conn. was closed
backend_recycle 183527 17.30 Backend conn. recycles
backend_unused 0 0.00 Backend conn. unused
fetch_head 0 0.00 Fetch head
fetch_length 0 0.00 Fetch with Length
fetch_chunked 185362 17.47 Fetch chunked
fetch_eof 0 0.00 Fetch EOF
fetch_bad 0 0.00 Fetch had bad headers
fetch_close 0 0.00 Fetch wanted close
fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed
fetch_zero 0 0.00 Fetch zero len
fetch_failed 0 0.00 Fetch failed
n_sess_mem 100000 . N struct sess_mem
n_sess 100442 . N struct sess
n_object 0 . N struct object
n_vampireobject 0 . N unresurrected objects
n_objectcore 3 . N struct objectcore
n_objecthead 3 . N struct objecthead
n_smf 0 . N struct smf
n_smf_frag 0 . N small free smf
n_smf_large 0 . N large free smf
n_vbe_conn 1 . N struct vbe_conn
n_wrk 10 . N worker threads
n_wrk_create 502 0.05 N worker threads created
n_wrk_failed 0 0.00 N worker threads not created
n_wrk_max 1829 0.17 N worker threads limited
n_wrk_queue 0 0.00 N queued work requests
n_wrk_overflow 94955 8.95 N overflowed work requests
n_wrk_drop 303 0.03 N dropped work requests
n_backend 1 . N backends
n_expired 1001 . N expired objects
n_lru_nuked 184361 . N LRU nuked objects
n_lru_saved 0 . N LRU saved objects
n_lru_moved 1 . N LRU moved objects
n_deathrow 0 . N objects on deathrow
losthdr 0 0.00 HTTP header overflows
n_objsendfile 0 0.00 Objects sent with sendfile
n_objwrite 3808607 358.93 Objects sent with write
n_objoverflow 0 0.00 Objects overflowing workspace
s_sess 3846406 362.49 Total Sessions
s_req 3813266 359.37 Total Requests
s_pipe 0 0.00 Total pipe
s_pass 0 0.00 Total pass
s_fetch 185362 17.47 Total fetch
s_hdrbytes 877580576 82704.79 Total header bytes
s_bodybytes 13580918034 1279890.49 Total body bytes
sess_closed 3831452 361.08 Session Closed
sess_pipeline 0 0.00 Session Pipeline
sess_readahead 0 0.00 Session Read Ahead
sess_linger 185362 17.47 Session Linger
sess_herd 411446 38.78 Session herd
shm_records 129034970 12160.49 SHM records
shm_writes 16142937 1521.34 SHM writes
shm_flushes 25 0.00 SHM flushes due to overflow
shm_cont 1069825 100.82 SHM MTX contention
shm_cycles 37 0.00 SHM cycles through buffer
sm_nreq 0 0.00 allocator requests
sm_nobj 0 . outstanding allocations
sm_balloc 0 . bytes allocated
sm_bfree 0 . bytes free
sma_nreq 555085 52.31 SMA allocator requests
sma_nobj 0 . SMA outstanding allocations
sma_nbytes 0 . SMA outstanding bytes
sma_balloc 24435160288 . SMA bytes allocated
sma_bfree 24435160288 . SMA bytes free
sms_nreq 3627904 341.90 SMS allocator requests
sms_nobj 0 . SMS outstanding allocations
sms_nbytes 0 . SMS outstanding bytes
sms_balloc 1443905792 . SMS bytes allocated
sms_bfree 1443905792 . SMS bytes freed
backend_req 184726 17.41 Backend requests made
n_vcl 1 0.00 N vcl total
n_vcl_avail 1 0.00 N vcl available
n_vcl_discard 0 0.00 N vcl discarded
n_purge 1 . N total active purges
n_purge_add 1 0.00 N new purges added
n_purge_retire 0 0.00 N old purges deleted
n_purge_obj_test 0 0.00 N objects tested
n_purge_re_test 0 0.00 N regexps tested against
n_purge_dups 0 0.00 N duplicate purges removed
hcb_nolock 183710 17.31 HCB Lookups without lock
hcb_lock 185362 17.47 HCB Lookups with lock
hcb_insert 185360 17.47 HCB Inserts
esi_parse 0 0.00 Objects ESI parsed (unlock)
esi_errors 0 0.00 ESI parse errors (unlock)
accept_fail 0 0.00 Accept failures
client_drop_late 207 0.02 Connection dropped late
uptime 10611 1.00 Client uptime
backend_retry 0 0.00 Backend conn. retry
dir_dns_lookups 0 0.00 DNS director lookups
dir_dns_failed 0 0.00 DNS director failed lookups
dir_dns_hit 0 0.00 DNS director cached lookups hit
dir_dns_cache_full 0 0.00 DNS director full dnscache
fetch_1xx 0 0.00 Fetch no body (1xx)
fetch_204 0 0.00 Fetch no body (204)
fetch_304 0 0.00 Fetch no body (304)
}}}

--
Ticket URL: <http://varnish-cache.org/trac/ticket/897#comment:2>


varnish-bugs at varnish-cache

Apr 13, 2011, 9:52 AM

Post #4 of 13 (1029 views)
Re: #897: sess_mem "leak" on hyper-threaded cpu [In reply to]

#897: sess_mem "leak" on hyper-threaded cpu
-------------------------------------------------+--------------------------
Reporter: askalski | Type: defect
Status: new | Priority: normal
Milestone: | Component: build
Version: trunk | Severity: major
Keywords: sess_mem leak n_sess race condition |
-------------------------------------------------+--------------------------

Comment(by askalski):

Attached is a patch that solves the race condition without locking in the
acceptor thread (thanks to Mithrandir for pointing that out). It does so
by splitting the "n_sess" statistic into "n_sess_new" (no mutex) and
"n_sess_delete" (mutex, worker). The worker threads then calculate the
number of outstanding sessions by subtraction. I didn't bother checking
for counter overflow, because at a rate of 500+ million sessions per
second it would take over 1000 years for that to happen.

--
Ticket URL: <http://www.varnish-cache.org/trac/ticket/897#comment:3>


varnish-bugs at varnish-cache

Jun 14, 2011, 3:26 AM

Post #5 of 13 (993 views)
Re: #897: sess_mem "leak" on hyper-threaded cpu [In reply to]

#897: sess_mem "leak" on hyper-threaded cpu
----------------------+-----------------------------------------------------
Reporter: askalski | Owner: phk
Type: defect | Status: new
Priority: normal | Milestone:
Component: varnishd | Version: trunk
Severity: major | Keywords: sess_mem leak n_sess race condition
----------------------+-----------------------------------------------------
Changes (by phk):

* owner: => phk
* component: build => varnishd


--
Ticket URL: <http://www.varnish-cache.org/trac/ticket/897#comment:4>


varnish-bugs at varnish-cache

Jul 22, 2011, 12:16 AM

Post #6 of 13 (957 views)
Re: #897: sess_mem "leak" on hyper-threaded cpu [In reply to]

#897: sess_mem "leak" on hyper-threaded cpu
----------------------+-----------------------------------------------------
Reporter: askalski | Owner: phk
Type: defect | Status: new
Priority: normal | Milestone:
Component: varnishd | Version: trunk
Severity: major | Keywords: sess_mem leak n_sess race condition
----------------------+-----------------------------------------------------

Comment(by abienvenu):

I run a website serving one million pages per day, and I noticed "N struct
sess" grows steadily, day after day, when Varnish 2.1.5 runs on a
hyper-threaded server. The memory leak was about 200MB per day.

See this memory usage graph from Cacti (I restarted Varnish on July 12th).

[[Image(http://data.imagup.com/10/1125983924.png)]]

Based on askalski's earlier patch, I made a patch for Varnish 2.1.5, and
it works perfectly well.

I would really like these patches to make their way into the official
Varnish releases.

--
Ticket URL: <http://varnish-cache.org/trac/ticket/897#comment:5>


varnish-bugs at varnish-cache

Oct 28, 2011, 5:22 PM

Post #7 of 13 (830 views)
Re: #897: sess_mem "leak" on hyper-threaded cpu [In reply to]

#897: sess_mem "leak" on hyper-threaded cpu
----------------------+-----------------------------------------------------
Reporter: askalski | Owner: phk
Type: defect | Status: new
Priority: normal | Milestone:
Component: varnishd | Version: trunk
Severity: major | Keywords: sess_mem leak n_sess race condition
----------------------+-----------------------------------------------------

Comment(by JakaJancar):

I believe I'm affected by this. Varnish reaches 100000 sess and
sess_mem, at which point it's using 1 GB of memory and stops accepting
most new requests.

I'm seeing 500 connections/s on an 8-core HT-enabled machine, and I get
to 100k in ~15 minutes.
Is this solved in 3.0? Will it ever be in 2.1.6?

--
Ticket URL: <https://www.varnish-cache.org/trac/ticket/897#comment:6>


varnish-bugs at varnish-cache

Oct 28, 2011, 5:37 PM

Post #8 of 13 (835 views)
Re: #897: sess_mem "leak" on hyper-threaded cpu [In reply to]

#897: sess_mem "leak" on hyper-threaded cpu
----------------------+-----------------------------------------------------
Reporter: askalski | Owner: phk
Type: defect | Status: new
Priority: normal | Milestone:
Component: varnishd | Version: trunk
Severity: major | Keywords: sess_mem leak n_sess race condition
----------------------+-----------------------------------------------------

Comment(by JakaJancar):

Another post-mortem from me: http://pastebin.com/0usH0vyT

--
Ticket URL: <https://www.varnish-cache.org/trac/ticket/897#comment:7>


varnish-bugs at varnish-cache

Apr 14, 2012, 7:15 AM

Post #9 of 13 (722 views)
Re: #897: sess_mem "leak" on hyper-threaded cpu [In reply to]

#897: sess_mem "leak" on hyper-threaded cpu
----------------------+-----------------------------------------------------
Reporter: askalski | Owner: phk
Type: defect | Status: new
Priority: normal | Milestone:
Component: varnishd | Version: trunk
Severity: major | Keywords: sess_mem leak n_sess race condition
----------------------+-----------------------------------------------------

Comment(by mark):

I've ported the patch to 3.0.2 (attached), but it does not seem to fix a
session leak we're experiencing.

--
Ticket URL: <https://www.varnish-cache.org/trac/ticket/897#comment:8>


varnish-bugs at varnish-cache

Apr 23, 2012, 3:46 AM

Post #10 of 13 (719 views)
Re: #897: sess_mem "leak" on hyper-threaded cpu [In reply to]

#897: sess_mem "leak" on hyper-threaded cpu
----------------------+-----------------------------------------------------
Reporter: askalski | Owner: phk
Type: defect | Status: new
Priority: normal | Milestone:
Component: varnishd | Version: trunk
Severity: major | Keywords: sess_mem leak n_sess race condition
----------------------+-----------------------------------------------------

Comment(by martin):

For leaking sessions you might also want to try out the patch
8306db9a95c7b3022fdeee038b1e6973f46382f9. This is not applicable in trunk,
but this patch will go into the next 3.0 release.

--
Ticket URL: <https://www.varnish-cache.org/trac/ticket/897#comment:9>


varnish-bugs at varnish-cache

May 21, 2012, 3:35 AM

Post #11 of 13 (681 views)
Re: #897: sess_mem "leak" on hyper-threaded cpu [In reply to]

#897: sess_mem "leak" on hyper-threaded cpu
-------------------------------------------------+--------------------------
Reporter: askalski | Owner: phk
Type: defect | Status: closed
Priority: normal | Milestone:
Component: varnishd | Version: trunk
Severity: major | Resolution: fixed
Keywords: sess_mem leak n_sess race condition |
-------------------------------------------------+--------------------------
Changes (by martin):

* status: new => closed
* resolution: => fixed


Comment:

The relevant areas of this have been redesigned in trunk and will be part
of the next major release of Varnish.

For 3.0 this should not cause any major issues (potentially reaching
session_max a tiny bit faster than strictly necessary), so I am closing
this bug.

--
Ticket URL: <https://www.varnish-cache.org/trac/ticket/897#comment:10>


varnish-bugs at varnish-cache

Jul 3, 2012, 1:04 AM

Post #12 of 13 (629 views)
Re: #897: sess_mem "leak" on hyper-threaded cpu [In reply to]

#897: sess_mem "leak" on hyper-threaded cpu
-------------------------------------------------+--------------------------
Reporter: askalski | Owner: phk
Type: defect | Status: reopened
Priority: normal | Milestone:
Component: varnishd | Version: 3.0.2
Severity: major | Resolution:
Keywords: sess_mem leak n_sess race condition |
-------------------------------------------------+--------------------------
Changes (by martin):

* status: closed => reopened
* version: trunk => 3.0.2
* resolution: fixed =>


Comment:

I have now found a setup where they managed to hit session_max twice a
day because of this issue, so I guess this warrants reopening this bug.
Specific to this case is a very high connection rate, without HTTP
keep-alives.

--
Ticket URL: <https://www.varnish-cache.org/trac/ticket/897#comment:11>


varnish-bugs at varnish-cache

Jul 12, 2012, 12:46 AM

Post #13 of 13 (621 views)
Re: #897: sess_mem "leak" on hyper-threaded cpu [In reply to]

#897: sess_mem "leak" on hyper-threaded cpu
-------------------------------------------------+--------------------------
Reporter: askalski | Owner: phk
Type: defect | Status: closed
Priority: normal | Milestone:
Component: varnishd | Version: 3.0.2
Severity: major | Resolution: fixed
Keywords: sess_mem leak n_sess race condition |
-------------------------------------------------+--------------------------
Changes (by Martin Blix Grydeland <martin@…>):

* status: reopened => closed
* resolution: => fixed


Comment:

(In [596246ea3fc36847dc27d1672dd37a8fef817ac5]) Make n_sess be the
difference between in-use and released session objects.

This avoids a memory race on the n_sess counter, which can lead to
excessive session object allocation. Keeping the counters of in use
and released separate allows the acceptor to continue to run lockless.

Fixes: #897

--
Ticket URL: <https://www.varnish-cache.org/trac/ticket/897#comment:12>
