
Comments (5)

GoogleCodeExporter commented on August 17, 2024
I suspect that the real memory hog is the padding rather than the read buffer 
count. That addition was motivated by large, high-volume caches, which is the 
typical use case I tend to be exposed to. When we wrote Guava's cache, that 
became the default general-purpose application cache, so I shifted CLHM toward 
the remaining users who needed more raw performance than Guava's.

I think the padding may not be offering a big enough win to justify its cost, 
and it could be removed to restore balance for low-volume caches. Could you 
simply replace the PaddedAtomicReference usages with AtomicReference and check 
memory usage against that local SNAPSHOT? If that works, I'd prefer it over a 
system property.
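The suggested experiment amounts to dropping the padding fields. A minimal sketch of the padding pattern, assuming the usual subclass-of-AtomicReference shape (this is illustrative, not CLHM's actual field layout or field names):

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch (not CLHM's exact layout): the unused long fields
// inflate each instance so that two hot references are unlikely to land
// on the same cache line, trading memory for reduced false sharing.
class PaddedAtomicReference<V> extends AtomicReference<V> {
    long p0, p1, p2, p3, p4, p5, p6, p7, p8, p9, p10, p11, p12, p13;
}
```

The comparison run would simply construct `new AtomicReference<V>()` at those call sites instead and diff the heap usage between the two builds.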

The best long-term approach would be to make the number of read buffers grow 
when contention is detected. See Java 8's Striped64 for numeric sums as an 
example. I plan on implementing that approach soon in my Java 8 rewrite of 
Guava's cache [1]. I hadn't intended to implement it here too, but will if 
necessary.

[1] https://github.com/ben-manes/caffeine/
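The Striped64-style idea can be sketched roughly as follows: start with a single buffer and treat a failed CAS as a contention signal that triggers doubling the stripe count, up to a fixed cap. The class and method names here are illustrative, not Caffeine's actual implementation:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hedged sketch of contention-based stripe growth (Striped64 style):
// a CAS failure suggests two threads hit the same cell, so the array
// of cells is doubled, capped at a small power-of-two maximum.
final class GrowableBuffers {
    private static final int MAX = 4;  // cap, e.g. related to CPU count
    private volatile AtomicLong[] buffers = { new AtomicLong() };

    void record(int threadHash) {
        AtomicLong[] b = buffers;
        AtomicLong cell = b[threadHash & (b.length - 1)];
        long v = cell.get();
        if (!cell.compareAndSet(v, v + 1)) {
            expand(b);                 // contention detected: grow striping
            cell.getAndIncrement();    // still count this event
        }
    }

    private synchronized void expand(AtomicLong[] seen) {
        AtomicLong[] b = buffers;
        if (b != seen || b.length >= MAX) return;  // already grown or at cap
        AtomicLong[] grown = new AtomicLong[b.length * 2];
        System.arraycopy(b, 0, grown, 0, b.length);
        for (int i = b.length; i < grown.length; i++) grown[i] = new AtomicLong();
        buffers = grown;
    }

    long total() {
        long sum = 0;
        for (AtomicLong cell : buffers) sum += cell.get();
        return sum;
    }
}
```

Under low contention the structure stays at a single cell, which is the memory win for small caches; only contended instances pay for extra stripes.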

Original comment by [email protected] on 26 Jan 2015 at 11:16

  • Changed state: Accepted

from concurrentlinkedhashmap.

GoogleCodeExporter commented on August 17, 2024
Agreed that a dynamic solution, where it grows as necessary, is the best 
approach.  Unfortunately I don't have access to the system where this problem 
occurred, just the heap dump from it.  The scenario was that a team upgraded 
the server from an 8-CPU machine to a 32-CPU machine and started having memory 
issues, which we traced to these maps.  The max heap setting for the Java 
process was 896 MB and was not raised for the more powerful server.  
Looking at the heap dump, there were 777 ConcurrentLinkedHashMaps holding 
3,183,369 PaddedAtomicReferences, which works out to about 430 MB.  If we 
switched to AtomicReference it would save about 364 MB.  Long story short: yes, 
switching to AtomicReference would fix our scenario.
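A rough cross-check of those heap-dump figures, using round assumptions (decimal megabytes; ~16 bytes for a plain AtomicReference on a 64-bit JVM with compressed oops), lands near, though not exactly on, the 364 MB quoted. The class name is hypothetical:

```java
// Back-of-envelope check of the heap-dump numbers above. The 16-byte
// plain-instance size is an assumption (12-byte header + 4-byte value
// field, aligned to 8 bytes on a 64-bit JVM with compressed oops).
public class PaddingCost {
    public static void main(String[] args) {
        long instances = 3_183_369L;           // PaddedAtomicReferences in the dump
        long paddedBytes = 430_000_000L;       // ~430 MB total in the dump
        long perPadded = paddedBytes / instances;           // ~135 bytes each
        long plainBytes = instances * 16;                   // assumed plain size
        long savedMB = (paddedBytes - plainBytes) / 1_000_000;
        System.out.println(perPadded + " bytes each, ~" + savedMB + " MB saved");
    }
}
```

So each padded instance costs on the order of 135 bytes where a plain AtomicReference would cost roughly 16, which is why the savings dominate the 896 MB heap.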

Discussing with other engineers here, PaddedAtomicReference makes sense in 
high-contention scenarios with plenty of memory.  If we understand it 
correctly, it is based somewhat on assumptions about the underlying 
architecture and JVM implementation.  In our situation we have low contention 
and not much memory, so it doesn't work as well.  Ideally there would be an 
option, or options, to choose the behavior most applicable to an application.

An engineer here also mentioned that you could probably remove one or more 
longs from PaddedAtomicReference, since the JVM already uses a certain number 
of bytes of header for each object.  That would slightly reduce the memory 
usage of PaddedAtomicReference while retaining its intent, but wouldn't fix our 
scenario.
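That point can be made concrete with assumed layout numbers (64-byte cache line, 12-byte object header with compressed oops, 4-byte reference field; all assumptions, and the class name is hypothetical):

```java
// Illustrates the header-overhead point above: the object header plus
// the value field already occupy ~16 bytes, so only six 8-byte longs
// are needed to fill out one 64-byte cache line, not a full line's worth.
public class PadMath {
    public static void main(String[] args) {
        int cacheLine = 64;              // assumed cache-line size
        int headerPlusValue = 12 + 4;    // assumed header + compressed reference
        int longsNeeded = (cacheLine - headerPlusValue) / 8;
        System.out.println(longsNeeded + " padding longs to span one cache line");
    }
}
```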

Can we rename this ticket, since the system-property solution is not ideal?  Or 
should this ticket be closed and a new one written?

Original comment by [email protected] on 27 Jan 2015 at 7:11


GoogleCodeExporter commented on August 17, 2024
I think removing the padding entirely is the best option. A lot of work already 
went into avoiding buffer contention, and the padding wastes too much memory to 
be worth it. The Java 8 version will continue to improve that area and is 
already much more mindful of memory usage. It will have enough improvements 
that it should be a long-term replacement for CLHM, which was my initial goal 
with Guava's cache but didn't pan out performance-wise.

I'll try to get a release out tonight.

Original comment by [email protected] on 28 Jan 2015 at 12:25

  • Changed title: Excessive memory usage due to padding
  • Changed state: Started


GoogleCodeExporter commented on August 17, 2024
Released v1.4.2, which removes the padding. It may take up to 2 hours to sync 
with Maven Central.

Original comment by [email protected] on 28 Jan 2015 at 1:36

  • Changed state: Fixed


ben-manes commented on August 17, 2024

@ndjensen
FYI, dynamic resizing is now implemented in Caffeine v1.0.1. This results in an initial footprint of ~600 bytes; the number of read buffers grows with contention up to a fixed maximum. Padding is currently not used.

