
Memory Leak on v.4.4.8 OmeroServer on OpenSUSE?

Please note: this is a historical discussion about OMERO. Please ask new questions at https://forum.image.sc/tags/omero

There are workflow guides for various OMERO functions on our help site - http://help.openmicroscopy.org

You should find answers to any basic questions about using the clients there.

Memory Leak on v.4.4.8 OmeroServer on OpenSUSE?

Postby mandywil » Wed Jul 10, 2013 6:26 pm

Hi All,

OMERO was pretty stable on our server until we upgraded from OpenSUSE 11.4 to OpenSUSE 12.2 (Mantis). Since then (and since we did the associated PostgreSQL upgrade), OMERO will run for a little while, then die. At first we thought it was a memory setting error, but now we are getting a different type of memory error, and it still runs for a while, then dies. All of the client-side machines are running the latest version of the software.

Does anyone have any ideas? Right now we are "dealing" with it via a cron job that restarts OMERO periodically (roughly as sketched below).
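
For reference, the workaround is roughly along these lines; the install path and schedule below are just an illustration, not the exact entry we use:

# crontab entry for the omero user: restart the server nightly at 3 am
0 3 * * * /home/omero/omero-server/bin/omero admin restart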

Thanks!
Mandy Wilson

Here is the beginning of the first error:
java.lang.Exception: org.openmicroscopy.shoola.env.data.DSAccessException: Cannot access data.
Cannot load hierarchy for class pojos.ProjectData.
at org.openmicroscopy.shoola.env.data.OMEROGateway.handleException(OMEROGateway.java:902)
at org.openmicroscopy.shoola.env.data.OMEROGateway.loadContainerHierarchy(OMEROGateway.java:2697)
at org.openmicroscopy.shoola.env.data.OmeroDataServiceImpl.loadContainerHierarchy(OmeroDataServiceImpl.java:224)
at org.openmicroscopy.shoola.env.data.views.calls.DMLoader$1.doCall(DMLoader.java:90)
at org.openmicroscopy.shoola.env.data.views.BatchCall.doStep(BatchCall.java:144)
at org.openmicroscopy.shoola.util.concur.tasks.CompositeTask.doStep(CompositeTask.java:226)
at org.openmicroscopy.shoola.env.data.views.CompositeBatchCall.doStep(CompositeBatchCall.java:126)
at org.openmicroscopy.shoola.util.concur.tasks.ExecCommand.exec(ExecCommand.java:165)
at org.openmicroscopy.shoola.util.concur.tasks.ExecCommand.run(ExecCommand.java:276)
at org.openmicroscopy.shoola.util.concur.tasks.AsyncProcessor$Runner.run(AsyncProcessor.java:91)
at java.lang.Thread.run(Thread.java:680)
Caused by: omero.InternalException
serverStackTrace = "ome.conditions.InternalException: Wrapped Exception: (java.lang.OutOfMemoryError):
GC overhead limit exceeded
"
serverExceptionClass = "ome.conditions.InternalException"
message = " Wrapped Exception: (java.lang.OutOfMemoryError):
GC overhead limit exceeded"
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at java.lang.Class.newInstance0(Class.java:357)
at java.lang.Class.newInstance(Class.java:310)
at IceInternal.BasicStream$DynamicUserExceptionFactory.createAndThrow(BasicStream.java:2243)
at IceInternal.BasicStream.throwException(BasicStream.java:1632)
at IceInternal.Outgoing.throwUserException(Outgoing.java:442)
at omero.api._IContainerDelM.loadContainerHierarchy(_IContainerDelM.java:592)
at omero.api.IContainerPrxHelper.loadContainerHierarchy(IContainerPrxHelper.java:738)
at omero.api.IContainerPrxHelper.loadContainerHierarchy(IContainerPrxHelper.java:710)
at org.openmicroscopy.shoola.env.data.OMEROGateway.loadContainerHierarchy(OMEROGateway.java:2693)
... 9 more

And here is the beginning of the second error now that the Memory setting has been increased:
java.lang.Exception: org.openmicroscopy.shoola.env.data.DSAccessException: Cannot access data.
Cannot load hierarchy for class pojos.DatasetData.
at org.openmicroscopy.shoola.env.data.OMEROGateway.handleException(OMEROGateway.java:902)
at org.openmicroscopy.shoola.env.data.OMEROGateway.loadContainerHierarchy(OMEROGateway.java:2697)
at org.openmicroscopy.shoola.env.data.OmeroDataServiceImpl.loadContainerHierarchy(OmeroDataServiceImpl.java:224)
at org.openmicroscopy.shoola.env.data.views.calls.DMLoader$1.doCall(DMLoader.java:90)
at org.openmicroscopy.shoola.env.data.views.BatchCall.doStep(BatchCall.java:144)
at org.openmicroscopy.shoola.util.concur.tasks.CompositeTask.doStep(CompositeTask.java:226)
at org.openmicroscopy.shoola.env.data.views.CompositeBatchCall.doStep(CompositeBatchCall.java:126)
at org.openmicroscopy.shoola.util.concur.tasks.ExecCommand.exec(ExecCommand.java:165)
at org.openmicroscopy.shoola.util.concur.tasks.ExecCommand.run(ExecCommand.java:276)
at org.openmicroscopy.shoola.util.concur.tasks.AsyncProcessor$Runner.run(AsyncProcessor.java:91)
at java.lang.Thread.run(Thread.java:680)
Caused by: Ice.UnknownLocalException
unknown = "Ice::MarshalException
Ice.MarshalException
reason = "OutOfMemoryError occurred while allocating a ByteBuffer"
at IceInternal.Buffer.reserve(Buffer.java:163)
at IceInternal.Buffer.resize(Buffer.java:72)
at IceInternal.Buffer.expand(Buffer.java:59)
at IceInternal.BasicStream.expand(BasicStream.java:2147)
at IceInternal.BasicStream.writeString(BasicStream.java:1255)
at omero.RString.__write(RString.java:150)
at IceInternal.BasicStream.writeInstance(BasicStream.java:1809)
at IceInternal.BasicStream.writePendingObjects(BasicStream.java:1712)
at omero.api._AMD_IContainer_loadContainerHierarchy.ice_response(_AMD_IContainer_loadContainerHierarchy.java:31)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at ome.services.throttling.Task.response(Task.java:63)
at ome.services.throttling.Callback.run(Callback.java:57)
at ome.services.throttling.InThreadThrottlingStrategy.callInvokerOnRawArgs(InThreadThrottlingStrategy.java:56)
at ome.services.blitz.impl.AbstractAmdServant.callInvokerOnRawArgs(AbstractAmdServant.java:150)
at ome.services.blitz.impl.ContainerI.loadContainerHierarchy_async(ContainerI.java:179)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
at omero.cmd.CallContext.invoke(CallContext.java:59)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202)
at com.sun.proxy.$Proxy82.loadContainerHierarchy_async(Unknown Source)
at omero.api._IContainerTie.loadContainerHierarchy_async(_IContainerTie.java:134)
at omero.api._IContainerDisp.___loadContainerHierarchy(_IContainerDisp.java:196)
at omero.api._IContainerDisp.__dispatch(_IContainerDisp.java:641)
at IceInternal.Incoming.invoke(Incoming.java:159)
at Ice.ConnectionI.invokeAll(ConnectionI.java:2037)
at Ice.ConnectionI.message(ConnectionI.java:972)
at IceInternal.ThreadPool.run(ThreadPool.java:577)
at IceInternal.ThreadPool.access$100(ThreadPool.java:12)
at IceInternal.ThreadPool$EventHandlerThread.run(ThreadPool.java:971)
Caused by: java.lang.OutOfMemoryError: Java heap space
"
at IceInternal.Outgoing.invoke(Outgoing.java:147)
at omero.api._IContainerDelM.loadContainerHierarchy(_IContainerDelM.java:585)
at omero.api.IContainerPrxHelper.loadContainerHierarchy(IContainerPrxHelper.java:738)
at omero.api.IContainerPrxHelper.loadContainerHierarchy(IContainerPrxHelper.java:710)
at org.openmicroscopy.shoola.env.data.OMEROGateway.loadContainerHierarchy(OMEROGateway.java:2693)
... 9 more

Re: Memory Leak on v.4.4.8 OmeroServer on OpenSUSE?

Postby bpindelski » Thu Jul 11, 2013 12:17 pm

Hi Mandy,

Thanks for using the forums. I hope we will be able to help you with your issue.
Before proposing a solution, it would be really helpful to see the output of
- bin/omero config get,
- bin/omero admin diagnostics,
- ulimit -a
(all executed on the node running OMERO.server; for example, as shown below).
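
Something like the following, run as the user that owns the OMERO install (the directory here is just a placeholder for your actual install path):

cd /path/to/OMERO.server
bin/omero config get
bin/omero admin diagnostics
ulimit -a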

The supplied stack trace suggests that the JVM in which the server process runs doesn't have enough memory assigned: the "GC overhead limit exceeded" error means the garbage collector is spending almost all of its time trying to reclaim space from a nearly full heap.

Thanks,
Regards,
Blazej

Re: Memory Leak on v.4.4.8 OmeroServer on OpenSUSE?

Postby mandywil » Mon Jul 15, 2013 7:03 pm

Hi Blazej,

Sorry this took so long. Here are the results of your queries. Thanks for helping out!

> - bin/omero config get,

omero@gladden:~/omero-server> ./bin/omero config get
omero.db.name=omeroDB
Error printing text
omero.db.password='xxxxxxx'
omero.db.poolsize=50
omero.db.user=u_omero
omero.sessions.timeout=3600000
omero.web.application_host=http://gladden.vbi.vt.edu:80/omero
omero.web.application_server=fastcgi-tcp

> - bin/omero admin diagnostics,

omero@gladden:~/omero-server> ./bin/omero admin diagnostics

================================================================================
OMERO Diagnostics 4.4.8-ice33-b256
================================================================================

Commands: java -version 1.7.0 (/usr/bin/java)
Commands: python -V 2.7.3 (/usr/bin/python)
Commands: icegridnode --version 3.3.1 (/usr/bin/icegridnode)
Commands: icegridadmin --version 3.3.1 (/usr/bin/icegridadmin)
Commands: psql --version 9.1.9 (/usr/bin/psql)

Server: icegridnode running
Server: Blitz-0 active (pid = 32066, enabled)
Server: DropBox active (pid = 32095, enabled)
Server: FileServer active (pid = 32098, enabled)
Server: Indexer-0 active (pid = 32102, enabled)
Server: MonitorServer active (pid = 32104, enabled)
Server: OMERO.Glacier2 active (pid = 32105, enabled)
Server: OMERO.IceStorm active (pid = 32107, enabled)
Server: PixelData-0 active (pid = 32117, enabled)
Server: Processor-0 active (pid = 32130, enabled)
Server: Tables-0 inactive (disabled)
Server: TestDropBox inactive (enabled)


OMERO: SSL port 4064
OMERO: TCP port 4063

Log dir: /home/omero-service/OMERO.server-4.4.8-ice33-b256/var/log exists

Log files: Blitz-0.log 127.0 MB errors=459 warnings=657
Log files: DropBox.log 12.0 KB errors=2 warnings=5
Log files: FileServer.log 1.0 KB
Log files: Indexer-0.log 57.0 MB errors=2 warnings=3
Log files: MonitorServer.log 7.0 KB errors=0 warnings=2
Log files: OMEROweb.log 61.0 KB errors=8 warnings=4
Log files: OMEROweb_request.log 21.0 KB errors=4 warnings=0
Log files: PixelData-0.log 59.0 KB errors=2 warnings=3
Log files: Processor-0.log 28.0 KB errors=5 warnings=39
Log files: Tables-0.log n/a
Log files: TestDropBox.log n/a
Log files: master.err 80.0 KB
Log files: master.out 0.0 KB
Log files: Total size 185.18 MB

Parsing Blitz-0.log:[line:112] Your postgres hostname and/or port is invalid
Parsing Blitz-0.log:[line:272] => Server restarted <=
Parsing Blitz-0.log:[line:58852] => Server restarted <=
Parsing Blitz-0.log:[line:164005] => Server restarted <=
Parsing Blitz-0.log:[line:521765] => Server restarted <=

Parsing Blitz-0.log:[line:586131] => Server restarted <=

Environment:OMERO_HOME=(unset)
Environment:OMERO_NODE=(unset)
Environment:OMERO_MASTER=(unset)
Environment:PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/X11R6/bin:/usr/games:/opt/kde3/bin:/opt/dell/srvadmin/bin:/opt/phred/bin:/opt/phrap/bin:/opt/mira_3.4.0_prod_linux-gnu_x86_64_static/bin:/opt/phd2fasta-acd/bin
Environment:ICE_HOME=(unset)
Environment:LD_LIBRARY_PATH=(unset)
Environment:DYLD_LIBRARY_PATH=(unset)

OMERO data dir: '/OMERO' Exists? True Is writable? True
OMERO.web status... [NOT STARTED]
omero@gladden:~/omero-server>

> - ulimit -a

omero@gladden:~/omero-server> ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 386948
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 386948
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
omero@gladden:~/omero-server>

Re: Memory Leak on v.4.4.8 OmeroServer on OpenSUSE?

Postby bpindelski » Tue Jul 16, 2013 9:11 am

Hi Mandy,

Thanks for the diagnostic output. I presume the number of errors in Blitz-0.log is due to the issue at hand. What we are seeing here is the Ice middleware on the server running out of memory when it tries to allocate a buffer for the reply it is sending back to the OMERO client.

In your first post you mentioned that the "Memory setting has been increased". Since the OMERO.server settings look fine, it would now be helpful to know what value you have set for the Java memory allocation of the Blitz server process (in templates.xml). Also, what CPU and how much memory are available to the OMERO.server machine?
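
If it helps, a quick way to gather those values (assuming a standard install layout) is something like:

grep -n 'Xmx' etc/grid/templates.xml   # heap settings per server process
free -m                                # total and available memory
nproc                                  # number of CPU cores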

Thanks for your reply,
Regards,

Blazej

Re: Memory Leak on v.4.4.8 OmeroServer on OpenSUSE?

Postby mandywil » Thu Jul 18, 2013 2:05 pm

Here are all the memory-related settings from templates.xml (hopefully the formatting won't get mangled):
<target name="Blitz-hprof">
<option>-agentlib:hprof=cpu=samples,cutoff=0,thread=y,interval=1,depth=50,force=y,file=${OMERO_LOGS}Blitz-${index}.hprof</option>
</target>
<option>-Xmx768M</option>


<option>-XX:MaxPermSize=256m</option>
<target name="memcfg">
<option>${omero.blitz.maxmemory}</option>
<option>${omero.blitz.permgen}</option>

<server-template id="PixelDataTemplate">
<parameter name="index"/>
<parameter name="dir"/>
<parameter name="config" default="default"/>
<server id="PixelData-${index}" exe="${JAVA}" activation="always" pwd="${OMERO_HOME}">
<option>-Xmx512M</option>
<option>-Djava.awt.headless=true</option>
<option>-Dlog4j.configuration=${OMERO_ETC}log4j-indexing.xml</option>
<option>-Domero.logfile=${OMERO_LOGFILE}</option>
<option>-Domero.name=PixelData-${index}</option>
<option>-jar</option>
<option>${OMERO_JARS}blitz.jar</option>
<option>ome.pixeldata</option>
<adapter name="PixelDataAdapter" endpoints="tcp"/>

<server-template id="RepositoryTemplate">
<parameter name="index"/>
<parameter name="dir"/>
<parameter name="config" default="default"/>
<server id="Repository-${index}" exe="${JAVA}" activation="always" pwd="${OMERO_HOME}">
<option>-Xmx400M</option>
<option>-Djava.awt.headless=true</option>
<option>-Dlog4j.configuration=${OMERO_ETC}log4j.xml</option>
<option>-Domero.logfile=${OMERO_LOGFILE}</option>
<option>-Domero.name=Repository-${index}</option>
<option>-Domero.repo.dir=${dir}</option>
<option>-jar</option>
<option>${OMERO_JARS}blitz.jar</option>
<option>OMERO.repository</option>
<adapter name="RepositoryAdapter" endpoints="tcp">

Here is CPU and memory info:
gladden:~ # free -m
total used free shared buffers cached
Mem: 48385 32268 16116 0 0 28633
-/+ buffers/cache: 3634 44750
Swap: 16383 972 15411

The server is a PowerEdge R710 with dual-socket Xeon CPUs.

Re: Memory Leak on v.4.4.8 OmeroServer on OpenSUSE?

Postby bpindelski » Fri Jul 19, 2013 8:12 am

Hi Mandy,

Thanks for taking the time to post the server memory allocation settings (and the output of free). As I can see, you have 48 GB of RAM available on the server. That is why I would recommend allocating more than the currently configured 768 MB to the JVM (unless you have good reasons not to). Starting with 2 GB would be reasonable.

To change the settings, stop the server with bin/omero admin stop and use your text editor of choice to edit etc/grid/templates.xml. In the file, change "-Xmx768M" to "-Xmx2048M". Save, then start the server back up.
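
As a rough sketch (assuming the default layout under the server directory; sed is just one way to make the edit):

bin/omero admin stop
sed -i 's/-Xmx768M/-Xmx2048M/' etc/grid/templates.xml   # or make the change in your editor
bin/omero admin start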

The -Xmx setting sets the maximum heap size for the Java virtual machine, so the Blitz server process running inside that JVM cannot use more heap memory than what is specified in templates.xml.
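
To confirm the new value has been picked up after the restart, you can check the command line of the running Blitz process (the flag shown will depend on what you set):

ps auxww | grep '[B]litz-0'   # the java command line should now include -Xmx2048M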

I can also recommend looking at the presentation on server installation given at this year's OME users' meeting: https://www.openmicroscopy.org/site/community/minutes/meetings/june-2013-paris-users-meeting/presentations/Workshop-Installation-Unix.pdf/at_download/file

Regards,
Blazej