How to fix java.lang.OutOfMemoryError: Java heap space
Nov 4, 2014
If you get an OutOfMemoryError with the message “Java heap space” (not to be confused with the message “PermGen space”), it simply means the JVM ran out of memory. When it occurs, you basically have 2 options:

Solution 1. Allow the JVM to use more memory
With the -Xmx JVM argument, you can set the heap size. For instance, you can allow the JVM to use 4 GB (4096 MB) of memory with the following command:
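(The application jar name below is a placeholder for your own application.)

```sh
java -Xmx4096m -jar my-application.jar
```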
Solution 2. Improve or fix the application to reduce memory usage
In many cases, like in the case of a memory leak, that second option is the only good solution. A memory leak happens when the application creates more and more objects and never releases them. The garbage collector cannot collect those objects and the application will eventually run out of memory. At this point, the JVM will throw an OOM (OutOfMemoryError).
A memory leak can stay latent for a long time. For instance, the application might behave flawlessly during development and QA. However, it suddenly throws an OOM after several days in production at a customer site. To solve that issue, you first need to find the root cause of it. The root cause can be very hard to find in development if the problem cannot be reproduced. Follow these steps to find the root cause of the OOM:
Step 1. Generate a heap dump on OutOfMemoryError
Start the application with the VM argument -XX:+HeapDumpOnOutOfMemoryError. This will tell the JVM to produce a heap dump when an OOM occurs:
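(A sketch; the heap size, dump path, and jar name are placeholders. -XX:HeapDumpPath is optional; without it the dump is written to the working directory.)

```sh
java -Xmx1024m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/dumps -jar my-application.jar
```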
Step 2. Reproduce the problem
Well, if you cannot reproduce the problem in dev, you may have to use the production environment. When you reproduce the problem and the application throws an OOM, it will generate a heap dump file.
Step 3. Investigate the issue using the heap dump file
Use VisualVM to read the heap dump file and diagnose the issue. VisualVM is a program located in JDK_HOME/bin/jvisualvm. The heap dump file has all the information about the memory usage of the application. It allows you to navigate the heap and see which objects use the most memory and what references prevent the garbage collector from reclaiming the memory. (Screenshot: VisualVM with a heap dump loaded.)
This will give you very strong hints and you will (hopefully) be able to find the root cause of the problem. The problem could be a cache that grows indefinitely, a list that keeps collecting business-specific data in memory, a huge query that tries to load almost all the data from the database into memory, etc.
Once you know the root cause of the problem, you can work out a fix. In the case of a cache that grows indefinitely, a good solution could be to set a reasonable limit on that cache. In the case of a query that tries to load almost all the data from the database into memory, you may have to change the way you manipulate data; you could even have to change the behavior of some functionalities of the application.
Manually triggering heap dump
If you do not want to wait for an OOM, or if you just want to see what is in memory right now, you can generate a heap dump manually. Here are 2 options to manually trigger a heap dump.
Option 1. Use VisualVM
Open VisualVM (JDK_HOME/bin/jvisualvm), right-click on the process in the left pane and select Heap Dump. That’s it.
Option 2. Use command line tools
If you do not have a graphical environment and can’t use VNC (VisualVM needs a graphical environment), use jps and jmap to generate the heap dump file. Those programs are also located in JDK_HOME/bin/.
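For example (the PID 12345 is a placeholder; use the one reported by jps):

```sh
# find the PID of the Java process
jps -l

# write a heap dump of the live objects for that process
jmap -dump:live,format=b,file=heap.bin 12345
```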
Finally, copy the heap dump file (heap.bin) to your workstation and use VisualVM to read it: File -> Load…
Alternatively, you can also use jhat to read heap dump files.
Solution 3 (bonus). Call me
You can also contact my application development company and I can personally help you with those kinds of issues 🙂
Author: Jonathan Demers

How to Resolve the Issue java.lang.OutOfMemoryError: Java Heap Space
We are running 3 instances of ODK and 2 more Java applications on a single server. If we try to load all these applications simultaneously, some will not load and some will be slow. When we check Tomcat's log file, catalina.out, it says java.lang.OutOfMemoryError: Java heap space. The cause of this is explained below.
Normally, Java applications are allowed to use only a limited amount of memory, and Java memory is separated into two different regions. These regions are called heap space and PermGen (Permanent Generation). The size of those regions is set during the Java Virtual Machine (JVM) launch and can be customized by specifying the JVM parameters -Xmx and -XX:MaxPermSize. The java.lang.OutOfMemoryError: Java heap space error occurs when the application attempts to add more data into the heap space area, but there is not enough room for it.
The solution to this problem is to increase the heap space (the default value may be as low as 128 MB). We can customize the heap space with the JVM parameter -Xmx (and the PermGen size with -XX:MaxPermSize).
To achieve this, please do the following,
- Edit the catalina.sh file located in the bin directory of Tomcat (/usr/share/tomcat7/bin).
- We can increase the maximum Java heap size to 512 MB by inserting the following lines (see the example below): JAVA_OPTS="-Xmx512000000" export JAVA_OPTS
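A minimal sketch (the path and the 512 MB value are placeholders; many Tomcat setups prefer putting this in a separate bin/setenv.sh instead of editing catalina.sh):

```sh
# near the top of /usr/share/tomcat7/bin/catalina.sh (or, preferably, in bin/setenv.sh)
JAVA_OPTS="-Xmx512m"     # equivalent to the -Xmx512000000 (bytes) form above
export JAVA_OPTS
```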
Hope this helps! Please feel free to get in touch with us for further queries.
- Technical Solution
Related Topics
- How to Install Webpack?
- Configuring Mattermost Server on Centos 7
- Integrate Zoom plugin with Mattermost
- Install Mattermost on Centos 7
- [Case Study] How to Deliver Emails Without Getting Filtered to the Spam Folder?
Related Articles

Which all software is affected by log4j shell vulnerability

How to Fix Let's Encrypt SSL Certbot Auto-Renew Error in Ubuntu 12.04 LTS

How to Install and Configure Jitsi Meet on an Ubuntu Server
- Free Ebooks
- All Articles
Top 4 Java Heap related issues and how to fix them
Java heap related issues can cause severe damage to your application and will directly result in poor end user experience. Care must be taken to tune the Heap related parameters to suit your application. Out-of-the-box default parameters are not enough.
Quick overview:
Java Heap is the memory used by your application to create and store objects. You define the maximum memory that can be used for the Heap by specifying the ‘-Xmx<size>’ java command line option (Example: ‘-Xmx1024m‘, where m stands for megabytes). As your application runs, it will gradually fill up the Heap. The JVM periodically runs a process called ‘Garbage Collection’ (GC) which scans the heap and clears objects that are no longer referenced by your application. One cannot request Garbage Collection on demand; only the JVM can decide when to run GC.
Let me walk you through four of the most common Java Heap related issues and how to fix them.
1. ‘OutOfMemory’ Error due to insufficient Heap
This is when JVM runs out of Heap and GC is unable to reclaim memory to meet the demand of the application. This error simply means that the Heap is filled with objects that are being used/referenced by your application.
It is possible that you indeed are using all the objects i.e there is real demand for memory. For example, if you are doing heavy weight activities like processing images/video or crunching big numbers.
How to identify ?
When you plot the heap usage graph (how to do this? Answer at the end of this article), you will most probably see a sudden spike in memory utilization, indicating that a heavyweight transaction has just started. This will be in contrast to a gradual increase in heap usage.
How to fix ?
You could try increasing the Heap size to see if you can live through the heavyweight transaction. Sometimes this is enough. But sometimes, the code needs to be revisited to see why the demand is high in the first place. Maybe you are trying to pull millions of records from the DB and process them at once. Maybe you are processing something unnecessarily.
2. Poor application response time due to long Garbage Collection Pauses
As mentioned earlier, GC is responsible for scanning the heap and clearing unused objects so that the memory can be reclaimed. This can be a resource-intensive process, especially when the heap is big and filled to the brim. In most cases, when GC is running, the entire JVM is paused. When GC takes a long time to complete, the JVM pause time also becomes long, resulting in very poor end user experience. Ideally, each GC pause should be less than 500ms, depending upon how often the GC runs.
How to identify ?
When you plot the graph for GC Time (how to do this? Answer at the end of this article), you will see longer durations (several seconds). You will also hear from your customers about poor performance. In some cases, the CPU utilization of the server can go up significantly as well.
How to fix ?
Tuning can help. Make sure you have a ‘generational’ heap configured. With a generational heap, the heap has a special area called the ‘nursery’ or ‘new generation’ that is used for short-lived objects. The idea is that GC will be quicker in the ‘new generation’ since the entire heap does not have to be scanned. The heap also has a ‘tenured’ or ‘old generation’ area where long-living objects are stored. A minor GC works on the new generation and a Major GC (Full GC) works on the entire heap. This issue can also be code related, so you need to analyze the code in parallel to tuning the heap. Some JVMs, like IBM's, have an option to specify a GC policy such as ‘optavgpause’ (-Xgcpolicy:optavgpause), which optimizes GC for shorter pause times.
3. OutofMemory Error due to Memory leak
Memory leak means the application is allocating memory and holding on to it unnecessarily. What this means is, after a certain period, the heap will be filled with objects that are still referenced by the application (even though they are no longer needed), and GC will NOT be able to reclaim them. This results in the gory ‘OutOfMemory’ error.
How to identify ?
When you plot the heap graph, you will see a ‘staircase’ pattern, indicating a gradual leak (rather than a sudden spike). You will also face ‘OutOfMemory’ errors.
How to fix it?
A memory leak is most probably a code issue. It can also happen due to a library you are using that is outside of your code. Profiling your application using tools like JProbe will shed light on it.
4. Heap fragmentation
The heap gets fragmented when small and large objects with various lifetimes are allocated in a mixed fashion. To some extent, you cannot avoid fragmentation – over time, the heap will get fragmented. But there are a couple of crucial things to consider.
a. When the heap is fragmented, GC will try to compact it, which can result in a longer GC pause time for a heavily fragmented heap.
b. Heap fragmentation becomes an issue ONLY when your application needs to allocate memory for a large object (which will need contiguous blocks of memory).
How to identify ?
You will see poor application response times, longer GC pauses and, in some cases, ‘OutOfMemory’ errors.
How to fix?
Tuning can help. This is where tuning becomes extremely vendor specific (HP, IBM, HotSpot etc). You will have to check your JVM’s options (see if there is support for -XXcompactRatio). Also note that newer versions of JVMs are much more efficient in handling fragmentation, as they dynamically compact the heap. So, you may never really run into this issue if you use newer releases of your JVM.
Now, How to monitor heap and GC ?
There are several options:
1. Use -verbose:gc to get verbose GC logs. It can be a life saver. There are tools to analyze the verbose GC logs (see the example after this list).
2. Use JDK tools such as jconsole, VisualVM and jstat.
3. Procure a commercial APM (Application Performance Management) tool that will not only monitor the heap but can alert you when things are about to get bad. APM can also help in identifying memory leaks.
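A rough sketch of options 1 and 2 (Java 8 HotSpot-style flags; the jar name and the PID are placeholders):

```sh
# 1. verbose GC logging
java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:gc.log -jar my-application.jar

# 2. sample heap occupancy and GC counts of a running JVM every 5 seconds
jstat -gcutil 12345 5000
```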
You may want to take a look at my other article http://karunsubramanian.com/java/5-not-so-easy-ways-to-monitor-the-heap-usage-of-your-java-application/
Final note: The use of Heap dumps:
Heap dump can be an invaluable tool in troubleshooting memory related issues. A heap dump is a copy of the entire heap at the moment the dump was taken. We can readily identify the objects that are filling up the memory by analyzing the heap dump (you can use Eclipse Memory Analyzer).
There you have it. The 4 most common heap related issues and how to fix them.
Spark java.lang.OutOfMemoryError: Java heap space
My cluster: 1 master, 11 slaves, each node has 6 GB memory.
My settings:
Here is the problem:
First , I read some data (2.19 GB) from HDFS to RDD:
Second , do something on this RDD:
Last , output to HDFS:
When I run my program it shows:
There are too many tasks?
PS: Everything is OK when the input data is about 225 MB.
How can I solve this problem?
- out-of-memory
- apache-spark
- how do you run spark? is it from console? or which deploy scripts do you use? – Tombart Jan 15, 2014 at 14:46
- I use sbt to compile and run my app: sbt package then sbt run. I implemented the same program on hadoop a month ago, and I met the same problem of OutOfMemoryError, but in hadoop it can be easily solved by increasing the value of mapred.child.java.opts from Xmx200m to Xmx400m. Does spark have any jvm setting for its tasks? I wonder if spark.executor.memory has the same meaning as mapred.child.java.opts in hadoop. In my program spark.executor.memory has already been set to 4g, much bigger than Xmx400m in hadoop. Thank you~ – Hellen Jan 16, 2014 at 1:26
- Are the three steps you mention the only ones you do? What's the size of the data generated by (data._1, desPoints) - this should fit in memory esp if this data is then shuffled to another stage – Arnon Rotem-Gal-Oz Feb 2, 2015 at 5:08
- 2 What is the memory configuration for the driver? Check which server get the out of memory error. Is it the driver or one of the executors. – RanP Oct 12, 2015 at 15:26
- See here all configurations properties: spark.apache.org/docs/2.1.0/configuration.html – Naramsim Mar 16, 2017 at 13:36
14 Answers
I have a few suggestions:
- If your nodes are configured to have 6g maximum for Spark (and are leaving a little for other processes), then use 6g rather than 4g, spark.executor.memory=6g. Make sure you're using as much memory as possible by checking the UI (it will say how much mem you're using). (A combined spark-submit sketch appears after this list.)
- Try using more partitions, you should have 2 - 4 per CPU. IME increasing the number of partitions is often the easiest way to make a program more stable (and often faster). For huge amounts of data you may need way more than 4 per CPU, I've had to use 8000 partitions in some cases!
- Decrease the fraction of memory reserved for caching, using spark.storage.memoryFraction. If you don't use cache() or persist in your code, this might as well be 0. Its default is 0.6, which means you only get 0.4 * 4g of memory for your heap. IME reducing the mem frac often makes OOMs go away. UPDATE: From spark 1.6 apparently we will no longer need to play with these values, spark will determine them automatically.
- Similar to above but shuffle memory fraction . If your job doesn't need much shuffle memory then set it to a lower value (this might cause your shuffles to spill to disk which can have catastrophic impact on speed). Sometimes when it's a shuffle operation that's OOMing you need to do the opposite i.e. set it to something large, like 0.8, or make sure you allow your shuffles to spill to disk (it's the default since 1.0.0).
- Watch out for memory leaks , these are often caused by accidentally closing over objects you don't need in your lambdas. The way to diagnose is to look out for the "task serialized as XXX bytes" in the logs, if XXX is larger than a few k or more than an MB, you may have a memory leak. See https://stackoverflow.com/a/25270600/1586965
- Related to above; use broadcast variables if you really do need large objects.
- If you are caching large RDDs and can sacrifice some access time consider serialising the RDD http://spark.apache.org/docs/latest/tuning.html#serialized-rdd-storage . Or even caching them on disk (which sometimes isn't that bad if using SSDs).
- ( Advanced ) Related to above, avoid String and heavily nested structures (like Map and nested case classes). If possible try to only use primitive types and index all non-primitives especially if you expect a lot of duplicates. Choose WrappedArray over nested structures whenever possible. Or even roll out your own serialisation - YOU will have the most information regarding how to efficiently back your data into bytes, USE IT !
- ( bit hacky ) Again when caching, consider using a Dataset to cache your structure as it will use more efficient serialisation. This should be regarded as a hack when compared to the previous bullet point. Building your domain knowledge into your algo/serialisation can minimise memory/cache-space by 100x or 1000x, whereas all a Dataset will likely give is 2x - 5x in memory and 10x compressed (parquet) on disk.
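Pulling the first few bullets together, a hypothetical spark-submit invocation might look like this (the values are illustrative, not recommendations, and spark.storage.memoryFraction only applies to the pre-1.6 memory manager):

```sh
spark-submit \
  --conf spark.executor.memory=6g \
  --conf spark.default.parallelism=200 \
  --conf spark.storage.memoryFraction=0.1 \
  --class com.example.MyJob \
  my-job.jar
```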
http://spark.apache.org/docs/1.2.1/configuration.html
EDIT : (So I can google myself easier) The following is also indicative of this problem:
- Thanks for your suggestions~ If I set spark.executor.memory=6g, spark will have the problem: "check your cluster UI to ensure that workers are registered and have sufficient memory". Setting spark.storage.memoryFraction to 0.1 can't solve the problem either. Maybe the problem lies in my code. Thank you! – Hellen Apr 2, 2014 at 5:05
- 3 @samthebest This is a fantastic answer. I really appreciate the logging help for finding memory leaks. – Myles Baker Apr 9, 2015 at 16:36
- 1 Hi @samthebest how did you specify 8000 partitions? Since I am using Spark sql I can only specify partition using spark.sql.shuffle.partitions, default value is 200 should I set it to more I tried to set it to 1000 but not helping getting OOM are you aware what should be the optimal partition value I have 1 TB skewed data to process and it involves group by hive queries. Please guide. – Umesh K Sep 2, 2015 at 7:15
- 2 Hi @user449355 please could you ask a new question? For fear of starting a long a comment thread :) If you are having issues, likely other people are, and a question would make it easier to find for all. – samthebest Sep 2, 2015 at 9:12
- 1 To your first point, @samthebest, you should not use ALL the memory for spark.executor.memory because you definitely need some amount of memory for I/O overhead. If you use all of it, it will slow down your program. The exception to this might be Unix, in which case you have swap space. – makansij Jul 16, 2016 at 23:07
To add a use case to this that is often not discussed, I will pose a solution when submitting a Spark application via spark-submit in local mode.
According to the gitbook Mastering Apache Spark by Jacek Laskowski :
You can run Spark in local mode. In this non-distributed single-JVM deployment mode, Spark spawns all the execution components - driver, executor, backend, and master - in the same JVM. This is the only mode where a driver is used for execution.
Thus, if you are experiencing OOM errors with the heap , it suffices to adjust the driver-memory rather than the executor-memory .
Here is an example:
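(A sketch; the memory value, class, and jar names are placeholders.)

```sh
spark-submit --master local[*] --driver-memory 5g --class com.example.MyJob my-job.jar
```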
- How much percentage we should be considering for driver memory in stand-alone mode. – Yashwanth Kambala Oct 4, 2019 at 13:09
- @Brian, In local mode, does the driver memory need to be larger than the input data size? Is it possible to specify number of partitions for input dataset, so the Spark job can deal with dataset much larger than the available RAM? – fuyi Jun 23, 2020 at 20:29
- Driver memory can't be larger than the input size. Consider you have a 160gb file to be loaded into your cluster. so, for that, you would create a driver with 161 GB? that's not feasible. Its how you determine the number of executors, their memory, and the buffer for overhead memory and their OS. You need to calculate all these things by seeing the yarn UI and the cluster memory given to you. For better performance, you also need to consider the executor-cores which should be always between 3-5 @fuyi – whatsinthename Sep 22, 2020 at 13:19
You should configure offHeap memory settings as shown below:
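A minimal sketch in PySpark (memory values are placeholders; note that booleans must be passed as strings here, as one of the comments below points out):

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master("local[*]")
         .appName("heap-space-example")
         .config("spark.driver.memory", "4g")
         .config("spark.executor.memory", "4g")
         .config("spark.memory.offHeap.enabled", "true")  # string, not a bare boolean
         .config("spark.memory.offHeap.size", "4g")
         .getOrCreate())
```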
Give the driver memory and executor memory as per your machine's RAM availability. You can increase the offHeap size if you are still facing the OutOfMemory issue.

- Added offHeap setting helped – kennyut Nov 9, 2018 at 17:06
- 4 setting the driver memory in your code will not work, read spark documentation for this: Spark properties mainly can be divided into two kinds: one is related to deploy, like “spark.driver.memory”, “spark.executor.instances”, this kind of properties may not be affected when setting programmatically through SparkConf in runtime, or the behavior is depending on which cluster manager and deploy mode you choose, so it would be suggested to set through configuration file or spark-submit command line options. – Abdulhafeth Sartawi Jan 27, 2019 at 8:02
- 2 THE BEST ANSWER! My problem was that Spark wasn't installed at master node, I just used PySpark to connect to HDFS and got the same error. Using config solved the problem. – Mikhail_Sam Feb 12, 2019 at 8:11
- I just added the configurations using spark-submit command to fix the heap size issue. Thanks. – Pritam Sadhukhan Jun 10, 2020 at 5:25
- Note .config("spark.memory.offHeap.enabled",true) should be changed to .config("spark.memory.offHeap.enabled","true") for pyspark users. – scottlittle Sep 27, 2022 at 14:58
You should increase the driver memory. In your $SPARK_HOME/conf folder you should find the file spark-defaults.conf; edit it and set spark.driver.memory to 4000m depending on the memory on your master, I think. This is what fixed the issue for me and everything runs smoothly.
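For example (a sketch; the value is illustrative):

```sh
# $SPARK_HOME/conf/spark-defaults.conf
spark.driver.memory    4000m
```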
- How much percentage of mem to be alloted, in stand alone – Yashwanth Kambala Oct 4, 2019 at 13:15
Have a look at the start-up scripts; a Java heap size is set there. It looks like you're not setting this before running the Spark worker.
You can find the documentation to deploy scripts here .
- Thank you~ I will try later. From spark ui, it shows the memory of every executor is 4096. So the setting has been enabled, right? – Hellen Jan 16, 2014 at 14:03
- Saw your answer while I'm facing similar issue ( stackoverflow.com/questions/34762432/… ). Looking the link you provided looks like setting Xms/Xmx is not there anymore, can you tell why? – Seffy Jan 13, 2016 at 9:39
- The content at the script linked to by start up scripts has changed unfortunately. No such options exist as of 2019-12-19 – David Groomes Dec 19, 2019 at 20:30
I suffered from this issue a lot when using dynamic resource allocation. I had thought it would utilize my cluster resources to best fit the application.
But the truth is the dynamic resource allocation doesn't set the driver memory and keeps it to its default value, which is 1G.
I resolved this issue by setting spark.driver.memory to a number that suits my driver's memory (for 32GB ram I set it to 18G).
You can set it using the spark-submit command as follows:
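For example (the class and jar names are placeholders):

```sh
spark-submit --conf spark.driver.memory=18g --class com.example.MyJob my-job.jar
```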
Very important note, this property will not be taken into consideration if you set it from code, according to Spark Documentation - Dynamically Loading Spark Properties :
Spark properties mainly can be divided into two kinds: one is related to deploy, like “spark.driver.memory”, “spark.executor.instances”, this kind of properties may not be affected when setting programmatically through SparkConf in runtime, or the behavior is depending on which cluster manager and deploy mode you choose, so it would be suggested to set through configuration file or spark-submit command line options; another is mainly related to Spark runtime control, like “spark.task.maxFailures”, this kind of properties can be set in either way.
- 2 You should use --conf spark.driver.memory=18g – merenptah Mar 29, 2019 at 7:30
Broadly speaking, Spark executor JVM memory can be divided into two parts: Spark memory and user memory. This is controlled by the property spark.memory.fraction - the value is between 0 and 1. When working with images or doing memory intensive processing in Spark applications, consider decreasing spark.memory.fraction. This will make more memory available for your application's work. Spark can spill, so it will still work with a smaller memory share.
The second part of the problem is division of work. If possible, partition your data into smaller chunks. Smaller data possibly needs less memory. But if that is not possible, you sacrifice compute for memory. Typically a single executor will be running multiple cores. The total memory of an executor must be enough to handle the memory requirements of all concurrent tasks. If increasing executor memory is not an option, you can decrease the cores per executor so that each task gets more memory to work with. Test with 1-core executors which have the largest possible memory you can give, and then keep increasing cores until you find the best core count.
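A hypothetical command-line illustration of both ideas (values are placeholders, not recommendations):

```sh
spark-submit \
  --conf spark.memory.fraction=0.5 \
  --executor-memory 6g \
  --executor-cores 2 \
  --class com.example.MyJob \
  my-job.jar
```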
The location to set the memory heap size (at least in spark-1.0.0) is in conf/spark-env.sh. The relevant variables are SPARK_EXECUTOR_MEMORY & SPARK_DRIVER_MEMORY. More docs are in the deployment guide.
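For instance (a sketch; the values are placeholders):

```sh
# conf/spark-env.sh
SPARK_EXECUTOR_MEMORY=4g
SPARK_DRIVER_MEMORY=2g
```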
Also, don't forget to copy the configuration file to all the slave nodes.
- 5 How do you know which one to adjust between SPARK_EXECUTOR_MEMORY & SPARK_DRIVER_MEMORY ? – makansij Jul 16, 2016 at 23:14
- 17 i.e. what error would tell you to increase the SPARK_EXECUTOR_MEMORY , and what error would tell you to increase SPARK_DRIVER_MEMORY ? – makansij Jul 16, 2016 at 23:43
Did you dump your master GC log? I met a similar issue and found that SPARK_DRIVER_MEMORY only sets the Xmx heap. The initial heap size remains 1G and the heap size never scales up to the Xmx heap.
Passing --conf "spark.driver.extraJavaOptions=-Xms20g" resolved my issue.
Run ps aux | grep java and you'll see the following:
24501 30.7 1.7 41782944 2318184 pts/0 Sl+ 18:49 0:33 /usr/java/latest/bin/java -cp /opt/spark/conf/:/opt/spark/jars/* -Xmx30g -Xms20g
I have a few suggestions for the above-mentioned error.
● Check the executor memory assigned, as an executor might have to deal with partitions requiring more memory than what is assigned.
● Try to see if more shuffles are live, as shuffles are expensive operations since they involve disk I/O, data serialization, and network I/O.
● Use Broadcast Joins.
● Avoid using groupByKey and try to replace it with reduceByKey.
● Avoid using huge Java objects wherever shuffling happens.

From my understanding of the code provided above, it loads the file, does a map operation and saves it back. There is no operation that requires a shuffle. Also, there is no operation that requires data to be brought to the driver, hence tuning anything related to shuffle or the driver may have no impact. The driver does have issues when there are too many tasks, but this was only until the spark 2.0.2 version. There can be two things which are going wrong.
- There are only one or a few executors. Increase the number of executors so that they can be allocated to different slaves. If you are using yarn, you need to change the num-executors config, or if you are using spark standalone then you need to tune the num cores per executor and spark max cores conf. In standalone, num executors = max cores / cores per executor.
- The number of partitions is very low, or maybe only one. If this is low, even with multiple cores and multiple executors it will not be of much help, as parallelization depends on the number of partitions. So increase the partitions by doing imageBundleRDD.repartition(11).
Simple: if you are using a script or a Jupyter notebook, then just set the config when you build the Spark session...
- Worked like magic for me! – Merin Nakarmi Nov 17, 2022 at 19:19
Heap space errors generally occur due to bringing too much data back to either the driver or the executor. In your code it does not seem like you are bringing anything back to the driver, but instead you may be overloading the executors that are mapping an input record/row to another using the threeDReconstruction() method. I am not sure what is in the method definition but that is definitely causing this overloading of the executor. Now you have 2 options,
- edit your code to do the 3-D reconstruction in a more efficient manner.
- do not edit the code, but give more memory to your executors, as well as give more memory-overhead. [spark.executor.memory or spark.driver.memoryOverhead]
I would advise being careful with the increase and using only as much as you need. Each job is unique in terms of its memory requirements, so I would advise empirically trying different values, increasing every time by a power of 2 (256M, 512M, 1G... and so on).
You will arrive at a value for the executor memory that will work. Try re-running the job with this value 3 or 5 times before settling on this configuration.
Setting these exact configurations helped resolve the issue.

How to resolve OutOfMemoryError: Java heap space error.
Article id: 44142
Issue/Introduction
We have seen the below error message in the IM console and server.log. What should we do?
java.lang.OutOfMemoryError: Java heap space
Environment
CA Identity Manager
Cause
The error indicates that Java ran out of memory to allocate to the application, resulting in anything from minor issues, such as an activity having to wait to complete, to complete system outages.
Eclipse: Java heap space, how to fix it?
This article looks at several ways to address the Java heap space error in Eclipse.
How do I fix Java heap space error?
Fixing java.lang.OutOfMemoryError in Java
- Open the eclipse.ini file of your Eclipse IDE as shown below:
- Now change the Xmx value as per your system requirement as shown below:
- Relaunch the Eclipse IDE, and the heap size will be increased.
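A sketch of the relevant tail of eclipse.ini (the sizes are placeholders; the options after -vmargs are passed to the JVM):

```
-vmargs
-Xms512m
-Xmx2048m
```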
How do I increase the heap size available to Eclipse?
Since Eclipse is a Java program, you can increase the heap size of Eclipse by using the JVM memory options -Xms and -Xmx. There are two ways to provide JVM options to Eclipse: either update the Eclipse shortcut or add -vmargs to the eclipse.ini file.

How do I resolve heap size?
There are several ways to eliminate a heap memory issue: Increase the maximum amount of heap available to the VM using the -Xmx VM argument. Use partitioning to distribute the data over additional machines. Overflow or expire the region data to reduce the heap memory footprint of the regions.
How do I fix out of memory in Eclipse?
Tuning Eclipse Performance and Avoiding OutOfMemory Exceptions
- Go to your Eclipse setup folder.
- If you are running Eclipse on Mac OS X, then right-click on the eclipse.app icon and click on Show Package Contents.
- Open eclipse.ini file.
- Change the parameters below, e.g. -Xms512m.
- Add the parameters below, e.g. -XX:PermSize=256m.
How do I free up JVM memory?
There is no way to force the JVM to free up memory; System.gc() is just a hint. It's up to the GC to manage the memory (do note there are various types of memory, e.g. heap, metaspace, off-heap).
How do I fix heap size limit exceeding?
If you find the heap size error, it is always caused by a SOQL statement returning 30,000 records or more, OR collection objects holding too many records in memory. To fix it, remove the SOQL statement and put a "where condition" to return a limited number of records (1 to 5), and fine-tune any collection object.
How do I see heap memory in Eclipse?
Go to Window > Preferences > General, enable Show heap status and click OK. In the status bar of Eclipse (bottom of the screen) a new UI element will appear. We can see the amount of memory used by the application (including garbage that has not been collected), in the example 111MB.
What is the max heap size for 64 bit JVM?
For 64 bit platforms and Java stacks in general, the recommended Maximum Heap range for WebSphere Application Server, would be between (4096M – 8192M) or (4G – 8G).
What happens if heap is full?
When the heap becomes full, garbage is collected. During the garbage collection objects that are no longer used are cleared, thus making space for new objects.
What causes Java heap space error?
Usually, this error is thrown when there is insufficient space to allocate an object in the Java heap. In this case, The garbage collector cannot make space available to accommodate a new object, and the heap cannot be expanded further.
Resolve the “java.lang.outofmemoryerror: java heap space” issue in spark and hive(tez and mr).

OOM in the Driver process
To resolve the issue:
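A common remedy is to increase the driver memory and avoid collecting large results back to the driver; a sketch with placeholder values:

```sh
spark-submit --driver-memory 4g --class com.example.MyJob my-job.jar
# or equivalently
spark-submit --conf spark.driver.memory=4g --class com.example.MyJob my-job.jar
```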
OOM in the Executor process
We can tune the memory provided to the Executor and the number of executors to distribute the load.
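For example (a sketch with placeholder values; --num-executors applies when running on YARN):

```sh
spark-submit --executor-memory 6g --num-executors 10 --executor-cores 2 --class com.example.MyJob my-job.jar
```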
In HIVE (Tez or MR)
Example ERROR message
Also, consider tuning the code and data size for better performance.
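For the Hive case, the usual knobs are the container size and the child JVM heap; a sketch with placeholder values (the java.opts heap is typically set to roughly 80% of the container size):

```sql
-- Hive on Tez
SET hive.tez.container.size=4096;
SET hive.tez.java.opts=-Xmx3276m;

-- Hive on MapReduce
SET mapreduce.map.memory.mb=4096;
SET mapreduce.map.java.opts=-Xmx3276m;
SET mapreduce.reduce.memory.mb=4096;
SET mapreduce.reduce.java.opts=-Xmx3276m;
```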
Good luck with your learning!!
How do I resolve the "java.lang.OutOfMemoryError: Java heap space" error in AWS Glue?
Last updated: 2022-02-07
My AWS Glue job fails with "Command failed with exit code 1" and Amazon CloudWatch Logs shows the "java.lang.OutOfMemoryError: Java heap space" error.
Short description
The "java.lang.OutOfMemoryError: Java heap space" error indicates that a driver or executor process is running out of memory. To determine whether a driver or an executor causes the OOM, see Debugging OOM exceptions and job abnormalities . The following resolution is for driver OOM exceptions only.
Driver OOM exceptions commonly happen when an Apache Spark job reads a large number of small files from Amazon Simple Storage Service (Amazon S3). Resolve driver OOM exceptions with DynamicFrames using one or more of the following methods.
Activate grouping
When you activate the grouping feature, tasks process multiple files instead of individual files. For more information, see Fix the processing of multiple files using grouping.
Activate useS3ListImplementation
AWS Glue creates a file index in driver memory when it lists files. When you set useS3ListImplementation to True, as shown in the following example, AWS Glue doesn't cache the list of files in memory all at once. Instead, AWS Glue caches the list in batches. This means that the driver is less likely to run out of memory.
Here's an example of how to activate useS3ListImplementation with from_catalog :
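A sketch (assuming a glueContext has already been created; the database and table names are placeholders):

```python
datasource = glueContext.create_dynamic_frame.from_catalog(
    database="my_database",
    table_name="my_table",
    additional_options={"useS3ListImplementation": True}
)
```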
Here's an example of how to activate useS3ListImplementation with from_options :
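A sketch (assuming a glueContext has already been created; the S3 path and format are placeholders):

```python
datasource = glueContext.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={
        "paths": ["s3://my-bucket/my-prefix/"],
        "useS3ListImplementation": True
    },
    format="json"
)
```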
The useS3ListImplementation feature is an implementation of the Amazon S3 ListKeys operation , which splits large results sets into multiple responses. It's a best practice to use useS3ListImplementation with job bookmarks.
Additional troubleshooting
If grouping and useS3ListImplementation don't resolve driver OOM exceptions, try the following:
- Use CloudWatch Logs and CloudWatch metrics to analyze driver memory . Set up CloudWatch alarms to alert you when specific thresholds are breached in your job.
- Avoid using actions such as collect and count . These actions collect results on the driver, and that can cause driver OOM exceptions.
- Analyze your dataset and select the right worker type for your job. Consider scaling up to G.1X or G.2X.
Related information
Optimize memory management in AWS Glue
Reading input files in larger groups
Best practices for successfully managing memory for Apache Spark applications on Amazon EMR
How to: Fix the Java heap space error that may occur during the import of a huge repository in Linux OS
This situation and stack trace may occur during the import of a repository that contains a large number of objects.
To fix this problem, the Java heap space value should be increased to a value that allows you to finish the import operation without any memory related errors.
To do this, navigate to the configuration files that contain this adjustment:
- buildomatic\js-import.sh
- buildomatic\bin\js-import-export.sh
- buildomatic\bin\import-export.xml
Then set the same value for the -Xmx parameter in the 3 configuration files mentioned above, as illustrated below.
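For illustration only (the exact variable that carries the JVM options differs between releases; the point is simply to raise -Xmx consistently in all three files):

```sh
# e.g. in buildomatic/js-import.sh and buildomatic/bin/js-import-export.sh
JAVA_OPTS="$JAVA_OPTS -Xmx2048m"
```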
How to resolve SOLR heap size issue
We recently started facing a heap size issue on one of our SOLR instances:
Error: Caused by: java.lang.OutOfMemoryError: Java heap space
I thought it would be an easy fix by just adding more memory, but it wasn't as straightforward as I thought. Before you jump to the solution, I will share a good link which explains the cause of it in the first place.
- https://sitecore.stackexchange.com/questions/622/how-much-memory-does-the-jvm-for-solr-need
This can be fixed by updating the memory value to double what is currently present in your SOLR instance (the default is 512 MB).
- Go to the SOLR instance.
- Open solr.in.cmd file (C:\<SOLR instance>\solr-<version>\bin)
- Search for “SOLR_JAVA_MEM”.
- You will see a commented line – “REM set SOLR_JAVA_MEM=-Xms512m -Xmx512m”.
- Remove the REM at the beginning to uncomment it.
- Change the value to 1024m – “set SOLR_JAVA_MEM=-Xms1024m -Xmx1024m”
- Save the file and restart the service.
Tip: It wasn’t allowing me to use 1024m and the service was throwing an error; I used 1000m and it worked.
If the above steps don’t work, then you can update the values in the solr.cmd file.
- Search for “SOLR_JAVA_MEM” in Solr.cmd file.
- You will see this line – “IF “%SOLR_JAVA_MEM%”==”” set SOLR_JAVA_MEM=-Xms512m -Xmx512m”
- Comment the line by adding REM at the beginning.
- Add a new line below it – “set SOLR_JAVA_MEM=-Xms1024m -Xmx1024m”
The links that I referred to while troubleshooting are:
- https://sitecore.stackexchange.com/questions/8849/java-lang-outofmemoryerror-during-solr-index-rebuilding
- https://horizontal.blog/2016/04/09/solr-core-configuration-considerations/
Thank you.. Keep Learning.. Keep Sitecoring.. 🙂