Out-of-Memory in RAD XE7

Abstract: Recommendations for addressing Out of Memory errors in the IDE.

We have seen several reports of “out-of-memory” problems in RAD Studio XE7. While the actual error message you see might differ (including access violations), the underlying cause can be the failed creation of an object due to a lack of available memory. While similar problems have been experienced in other recent versions, for some developers XE7 has made them more noticeable (while other developers find XE7 more stable than previous releases).

You can find an example of the feedback we are receiving (and some of the possible solutions) in the comments to this quality portal report:


The Embarcadero R&D team is actively working on solutions to address out-of-memory scenarios, and in fact XE7 already includes some options to address similar situations (like using out-of-process compilation). After recent reports, R&D has increased its efforts in this direction, looking into all potential workarounds, while working on permanent long-term solutions.

In this document, however, rather than highlighting these longer-term plans, our goal is to provide some solutions, workarounds, and suggestions you should consider implementing today. Some of our customers have used these suggestions and workarounds to overcome their problems. These issues primarily show up in larger applications (or large project groups with many applications), which can be very different in their code, project management, and architecture.

    1. Why does the IDE need so much memory?

The first question is why the RAD Studio IDE needs so much memory (and more than in past versions). The main reason relates to the parsing technologies used for the Code Insight features (Code Completion, Error Insight, and the like) that help improve productivity when writing code. The second big consumer of memory is the compiler itself, which caches compiled DCUs to dramatically increase compilation speed. By default, the compiler shares the IDE's memory space, and the debugger also takes advantage of the same information.

The next question is why the memory used by the IDE increased in XE7. Basically, it boils down to having more features in the IDE, the compiler(s), the RTL, the component libraries, and many other subsystems. Each of these might add only a small memory requirement, but together they add up.

This type of memory issue is a “ceiling issue”: if the IDE's memory usage while working on and compiling your application is 95% of what is available, there is no problem. Add another 2 or 3% and you might occasionally hit the ceiling. Add another couple of percentage points, and you’ll likely experience an out-of-memory error. This is why a project that worked fine in previous versions might have issues in XE7: even though the Embarcadero R&D team has fixed bugs and leaks that were present in earlier versions of the IDE, those fixes save less memory than the extra memory required by new features... or by that one additional unit in your program.

While in detailed technical terms this is a little more complicated, we hope the explanation helps you understand that the issue is not due to new errors introduced in XE7. We have spent quite some time looking into these scenarios, but we haven’t found any significant memory leak in any area of the product that could be the root cause of the current errors. As such, it is correct to describe these scenarios as “out-of-memory” (or OOM) errors, and not as memory leaks (as you might have seen them described).

There is another element to consider. In some cases, the issue is not just the total amount of memory available to the IDE, but the amount of contiguous memory available at a given time. In other words, you might hit an out-of-memory error even when there seems to be enough memory available, because that memory is quite fragmented (which can happen after many allocation and de-allocation cycles). This is why closing the current project group (and also closing and reopening the IDE) can bring significant improvements.

    2. Out-of-Process Build

If you are experiencing out-of-memory issues (or related issues), the first thing you should do is enable out-of-process compilation from the IDE (or, to be more precise, the execution of MSBuild out of process). In short, rather than loading the compiler as a DLL in the memory space of the IDE (adding to the memory used for parsing and so on), the IDE will handle your compilation requests (including plain and simple Ctrl+F9 keystrokes) by creating a separate process, running the compiler in it, and terminating that process (releasing all of the memory it used) when compilation is done. This has two net effects: one is allowing the compiler to use more memory, and the second is cleaning up completely after each operation. The drawback is that, without a DCU cache, compilation generally takes longer, and you need to instrument the executable with debug information if you need to debug the application. To enable this mode:


  • Project | Options | Delphi Compiler, set Use MSBuild externally to compile (do this for every project, if you have a project group)
  • Project | Options | Delphi Compiler | Compiling | Debugging, set Use debug .dcus to False
  • Project | Options | Delphi Compiler | Linking, set Include remote debug symbols to True
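As a point of reference, the external build the IDE triggers corresponds closely to what you can run yourself with MSBuild from a command prompt. Here is a minimal sketch; the installation path and project name are placeholders you need to adapt:

```shell
rem Set up the RAD Studio XE7 environment variables (adjust the path to your installation).
call "C:\Program Files (x86)\Embarcadero\Studio\15.0\bin\rsvars.bat"

rem Build the project in a separate process; all memory used by the compiler
rem is released back to the system when msbuild terminates.
msbuild MyProject.dproj /t:Build /p:Config=Debug /p:Platform=Win32
```

Because the compiler process runs and exits, no DCU cache survives between builds, which is exactly the trade-off described above.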

The process is explained in the XE7 doc wiki at:


For C++Builder the approach is the same but the steps are slightly different:

  • Project | Options | Project Properties | "Run C++ compiler in a separate process"
  • More hints:
    • There is a reported issue with using the external MSBuild if the project path includes non-ANSI characters (https://quality.embarcadero.com/browse/RSP-9613)
    • Remember to enable debug information at the linker level, or the debugger won’t work at all
    • There is also a (small) leak in the out-of-process build for Delphi applications, but it is less severe and takes much longer to become a problem (using the remote debugger locally, as explained below, resolves this leak).

    3. Reduce project group size

According to our tests, the large amount of memory used when compiling and building a very large project (or many projects in a project group) is not an actual leak, as closing the project group reclaims it. This operation frees the DCU cache. Note that if you don’t really need to keep building all projects in the group, you could split the group so that you can build one set of projects, close that group, and then move your work to a different, smaller project group. While creating an all-encompassing project group can indeed be handy, it may end up making the out-of-memory issues more likely to happen.

Notice in particular that if several projects in a project group share a lot of libraries (and source code), those units are recompiled and cached separately for each project, because the compiler settings could differ between projects and the generated DCUs might not be compatible. Again, separating the projects into different groups can help. However, this is also a good scenario for taking advantage of runtime packages, as explained later.

    4. Reduce debugger footprint (when using packages)

When you debug a stand-alone executable, the debugger needs to load its symbols. Since the linker trims unused types, the information loaded is only what the debugger needs.

When you are using runtime packages, however, the scenario is different. In fact, you are referring to an entire package even when you are using only one component from it. This puts a lot of burden on the debugger. However, it is a little-known feature that you can select the runtime packages for which you want the debugger to load symbols.

If you load the main executable in the IDE and activate it (if part of a project group), you can do the following:

  • Open Project | Options | Debugger | Symbol Tables and uncheck the "Load all symbols" checkbox
  • To load the symbols for the modules you need, there are two alternatives:
    • Start debugging, right-click the packages in the Modules view, and choose to load the symbols for the most useful ones
    • In Project | Options | Debugger | Symbol Tables, click the button labeled "New...", enter the executable name for "Module Name:" and $(OUTPUTPATH) for "Symbol table path:", and do the same for any packages for which debugging is needed

Symbol tables can be added and removed from the list in Project Options as debugging needs change.

Remember that if you are not using runtime packages you can enable them using

  • Project | Options | Packages | Runtime Packages

Another alternative for debugging is to use the remote debugger, even for a local Win32 application:

  1. Install and run the Platform Assistant Server (PAServer)
  2. Create a Profile for Win32, in the IDE
    1. Tools|Options|Connection Profile Manager|Add
    2. Give it a sensible name
    3. Platform: select "32-bit Windows"
    4. Give it your computer's real IP address (or name) (not localhost)
    5. Test Connection to make sure you can connect to PAServer
  3. In the "Project Manager" right click the Target Platforms |"32-bit Windows"|"Properties"|"Profile" and select your profile (created in step 2)
  4. Now when you debug your application, it will run under the external debugger, and you should have a lot more memory free in the IDE
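As a sketch of step 1, launching the Platform Assistant on the local machine might look like the following (the port shown is the usual Platform Assistant default; the -port and -password switches may vary by version, and the password is an arbitrary example):

```shell
rem Start the Platform Assistant Server; it will accept the connection
rem made from the IDE's 32-bit Windows profile.
paserver -port=64211 -password=mypassword
```

If run without -password, PAServer prompts for one interactively; use the same password when you run Test Connection in the connection profile.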

    5. Create an IDE image with fewer features

Besides reducing the memory requirements when compiling and debugging applications, another option that has worked for a few customers is to reduce the basic memory footprint of the IDE. While the IDE uses dynamic, delayed loading for its libraries (for example, the VCL components are actually loaded only the first time you open the VCL designer), you most often end up with more IDE features and components loaded in memory than you need for a specific project. Of course, this also includes third-party components.

Rather than modifying your primary configuration, you can create a shortcut that executes the RAD Studio IDE with the -r command-line parameter followed by a name, using a command line like the following to create a setting called “lean”:

BDS.EXE -rlean

This will create a brand-new configuration for the IDE in the registry, separate from your standard configuration. You can combine it with the -p command-line parameter to activate only one personality (particularly useful if you installed the full RAD Studio product).
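For example, to start a separate “lean” configuration with only the Delphi personality loaded (the name “lean” is arbitrary; valid -p values include Delphi and CBuilder):

```shell
rem Launch the IDE with an alternate registry configuration and a single personality.
bds.exe -rlean -pDelphi
```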

After this step, you can edit the configuration (for example, removing unneeded design-time packages) in the IDE itself or, for some more aggressive cuts, by directly editing the list of design-time packages in the registry.
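For reference, XE7 stores its settings under registry version 15.0, and the design-time package list of the default configuration can be inspected like this (a configuration created with -r lives under its own separate key):

```shell
rem List the design-time packages registered for the default XE7 configuration.
reg query "HKCU\Software\Embarcadero\BDS\15.0\Known Packages"
```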

If you are using multiple IDE plug-in packages, they will affect the IDE (even more than runtime component packages do), using extra memory. This is particularly significant for any plug-in that parses the code (of a large project) and keeps information in memory.

A more extreme option, something we really don’t encourage unless everything else fails, is to disable the structure pane of the IDE by adding the -noparser parameter on the IDE command line. This will affect many IDE productivity features that help you code, but it does save some memory and could improve stability. Several customers reported it worked for them, and they accepted trading the features for the extra stability.

Speaking of radical options that trim significant features, some customers have directly disabled some of the IDE's libraries by renaming them, including: Borland.Studio.Delphi.dll, Borland.Studio.Refactoring.dll, and refactoride210.bpl. This is a rather extreme approach, but it has helped some users get back to using the IDE with limited issues.

    6. Consider changes to an application architecture

In some scenarios, an application architecture adopted a long time ago is showing its age, and our real recommendation is to consider updating it now. This is not just to overcome IDE out-of-memory errors, but because it also makes your application more robust.

The specific scenario we are referring to here is the case of project groups with dozens of DLLs used by a main application. While this does make sense in Windows terms, it causes a lot of duplicated-symbol issues within the application itself, because the same class can be compiled multiple times into multiple modules.

Runtime packages were introduced in the early days of Delphi specifically to make modular applications more robust, and this is still the preferred architecture today. As a side effect, not only are shared units compiled only once (or not compiled at all, if they are in a prebuilt package), but they are also loaded only once (at most) by the debugger and all other subsystems. Ultimately, there will be only one copy (instead of many) of those units in your compiled code.

We fully understand that re-architecting an existing DLL-based application is a significant effort, but we strongly suggest moving in that direction regardless of the IDE memory problems… though only after trying the other solutions outlined above.

    7. Be aware of the “Growth by Generics”

Another scenario that might depend on your application code, and that can increase the memory used by the compiler and the debugger, relates to the way generic data types are used. The way the Object Pascal compiler works can cause the generation of many different types based on the same generic definition, at times even totally identical types compiled in different modules. While we certainly won’t suggest removing generics (quite the contrary), there are a few options to consider:

  • Try to avoid circular unit references for units defining core generic types
  • Define and use the same concrete type definitions when possible
  • If possible, refactor generics to share code in base classes, from which a generic class inherits

    8. R&D is working on a long-term solution

As we mentioned earlier, the long-term solution for a problem like this out-of-memory issue is to chase any and all memory leaks, improve caching algorithms to make them more effective (and more flexible in low-memory scenarios), optimize parsing code for space, and consider increasing the overall amount of memory available to the IDE process.

As you certainly know, even a 32-bit application can be configured to use all of the 4 GB of theoretically available memory, rather than the standard 2 GB. Rest assured that we have tried this internally, but more effort is required to productize a solution like this with the quality you expect. The RAD Studio IDE has modules written in multiple languages and technologies, and it will take our R&D team some time before all modules are safe to use with the larger address space. Work is being done today to identify and fix these issues, as this option will significantly raise the “memory ceiling” of the IDE, removing a large number of out-of-memory error scenarios.
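For background, the mechanism in question is the large-address-aware flag in the executable header (IMAGE_FILE_LARGE_ADDRESS_AWARE). Purely as an illustration of how the flag can be inspected, and not as a supported configuration, the Microsoft toolchain's dumpbin utility reports it (MyApp.exe is a placeholder):

```shell
rem Requires the MSVC build tools; when the flag is set, the header dump includes
rem the line "Application can handle large (>2GB) addresses".
dumpbin /headers MyApp.exe | findstr /i "large"
```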

Ultimately, the goal is to move the IDE to a 64-bit application, but this is a larger effort that will take extra time, while we (and many of our customers) need a solution as soon as possible.

    9. Let’s have your feedback

We are soliciting feedback from any customer who has experienced similar out-of-memory errors: whether the solutions and workarounds in this document have helped (so we can highlight them and suggest that others do the same) and which roadblocks you might have encountered.

This is an active working document that we plan to update with more information from our support and R&D teams, and with more information coming from direct customer experiences. For feedback, feel free to email the RAD Studio product management team directly at marco.cantu@embarcadero.com.