Detecting invalidated iterators in Visual Studio


When working with the STL you are certainly using iterators at one point or another. One of the common mishaps when working with iterators is running into undefined behavior because an iterator is, or becomes, invalid. This can happen not only for obvious reasons (like forgetting to initialize an iterator) but also when performing certain container operations.

Undefined behavior due to iterator invalidation

Take a look at the following code sample:

We define a simple class which stores an id and provides a Print() function which outputs the id to the console.

We then construct 4 instances of this class and add them (as pointers) to a vector.

Finally we loop over the elements, output the current item, and add two more elements to the vector inside the loop.

If you run the code in the release configuration, it might produce the output you more or less expected/intended, produce some garbled output, or crash.
Doing a couple of successive test runs using VS 2015 Update 3, the following output was produced:

In another run the same application triggered this output however:

The issue in the code is most likely quite obvious to everybody: inside the for-loop we push additional elements onto the vector, and vector iterators are invalidated if a reallocation occurs. [1]

So by adding the vector::push_back() calls inside the for-loop we cause undefined behavior. Bear in mind that while it's an obvious issue in this tiny code sample, these kinds of issues can be really hard to trace down in real-world applications, especially ones which are hugely complex and/or lack a proper system design.

One of the nastier results these bugs can cause are memory corruptions, which can be a headache to resolve, especially when the conditions are strongly impacted by the exact runtime timing between multiple threads. That can make them close to impossible to trigger reliably, and even when they do trigger, they are not guaranteed to cause a memory corruption in every run (as presented above already); that's the nature of the behavior being undefined.


Fortunately, the developer is not left alone in tracing down these issues. Visual Studio provides built-in checks in its STL implementation which detect such issues and trigger assertions/runtime exceptions when an issue is detected.

When you run the sample code above in debug configuration, you’ll get the following assertion:
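The screenshot of the assertion is missing from this copy; the dialog shown by the debug runtime reads roughly as follows (program path and line number are illustrative, the expression text is the actual message for this class of error):

```
Debug Assertion Failed!

Program: C:\...\sample.exe
File: ...\vc\include\vector
Line: ...

Expression: vector iterator not incrementable
```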

The debugger will point you directly to the fact that using the ++-operator on the iterator triggered the assertion, as the iterator became invalid after the previous push_back() call.

These detailed checks are part of Visual Studio's Debug Iterator Support [2], which contains quite a bunch of checks to ensure proper/valid usage of STL iterators.

These checks are obviously not free: they degrade performance and increase the memory footprint of an application. Therefore, they are disabled by default in the release configuration and only enabled in the debug configuration.

Different levels for _ITERATOR_DEBUG_LEVEL

The checks come in two flavors: level 1 (aka Checked Iterators) and level 2 (aka Debug Iterator Support).
While level 1 checks perform certain cheap checks on iterators which ensure that out-of-bounds access is detected [3] (as is the case in the sample code above), level 2 checks perform additional, more costly checks and can therefore detect other types of programming mishaps.

The level can be set using the _ITERATOR_DEBUG_LEVEL macro. However, level 2 checks require the use of the debug versions of the Microsoft Visual Studio runtime (i.e. /MDd or /MTd) and therefore are practically limited to the debug configuration.

Level 1 checks, on the other hand, are also supported in release builds, and if we add
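(The define being referred to is missing from this copy; it is presumably the following line, inferred from the _ITERATOR_DEBUG_LEVEL macro introduced above:)

```cpp
#define _ITERATOR_DEBUG_LEVEL 1
```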

as the first line in the sample code above and run the program in release configuration we get the invalid parameter handler exception:

This makes it straightforward to detect such issues and prevents situations where you spend days or even weeks tracing down memory corruptions caused by such bugs.
Enabling the iterator debug level can therefore be a real time saver.


The challenge with the iterator checks is enabling the feature. While they are enabled by default in the debug configuration and will therefore basically work out of the box, debug configurations are not always something you can use in larger projects (e.g. in games). They also won't cover situations where the conditions triggering iterator invalidation only occur in the release configuration (e.g. because the conditions are related to certain optimizations which change the runtime behavior of the program, or because certain debug checks set up in the debug configuration are skipped).

You might therefore need to enable them in release builds, but this can imply a certain work overhead when 3rd-party libraries have to be taken into account.

The _ITERATOR_DEBUG_LEVEL setting must be set to the same value throughout all libraries linked into the program. Since the default is to have it disabled in release configurations, this means one has to (re-)build all 3rd-party libraries and enable the setting for each of them.

Fortunately, not being able to get access to 3rd-party source code when licensing libraries/frameworks/SDKs is mostly a thing of the past, so technically this should be doable. Depending on the project, it can still be quite an undertaking though.

If you happen to work on a project where using the debug configuration for your daily work is unfeasible due to performance limitations, you might want to consider introducing a third project configuration which corresponds to your release configuration plus enabled iterator checks. There might be other settings worth changing in such a configuration too (e.g. disabling whole-program optimization to save build time for daily work while still keeping the configuration close to the released version), and a separate "developer release" configuration also enables you to add code which is only used during development while leaving it out of the shipped versions.

A historical side note

Checked iterators are nothing recently introduced; they have been around in Visual Studio for over a decade. In versions prior to Visual Studio 2010, however, they were controlled by two other macros:
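(The enumeration is missing from this copy; the two macros in question were _SECURE_SCL and _HAS_ITERATOR_DEBUGGING:)

```cpp
#define _SECURE_SCL 1              // pre-VS2010: checked iterators (rough equivalent of level 1)
#define _HAS_ITERATOR_DEBUGGING 1  // pre-VS2010: debug iterator support (rough equivalent of level 2)
```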


These have been deprecated in favor of the new _ITERATOR_DEBUG_LEVEL macro and should be replaced with that one.

Also, if you previously tried out the iterator debugging functionality, you might have deemed it unsuitable for your needs. Earlier versions of Visual Studio suffered from certain bugs triggering false-positive assertions, and especially at the beginning the implementation wasn't too performant. Microsoft has worked on these downsides over the years [4], and especially the level 1 setting has a very low performance overhead in the author's experience, so you might want to give it another try.


Iterator checks can be a real time saver for the daily work. Especially in larger projects with multi-threading and developers of varying experience levels, their advantages can easily justify the effort required to enable them in the project.

Bear in mind that even if you are lucky and never make the mistake of writing code causing invalid iterator use, having iterator checks enabled can still save you time: when investigating certain issues (like memory corruptions) you can be quite certain that invalidated iterators are unlikely to be the cause of these sometimes really hard-to-trace bugs. Therefore, you can focus your effort on other potential root causes and should be able to trace down the issue much faster.


[1] = C++11 standard / Working Draft N3242 – “Remarks: Causes reallocation if the new size is greater than the old capacity […]” / “Remarks: Reallocation invalidates all the references, pointers, and iterators referring to the elements in the sequence. […]”
[2] =
[3] =
[4] =

Tracing down application freezes.


A not too uncommon issue with larger projects (especially those utilizing multiple threads, for example games) are freezes (alternatively called application hangs). From a user's point of view the app/game appears to not respond at all and simply hangs.

As of Windows Vista a new functionality was added, so-called "Window Ghosting". [1] Without going into too much detail, this feature detects whether an application responds to the Windows message queue within 5 seconds; if it doesn't, it marks the application as "not responding" and eventually provides a popup allowing the user to terminate the application. This "potential" freeze is then uploaded to the Windows Error Reporting facility, where registered developers can access the details about the freeze conditions.

The caveat here is that your company might not want to invest the time/cost associated with getting access to this data, or might have intentionally disabled the Window Ghosting functionality because it would cause issues with your particular application.

Retrieving user provided dumps

An alternative to the Windows Error Reporting facility is to ask users to provide you with two successive dumps at the time they experience the freeze/hang.

The idea is that you can then simply review the call stacks in the two dumps and directly see whether a particular thread hangs inside a loop or has run into a deadlock condition.

Retrieving two dumps is important, since a single dump just represents the current state of the app but doesn't necessarily prove that it shows a freeze/hang condition (the application might just be very slow in what it's doing at the moment, and you might misinterpret the dump state as a freeze). A second dump, however, provides you with the means to compare the two application states, which usually points you directly to a freeze, or alternatively proves that the app did not freeze at all (in which case you might be looking at a performance bottleneck which would have to be tackled to resolve the issue at hand).

Luckily, starting with Windows Vista it became much easier for users to provide dump files to developers. [2] Before Windows Vista, users had to install additional tools (like ProcDump) which are not too intuitive for non-developers to use; since Windows Vista, the functionality to create dump files is built directly into the Task Manager.

Creating dump files using the task manager

To create a dump file on Windows Vista or later, open the Task Manager (e.g. by right-clicking the task bar and selecting "Task Manager"), locate the unresponsive application in the list, right-click the entry, and select "Create dump file".

A popup will appear stating that the dump is being generated. Especially for larger applications like games, this can take up to several minutes. Eventually the dump will have been generated, and another popup appears stating the location the dump was written to:

As the user you should then wait a couple of seconds and repeat that step. Afterwards you can provide the developer/support with the two dump files so they can work out what caused the application/game to freeze/hang.

Be aware, however, that these files are rather large; it's not uncommon for them to be several GB in size. Fortunately they compress quite well (compression factors of 10-100 are not uncommon).

A word on privacy

Please be aware that these dump files are basically full dumps of the application state, including the entire memory the application uses (that's the main reason why the dump file is so large), all the modules currently loaded on your machine, and additional system details which might contain personal information.

Since the dump contains the entire memory footprint the application has access to, it will also contain any personal data (even potentially unencrypted passwords) the application might have stored. Even if the application doesn't store passwords/personal data itself, the dump might still contain personal information and passwords which other applications failed to properly clean up before freeing the memory (i.e. if another application suffers from a security issue). So be careful whom you send these dumps to and how.


[1] =
[2] =

Initialization/Termination order of globals and local statics.


The previous blog post went into detail about how to control the order of initializing globals in Visual Studio (with regard to the order across different translation units, which is undefined by the C++ standard).

However, the standard doesn’t leave things completely at the compiler’s discretion when it comes to the order within a single translation unit.

This second blog post describes this, as well as a not so widely known behavior concerning when static local objects are terminated.

Initialization order within a single translation unit

While the standard doesn't define the order in which global objects in different translation units are initialized, it's quite specific about initializing globals within the same translation unit (with the single exception of class template static data members). [1]

The rule for the order is quite simple: Objects are initialized exactly in the order they are defined.

Good practice therefore is to put all definitions of globals in a single place within the cpp file (e.g. at the end). The order in which these globals are defined then provides a direct overview/documentation of the order in which they will be initialized.

In this example, two global/static objects are defined. Because of the order of their definitions, g_String will be initialized before A::MyNumber.

Initialization order of static locals

It should be well known that local statics are initialized only once, and the standard ensures that the initialization is done prior to the local object being used. Hence, it's common practice to write constructs like the following one to ensure that costly operations are performed only on demand:

Assume the constructor of the class Bar allocates some limited resource. Since Bar is defined as a local static, the developer surely expects the resource to be allocated only if foo() is called (and not at all if the application never calls foo() at runtime). Having dealt with quite a bunch of different build environments, that's also the behavior the author has always observed in practice.

Regardless, the standard allows different behavior and explicitly permits implementations to perform the initialization of static local objects the same way globals are initialized. [2] In effect this means that localBar could be initialized by a particular compiler already when the application starts up.

Termination order of globals and static locals

If you ask 10 C++ developers in which order globals/statics are terminated, 9 out of 10 will most probably tell you that they are terminated in reverse order of initialization. While this is true in most cases, it doesn't hold for static locals in all situations.

Try out the following example:

Running this sample code, you’ll get the following output:

What's unexpected to most developers here is seeing that the object A is destroyed before B, even though A was constructed before B.

If you slightly modify the example and call Test() inside main() rather than in the ctor of A, you'll get the "usual" termination order and will see that A's dtor is called after B's dtor.

The explanation for this quite specific behavior can be found in the C++ standard as well. [3] In simpler words than the standard's: if a static local is initialized during the construction of a global object, it will be terminated after the object whose constructor called the function containing the static local.


[1] C++ 03 standard – 3.6.2 (1)
“[…]Other objects defined in namespace scope have ordered initialization. Objects defined within a single translation unit and with ordered initialization shall be initialized in the order of their definitions in the translation unit.[…]”
[2] C++ 03 standard – 6.7 (4)
“[…]An implementation is permitted to perform early initialization of other local objects with static storage duration […]. Otherwise such an object is initialized the first time control passes through its declaration;[…]”
[3] C++ 03 standard – 3.6.3 (1)
“[…] These objects [objects of static storage duration] are destroyed in the reverse order of the completion of their constructor […]. […] For an object of […] class type, all subobjects of that object are destroyed before any local object with static storage duration initialized during the construction of the subobject is destroyed. […]”


Initialization order of globals in Visual Studio.


Any developer will sooner or later stumble across the issue of the undefined order in which global and static objects are initialized.

A not so uncommon example is a custom memory management system. Usually you want the memory management system to be initialized prior to any allocation and shut down only after all allocated memory has been freed again.
This is problematic if global/static objects rely on memory allocations.

The wrong approach

Assume you would initialize the memory manager as the first call in your main()-function and shut it down as the last step prior to returning from main().

The issue you will end up with is that other global objects are initialized before your initialization call in main(). You might consider some implicit initialization of the memory manager to be a solution. Besides the added complexity and some (unavoidable) performance penalty, it won't help with the issue that at program shutdown the corresponding destruction of these globals happens after main() has already returned and the memory manager was shut down.

You might think of handling this too, but that won't work (at least not in a sane/clean way) because your memory manager will certainly require some resources of its own which need to be freed at shutdown.

How about atexit()?

So you might consider the alternative approach of using an atexit()-registered shutdown function. This is however especially bad for a memory manager, because:

  1. atexit()-registered functions are processed in LIFO order, so this won't change the behavior you faced above when calling the shutdown function last in your main() function
  2. atexit() itself uses heap-allocated memory, which you presumably routed through your memory manager

Let’s use a global

So a third idea comes to mind: put the initialization and termination handling of the memory manager in a global object's constructor/destructor itself.

The problem you are facing here is how to ensure that this particular global object is initialized before all other global objects and destroyed last.

The solution

A common approach to prevent problems caused by the undefined order is to stop using global and static objects altogether (e.g. by relying on pointers and defining an explicit initialization order in the app's main() function). However, this approach is not always feasible and comes with certain drawbacks (which are outside the scope of this blog post). [2]

A different solution is provided in Visual Studio (with the MS CRT) by means of the “init_seg”-pragma which can be used to control the initialization order. [1]

To understand how this works, you should know that global objects are initialized as part of the CRT initialization. [3]
In particular, the CRT places the initializers for all globals in the ".CRT$XCU" [4] linker section. The trick is to use the init_seg pragma to specify that the initializers for the globals in the corresponding translation unit go into a different section (i.e. one before the ".CRT$XCU" section but after ".CRT$XCA" [5]).

That can be done by adding the following pragma to the particular cpp file containing the global initialization:

#pragma init_seg(".CRT$XCT")

This ensures that your globals in the translation unit will be initialized prior to other globals of your application.
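A minimal sketch of such a translation unit (MSVC-specific; the global's name is hypothetical):

```
// early_init.cpp
#pragma init_seg(".CRT$XCT")

// Constructors of globals defined in this translation unit now run
// before globals placed in the default ".CRT$XCU" group.
MemoryManager g_memoryManager;
```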

A word of warning

However, be careful with this approach and be aware that your global objects' constructors will be called prior to those of other global objects (potentially including global objects used by the CRT itself!). [6]

Also bear in mind that this is an advanced feature which is not too widely used and is (as far as the author is aware) not an officially supported approach/functionality. That means different CRT versions (even different flavors, like debug vs. release runtime) can exhibit different behavior by putting initialization code in different sections. Your application might work fine for years but suddenly stop working and experience crashes (e.g. after a security update to the CRT was released or after you ported your application to a later VS version).

The second concern you need to be aware of are interactions with 3rd-party libraries. If you use different libraries, these could also use the trick of putting their own initialization-related code in a CRT linker section, and your code might then run after (or before) the other library was initialized.

It's therefore important to consider which section you put your initialization code in. In general it shouldn't be a bad idea to put it into the ".CRT$XCT" section (i.e. the closest reasonable section just before the ".CRT$XCU" section where the other globals are initialized) rather than trying to put it in the earliest one (i.e. ".CRT$XCB"). That way you should be on the safer side with regard to a not yet completely initialized CRT, which could otherwise cause quite a couple of sleepless nights tracing down some weird undefined behavior in your application.

On top of this, it's also good practice to keep the constructor/destructor of such global objects as simple as possible and defer any initialization/termination code to the normal program flow (i.e. during main()). This makes it less likely that you run into issues due to an incompletely initialized dependent global object (which could be part of the CRT or of a dependent 3rd-party library).

Verifying whether you run into an issue with the global initialization order

If you run into a crash at program start with the call stack pointing to the dynamic initializer list, and the crash wasn't present without the pragma statement, it probably means you overlooked such a global object dependency. To validate this, you can make use of the linker's map output file and review which CRT linker sections are used.

To do this, first comment out the init_seg pragma and rebuild the program with map file output enabled. Using a text editor, you should be able to locate the ".CRT$XC" sections at the top of the map file, which could look like this:
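(The map file excerpt is missing from this copy; an illustrative section list might look as follows, with addresses and lengths made up and the actual contents depending on your program:)

```
 Start         Length     Name                   Class
 ...
 0003:00000000 00000008H .CRT$XCA                DATA
 0003:00000008 00000004H .CRT$XCT                DATA
 0003:0000000c 00000010H .CRT$XCU                DATA
 ...
```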

These are sorted alphabetically, and you'd see if there's a section which unexpectedly comes before the one you put your globals in. If so, simply change the section you use to a later one.

If you found this information interesting, you might also be interested in this follow-up blog post regarding further details related to the initialization order of globals.

References / Footnotes

[4] To be precise the section name is actually .CRT with XCU being the section group.
[5] The XCA group specifies the __xc_a pointer which marks the start of the global initialization list and therefore no initialization should be put into that group.

The trouble of separate module atexit-stacks

The demo project (Visual Studio 2015 solution) demonstrating the behavior in this article can be downloaded here.


Using atexit() to specify functions to be called when an application terminates is quite common practice. This is especially true for libraries, since the C-standard-specified atexit() function is a way for a library to register its cleanup logic without relying on the 3rd-party application to properly call a specific cleanup function.

This is also what the library the author was working on did. Since using the atexit() function is nothing uncommon, it was quite surprising to observe that the cleanup handling (registered via atexit()) ran after some resources were already freed when compiling the code with Microsoft's Universal C Runtime. In this particular case, this resulted in the cleanup function being stuck in an endless loop, with the result that the app never terminated.

Well known behavior of atexit()

To understand the root cause of the problem, let's first take a look at a simple case of using an atexit()-registered function to stop a thread and wait until the thread has terminated before the hosting application closes cleanly:

(Sidenote on this code: The code is kept as simple as possible to demonstrate the actual problem. The fact that it’s not really thread-safe is not relevant for this topic.)

As we see, the test case is quite simple.
main() spawns a simple worker thread (dummy_worker()) which increments threadCounter when it starts and waits until running is set to false, just to then decrement threadCounter again.
In main() we register the terminateThread() function using atexit() so as to make sure that we cleanly shut down the running thread.
To do that, terminateThread() sets running to false and waits until the thread is signaled (i.e. has terminated) via WaitForSingleObject(), just to then print out the current thread counter value (which we certainly expect to be 0 at this point).
Right before we return from main() we give the thread some time to ensure it has started.

Running this app, we see it behaves as we expected and get the output:
done waiting – counter is: 0

No big surprise here.

atexit() and DLLs

Now let's make things a bit more interesting and move that code inside a DLL (into a startThread() function) which we call from the application's main() function.

Certainly we expect to see the same behavior we saw before. So let’s get the console output:
“done waiting – counter is: 1”

This is not quite what we expected to see. In the end we did cleanly terminate the thread… Or didn’t we?

Understanding what’s going on

To get a better feeling of what’s going on here, let’s add some debug output.

  1. We add another atexit()-registered function (in the application’s main()-function).
  2. We add some output to DllMain() to see how attaching and detaching of threads/processes works.
  3. We print out the state of returning from main() right before it returns.
  4. We add some output at the start of the terminateThread()-function.

Running that code, we get the following output (numbers represent line numbers for reference):
1: process attach
2: thread attach
3: returning from process main
4: atExitMainProcess
5: process detach
6: terminating thread
7: done waiting – counter is: 1

We see that atExitMainProcess() gets called after main() returns, followed by the process detach signal the DLL got, followed by the call to terminateThread() which we registered in the DLL via atexit().

This gives us two interesting hints:

  1. there is no output for the detaching of the thread
  2. the atexit()-registered function of the DLL is called after the atexit()-registered function from the main process

Digging into the depths

To understand the first part, we have to know that terminating a process issues a call to ExitProcess() in the VS runtime once the process has returned from main(). [1]
The first thing ExitProcess() does is to terminate all threads of the process (excluding the calling thread) WITHOUT receiving a DLL_THREAD_DETACH notification. [2]
That explains the fact that we do not receive the thread detach output.
Keep in mind the following additional facts to understand the conclusion further down:

  • after threads were terminated, they become signaled
  • for all DLLs the process-detach notification is sent (that corresponds to line 5 in the output)
    Note that before that step in the ExitProcess() processing, the atexit()-registered function in main() was called (output: line 4)

Let’s keep these facts in mind and take a look at the second part now:

We got the output from the process's atexit()-registered function BEFORE the output of the function we registered via the atexit() call in startThread(), even though atexit() is defined to run the registered functions in LIFO order. [3] So why did we not get the call to terminateThread() before atExitMainProcess() was called?

The explanation is that in the VC runtime each module (i.e. each DLL and the process itself) has its own separate atexit stack (as Doug Harrison explains in these threads [4/8]). This minor detail makes a fundamental difference in this scenario, because it means that the order of the registered atexit() functions depends not only on the order of the atexit() calls, but also on the context (i.e. module) in which they were made.

Understanding the behavior

Now we got to the point of understanding what is going on here.

  1. Upon process termination, the process' atexit()-function stack is processed (output: line 4).
  2. ExitProcess() is called and terminates our thread without the thread-detach notification.
  3. The thread is signaled.
  4. The process detach notification is sent to the DLL (output: line 5).
  5. The DLL is unloaded and processes its own atexit()-function-stack which calls our terminateThread() function (output: line 6).
  6. The call to WaitForSingleObject() returns immediately (since the thread got signaled already).

Hence, we end up with threadCounter still being set to 1.

What the standard says

The question arises whether this behavior actually violates the C or C++ standard.
As far as the author can determine, there is no violation of the standard. It actually turns out that the termination of threads prior to their atexit() functions being called exists to prevent undefined behavior, as specified in the standard itself [5], which explicitly states that threads can be terminated prior to the execution of std::atexit-registered functions in order to prevent undefined behavior. This is particularly noted to allow thread managers as static-storage-duration objects.

On the other side, the specification of atexit() [6/7] doesn't prevent the use of separate atexit()-function stacks per module. So again, there's no standard violation here.

That said: It’s an implementation detail that there are multiple different atexit-stacks and it’s also an implementation detail when the atexit-functions are called in relation to when threads are terminated.

How developers can deal with the facts

For library developers there seem to be only limited options to cope with the situation. Here's a list of possible approaches to compensate for the differences in when atexit()-registered functions are called:

  • ensure your cleanup code actually handles the scenarios where resources were already freed before the cleanup function was called
  • do not use atexit() at all (or at least not in the context of DLLs), but rather provide your own cleanup function which is documented as required to be called by 3rd-party applications utilizing your library to ensure proper resource cleanup
  • do not provide means for explicit cleanup, but rather leave that task to the OS (which will implicitly clean up resources eventually)


The combination of separate per-module atexit stacks and the fact that threads started from a module are killed (without notifications) before the module's atexit()-registered functions are called makes atexit()-registered functions rather unsuitable in situations without complete control over how the code is utilized (i.e. in libraries).

The lack of explicit requirements from the C/C++ standard in this regard, which might have been intentional and for completely valid and sound reasons (which, however, would be beyond the author’s knowledge), unfortunately does not help the situation much. It also raises the question whether this behavior makes sense from a design point of view and whether it doesn’t defeat the purpose of the atexit-design (and therefore could be argued to be a defect in the standard).

In the author’s opinion, the usage of per-module exit stacks is at least questionable, because as it stands, at least for platform- and compiler-independent library development, the lack of an explicit requirement in the standard adds additional complexity to the design of functions utilized via atexit()-calls.


The author would like to thank Branko Čibej and Bert Huijben for their contributions in investigating the topic and sharing their own opinions on this matter.


[1] = Windows Kits 10.0.10240.0 source code: ucrt/startup/exit.cpp: exit_or_terminate_process()
[2] =
[3] =
[4] =
[5] = C++ Working Draft N3242=11-0012 – 3.6.3 paragraph 4
[6] = C++ Working Draft N3242=11-0012 – 18.5 paragraph 5-8
[7] = WG14/N1256 Committee Draft – September 7, 2007 ISO/IEC 9899:TC3 –
[8] =!msg/

The need for copyrights

Whoever starts working on adapting an existing open source project (or creates their own) will eventually get to the question of how to deal with existing copyright notices and whether and how to add one’s own to existing (or new) files.

The first question is: are copyright notices legally required?
The short answer is: no. Legally, copyright notices carry no weight. They can be completely omitted from source code and one’s own work without affecting the fact that the work is still under the author’s copyright. However, copyright notices can help, as they are easy pointers for everyone to find the copyright information.

While googling for the question I found a really nice article by Ben Balter [1], who goes into some more detail on the topic.

I for myself have therefore decided to always add my own copyright markers to source code files [2]. Since I normally just take over the existing license of the original source code, I simply add my copyright right behind the existing copyright notices. (Note: this is because my changes are normally just minor compared to the existing work, and hence I want to support the idea of the original author by ensuring my modifications are covered by the same freedoms he offered his own work under.)

For source code not maintained in a publicly accessible version control system, I also add a note about the changes I did, so everybody can determine which part of the source code (or which modifications) are covered by my copyright in contrast to everything else, which is covered by the original author’s copyright.

Bear in mind that different licenses might however have different requirements on how to relicense your modified source and how to deal with existing copyrights.



Error 500 after upgrading Confluence to 5.2.x


After we upgraded our Confluence instance from 5.1.5 to 5.2.5 we got surprised when trying to access the newly upgraded instance.

Instead of being greeted with the login screen, the webpage showed an Error 500 and Confluence wasn’t accessible at all.


As usual, the first step when running into problems with Confluence is to take a look at the Confluence log. This showed recurring errors like the following:

Searching the web pointed to a closely related issue reported for the Support Tools Plugin (STP): “Upgrading to 3.5.20 results in a java.lang.AbstractMethodError”.

Support Tools Plugin is a bundled plugin in Confluence which provides some useful utilities for administrators, like an automated log file scanner (which reports issues to admins on a daily basis) or a quick way to create support tickets.

The one issue with that report was that it was marked as resolved and was reported to have occurred with version 3.5.20 of the plugin (while we were running 3.5.28). So did we really run into the same issue, or were we facing a completely different problem?


After getting in touch with Lauretha Rura from Atlassian support and Deividi Luvison (who was assigned the Support Tools issue), Deividi confirmed that the compatibility version number for STP versions > 3.5.20 was set incorrectly. While version 3.5.20 correctly stated it is compatible with Confluence >= 5.3, the following versions (including the one we used, 3.5.28) claimed to be compatible with Confluence >= 4.3. That’s how we ended up with that incompatible version of STP getting installed.

While this didn’t seem to be a problem with Confluence <= 5.1.5, with 5.2.5 Confluence refused to work at all.

After having reported this to Deividi the incorrectly set compatibility versions of STP for Confluence were fixed, so that other users will no longer run into this problem.


In case you are suffering from this problem, the easiest way is to uninstall your current (incompatible) version of STP, install the latest compatible one (for Confluence 5.1.5 this would be STP 3.5.10, for instance), and perform the upgrade to Confluence 5.2.x afterwards.

Alternatively, you can also directly upgrade to Confluence >= 5.3, in which case the issue should not surface either.

Removing spam comments in JIRA


At the company I’m working for, I’m administrating a JIRA instance which is being used as an internal bugtracker.

Lately we’ve opened-up JIRA to the public and use it as a platform for part of our product.

There’s unfortunately one problem with that: being a rather small company (with fewer than 20 employees), we develop our product for a large number of customers who only pay for the product once (no recurring costs). The cost is also quite low (around 10-50 Euro) compared to the prices of larger products. If I had to guess, I’d assume we have a customer base in the hundreds of thousands of users.

Compare this to other companies and you get a slight idea why we cannot afford an unlimited JIRA license (which, at the time of writing, would cost us $24,000 plus $12,000 every year, while our current 25-user license only costs $1,200 plus $600 per maintenance renewal).

Since the unlimited user license is out of the question for us, we allowed anonymous access to our JIRA instance for some of the projects. That allows our user base to create and comment on issues directly in our bugtracker.

Unfortunately, allowing anonymous access in JIRA has one bad side effect: it also opens up the bugtracker to spammers, since it no longer requires you to log in before adding comments or creating issues.

For several months this worked out, until a few days ago some spambot detected our instance and started creating spam comments (around 1,500 the first day and another 5,000 the next).

JIRA is really a great tool IMHO, but understandably the product’s and company’s focus is directed towards larger companies. That’s also most likely the reason why there is almost no built-in protection against spam: presumably most customers do not use anonymous access and rather buy the unlimited license so that their users simply create their own accounts, while requests for improved anonymous spam protection have been on record for years (see: JIRA issue 10236 and JIRA issue 8000).

But what do you do if you want to allow anonymous access and run into the situation of a spambot having created a shitload of comments on your instance? Deleting >5,000 comments manually is certainly not an option (that’s roughly 10,000 mouse clicks to get rid of all the entries 🙂 ).

My first idea was obviously to alter the JIRA DB entries directly, but that certainly is not supported and bears a certain risk of breaking things if you don’t know all the details of the DB structure.

Fortunately, I discovered a post from Henning Tietgens. Based on his post I was able to adjust his provided script to get rid of all the comments in just a few hours’ work.

How to bulk remove comments in JIRA?

(The following instructions were tested on JIRA 6.0.8. They might, however, also work for later (or even earlier) versions of JIRA.)

Make sure you have a backup of your JIRA instance, to be on the safe side in case anything goes wrong with the script. While the description worked for me, it was only tested on a single instance and I can’t give any warranty at all.
  1. In the JIRA instance go to the Add-Ons Manager (CogIcon -> Addons -> Find New Add-Ons) or use the following link: http://[yourJIRAInstanceURL]/plugins/servlet/upm/marketplace
  2. In the search box enter “Script Runner”. This should bring-up the “ScriptRunner for JIRA Standard Edition” as the first entry. Click on Install to install the add-on.
  3. On the admin panel you should now see a new section (on the Add-Ons tab) called Script Runner. Click on Script Console.

  4. On this screen select Groovy as the Script engine, copy/paste the script provided below into the script frame, adjust the issueKey to the one which contains the spam comments, replace “Foo Bar Comment” with some entry in the spammer’s comment and click on Run Now.

Voila. That’s it. All comments containing the phrase you specified above in the given issue should be gone.

Following is also the script (updated 08/19/15) I ran on our instance to clean up all the spammers’ comments (based on the URLs the spambot entered in the comments).

Bear in mind to double-check the URLs before running it against your instance. Since spambots tend to also use completely normal URLs (to hide which URLs they actually want to spam), it’s quite possible that in your case the script would remove perfectly fine comments as well.

STL and the <-operator


The Standard Template Library (STL) adds a lot of fundamental functionality to C++. One of its most prominent features are containers. Containers can be used to store any kind of objects. Various different containers are available for the different requirements a developer might have. Some of the containers are optimized for random access, while others are very efficient when it comes to sorting objects.
To be able to sort objects, the STL containers (and functions) make use of comparators and/or an object’s <-operator. That way it becomes quite easy for developers to create classes which can be stored in a container. But there are a couple of requirements for these comparators, as this paper lays out.

Strict Weak Ordering

Let’s assume we have a simple class called “Car”:

Next we define a <-operator for our “Car”-class by sorting it by its color and its type:

Now we create 2 instances of the class:

If we’d call bool bsmaller = car1 < car2; the result (bsmaller = true) would be as expected (since car1.m_Type < car2.m_Type).
Now let’s put these cars in a set:

So far, so good. We have a container with two cars, so what? — Let’s put another one into the container and see what happens:

Ouch… That results in a runtime error at best, or undefined behavior at worst.

<-operator requirements

What went wrong?
Well, the problem lies within our defined <-operator and the fact that the set-container uses it to try to put our cars into an order. If we compare car2 with car3, we get contradictory results:

Therefore, the set doesn’t know how to sort these objects in its internal red/black-tree.
For most of the STL functions/template classes which require a comparator, a so-called strict weak ordering comparator is required. Such a comparator is defined by fulfilling the following requirements:

  1. the <-operator imposes an order:
    if (a < b) then !(b < a)
  2. an object is never smaller than itself (i.e. it can’t be ordered before itself):
    a < a = false
  3. the <-operator can be used to check objects for equivalence:
    if (!(a < b) && !(b < a)) then a == b
  4. the ordering is transitive:
    if (a < b) && (b < c) then (a < c)

So one might come to the following great solution to the problem and say: “Let’s sort objects by their memory address!”

Nice idea. That comparator meets all the above requirements, since an object’s address is unique when running on a single PC (at no time can two objects occupy the same memory address); plus, this approach has the advantage that no additional memory (for instance, for a unique identifier used to order the objects) is required.
As long as there is no special requirement to keep objects sorted in a special order within a container this can be a feasible solution. However, it’s not completely safe under all circumstances, as the following chapter will uncover.

Copy Constructor and =-operator

Some of the STL functions/containers make use of an object’s assignment-operator or its copy constructor. For instance there is a function called make_heap(). That function creates a copy of the first object and in addition uses the assignment operator of the class of the contained objects to swap objects. That way a heap is created. So why is this problematic?
Well, the functions are designed under the following assumption:
The <-operator compares objects based on their content AND neither the copy constructor nor the assignment operator alter the object’s order.
Given as a general example, the assertion in the following code is expected to be true: if (a < b) { c = a; a = b; b = c; assert(b < a); }
If we use the object’s address within our <-operator, that’s no longer true. Assume a and b have the following addresses: a = 0x1; b = 0x2;
To make it easier to see the problem further assume that each object stores one integer: a.i = 1; b.i = 2;
Before the swap, a is considered smaller than b (since 0x1 < 0x2). Now we swap the objects: c = a; a = b; b = c;
As you can see, the objects changed their content: a.i = 2; b.i = 1. The swap, however, should not have an impact on the ordering; hence assert(b < a) should hold, since b now contains the content of a and a contains the content of b. But it doesn’t!
Remember, we wrote the <-operator to compare the objects based on their addresses — and these haven’t changed — so a is still smaller than b (since 0x1 < 0x2).
So we changed the order of the objects and the STL functions don’t know what to do about it (resulting in an error or undefined behavior). We need another way to come up with an implementation for our operator.

Comparator Template

Though our initial <-operator meets the first two requirements, it lacks transitivity and therefore can’t be used to put objects into a unique order. We can correct this by using the following template to write a comparator which sorts an object by comparing multiple member variables:
For an object of class “A” with “n” member variables m[0, n) where each member variable type provides a <-operator:

The function compares the objects’ first member variables. If the current object’s first member variable is less than the other object’s, it returns true. If it isn’t, it checks both member variables for equivalence by making use of the third requirement for <-operators (if (!(a < b) && !(b < a)) then a == b).
Due to the placement of the parentheses in the expression, only if the first member variables are equivalent does the operator compare the second member variables, returning true if the current object’s is smaller than the other one’s. The procedure is then repeated for all remaining member variables.
Applying that template to our “Car”-class, would result in the following operator:

That’s it. We now have a working strict weak ordering operator.


Writing a <-operator helps a lot to work more conveniently with STL containers. However, the developer has to be aware of the additional requirements for the implementation and must make sure that these requirements are met. Failure to do so can easily introduce bugs into the code which are really hard to trace down, since they can occur randomly and not all of the logical errors of a <-operator can be caught by additional checks within the STL implementation.
Nevertheless, having a properly written <-operator at hand is the basis to make use of most of the STL-functions and improves productivity as well as increases code maintainability.


[1] S. Kuhlins, M. Schader, 2005. Die C++ Standardbibliothek. 4th ed. Berlin, Heidelberg, New York: Springer. Ch. 1.3.
[2] Accredited Standards Committee WG21/N1043, 1996. Working Paper for Draft Proposed International Standard for Information Systems – Programming Language C++. [internet] Available at: [Accessed 23 February 2009]. Ch. 23.1.2.
[3] P. J. Plauger, A. Stepanov, M. Lee, D. R. Musser, 2001. The C++ Standard Template Library. Upper Saddle River: Prentice-Hall. p. 134.