Subsume Technologies, Inc.

Subsume Technologies develops advanced tools that put you in control of your world. If that interests you, let us know how we can help.


Overview

As most commonly understood and used by developers, a program in a C-based language like Objective-C is a series of statements, usually messages to objects, performed one immediately after the other. For example:

NSFileManager   *fileManager = [NSFileManager defaultManager];
NSMutableArray  *bigResults = [NSMutableArray new];

for (NSString *filePath in self.bigFiles)
{
    NSData      *fileData = [fileManager contentsAtPath: filePath];
    NSString    *base64String = [fileData base64EncodedStringWithOptions: NSDataBase64Encoding64CharacterLineLength];
    NSArray     *encodedLines = [base64String componentsSeparatedByString: @"\n"];

    [bigResults addObject: [encodedLines lastObject]];
}

NSLog(@"There are %lu entries.", (unsigned long)bigResults.count);
NSLog(@"The last one is:\n%@.", bigResults.lastObject);

It is all fundamentally serial/blocking/synchronous, with everything happening along one thread of execution. You must wait until the data is loaded before you can compute its encoding, and only then can you break it into individual lines, which must itself happen before you can add the final result to the array. Only after you finish it all for one object can you move on to the next, even if the objects are unrelated and the operations have different bottlenecks (e.g., I/O-bound vs. compute-bound). There are various ways to perform delayed or even multi-threaded/concurrent operations, but most of them require massive restructuring of your code.

Instead of doing that, STMQ lets you selectively co-opt the object messaging process that is at the heart of Objective-C itself and substitute a message queue, allowing you to perform asynchronous operations without changing much of your code at all. Compare the following with the above and see if you can spot the differences:

#import <STMQ/STMQ.h>

NSFileManager   *fileManager = [[NSFileManager defaultManager] future];
NSMutableArray  *bigResults = [NSMutableArray new];

for (NSString *filePath in self.bigFiles)
{
    NSData      *fileData = [fileManager contentsAtPath: filePath];
    NSString    *base64String = [fileData base64EncodedStringWithOptions: NSDataBase64Encoding64CharacterLineLength];
    NSArray     *encodedLines = [base64String componentsSeparatedByString: @"\n"];

    [bigResults addObject: [encodedLines lastObject]];
}

NSLog(@"There are %lu entries.", (unsigned long)bigResults.count);
NSLog(@"The last one is:\n%@.", [(STFutureProxy *)bigResults.lastObject present]);

Those two calls that STMQ provides, -future and -present, set up everything you need. With the addition of futures, we can think about objects as being “unstuck in time”. Of course we’ll need the return value from a method call at some point, but if we don’t truly need it right now, we can move on to other things until we do. The mere promise of a result is, at the time, as good as the result itself. Then, when you actually do need it, you assert that there is no time like the present, and you get the result you wanted all along. Behind the scenes, the objects have been working through their individual message queues so that you can get that result with as little waiting as possible. The more time you spend doing other things before you need a result in the present, the more likely you are to get it without any waiting at all!

Extra features

Easily disabled

When it comes to debugging a program, it can get complicated if dozens or even hundreds of objects are processing their message queues in the background. Because of how cleanly STMQ integrates with your code, it is just as simple to turn off as it is to turn on. Just put:

[STFutureProxy futureIsNotSet];

somewhere it makes sense (e.g., the +initialize method of your app delegate). Thereafter, calls to -future and -present simply return self, and your program runs just as it did before, without any message queueing.
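For example, you could confine the switch to debug builds (a sketch; the app-delegate class name is illustrative):

```objc
#import <STMQ/STMQ.h>

@implementation MyAppDelegate  // illustrative class name

+ (void) initialize
{
    if (self == [MyAppDelegate class])
    {
#ifdef DEBUG
        // Make every -future and -present call return self, so the
        // whole program runs serially while you debug.
        [STFutureProxy futureIsNotSet];
#endif
    }
}

@end
```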

Deprecated: newer versions of STMQ replace this switch with entanglement limits, which offer even more fine-grained control (see Limited entanglement below).

Logging waits

Ideally, you’ll use STMQ in such a way that, by the time you call -present, the message queue for the object is already empty. If not, you’ll have to wait until processing is finished. To expose those bottlenecks, STMQ by default logs whenever it has to wait for something to finish. If you are unable or unwilling to refactor your code to minimize or eliminate the waits, you can easily turn the logging off with:

[STFutureProxy logWhenWaitingIsRequired: NO];

Avoiding waits

If you’re willing to modify your code a little more, it is possible to avoid even more calls to -present. If you can put the code you want to run into a block, STMQ provides an additional method, -whenPresent:, that will add the block to the queue just like a message. So in our example we could delay the logging, without waiting, by changing the last line to:

[(STFutureProxy *)bigResults[0] whenPresent: ^{ NSLog(@"The first one is %@.", [bigResults[0] self]); }];

Note the use of -self instead of -present to reference the future’s underlying object. Since arbitrary code can be placed in the block, -whenPresent: is a great way for your futures to synchronize with other parts of your program after they have completed the desired task.

Schrödinger’s app

While restructuring your code is not strictly necessary with STMQ, you may find that what futures make possible is “possible futures”. That is to say, developers often write their code so that it delays any calculation until the last possible moment, and then the user must wait until the results of their action are finally ready. With STMQ, you’ll quickly find that you can easily speed things up even more by starting work as soon as possible, even on things that never ultimately occur! Think: superpositions.

[Screenshot: the long-press menu in My Busy Day]

As an example, here is something we did for our iOS calendaring app My Busy Day. One of the actions a user can take is a long press to get a menu, from which they can choose to hide either a single event or an entire calendar. Either choice triggers a complete recalculation of the layout of all the views to fill in the gaps left by the hidden event(s), which can cause an unwelcome pause in the operation of the app. So, using STMQ, we moved the calculation from after the menu selection to before it. As soon as the user’s long press is detected, we start processing both alternate layouts. By the time the relatively slow human has chosen one of the menu items, the results are usually already finished and waiting for an immediate update of the UI, regardless of their eventual choice.

Simplifies Threading Issues

Because futures are inherently self-contained, there are very few multithreading issues you need to deal with directly. However, there are still many issues with how other objects, especially in the Cocoa frameworks, expect to be called. Some expect to be used on a single thread, whereas a future-heavy program can have many, many threads going at the same time. Worse, some objects (especially UI objects) must be used only on the main thread, which can greatly complicate all forms of concurrency.

STMQ simplifies this by allowing classes to declare what underlying NSOperationQueue should be used when creating futures for their instances. By implementing the +futureQueue method, you can return a single queue that all instances use (and thus coordinate through), or even the main thread’s queue, allowing maximum safety. Better still, STMQ has built-in support for all of the classes that Apple has documented as having threading issues, so just use a -future for any object you’re not sure about (including doing those UI updates), and leave it to STMQ to do the right thing.
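As a sketch of what that looks like (assuming +futureQueue is implemented as described above; the view class name is illustrative), a UI class could pin all of its futures to the main queue:

```objc
#import <STMQ/STMQ.h>

@implementation MyEventView  // illustrative UIView subclass

// Every future created for a MyEventView instance will queue its
// messages on the main queue, keeping all UI work on the main
// thread even when it is performed through a future.
+ (NSOperationQueue *) futureQueue
{
    return [NSOperationQueue mainQueue];
}

@end
```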

Of course, when futures “share” a queue in that way, it doesn’t only limit the concurrency that can be gained from those objects. It also creates a potential deadlock when one of them calls -present on another object whose underlying queue is the same as the caller’s. This rarely happens in practice, because objects sharing the same thread rarely have dependent operations that interleave in that fashion, and it is easily mitigated by avoiding waits.
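A contrived sketch of the deadlock case (MyModel and -reconcileWith: are illustrative names, not part of STMQ):

```objc
#import <STMQ/STMQ.h>

// Both futures share their class's single +futureQueue.
MyModel *a = [[MyModel new] future];
MyModel *b = [[MyModel new] future];

// Queued on the shared queue. If -reconcileWith: internally calls
// [b present], it blocks the very queue that must drain b's
// messages, and neither side can make progress: deadlock.
[a reconcileWith: b];
```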

Limited entanglement

As implemented by default, a call to -future sets up the potential for a near-endless chain of resulting future proxies. In our example, [[NSFileManager defaultManager] future] starts with the explicit proxy creation for fileManager, but the result of -contentsAtPath: is also provided as a future, so that it too may execute -base64EncodedStringWithOptions: and -componentsSeparatedByString: in the background. These future objects are all “entangled” via their message queues.

If your code starts passing these future proxies around, they may end up somewhere you aren’t able to call -present to get back a regular object. To prevent that, you can set up future proxies to limit message queueing, or turn it off completely. This is essentially what is done implicitly to deal with non-object return types. You can do it explicitly for any object by setting its entanglementLevel property to how deeply returned objects should queue messages.

Any time a method called on a future returns a future object, the result’s entanglementLevel is set to one less than the caller’s. At entanglementLevel 0, the new future is effectively in the present: all method calls happen immediately and return their actual object results, so no further entanglement is possible.
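Applied to our running example, the levels would decay like this (a sketch; the cast and property access follow the API described above, and the file path is illustrative):

```objc
#import <STMQ/STMQ.h>

NSString        *filePath = @"/tmp/example.dat";  // illustrative path
NSFileManager   *fileManager = [[NSFileManager defaultManager] future];

((STFutureProxy *)fileManager).entanglementLevel = 2;  // queue two levels deep

// Caller is at level 2, so the result is a future at level 1.
NSData      *fileData = [fileManager contentsAtPath: filePath];

// Caller is at level 1, so the result is a future at level 0.
NSString    *base64String = [fileData base64EncodedStringWithOptions: NSDataBase64Encoding64CharacterLineLength];

// Caller is at level 0: this call happens immediately and returns
// a real NSArray, not a future, so the chain ends here.
NSArray     *encodedLines = [base64String componentsSeparatedByString: @"\n"];
```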

You can control that highest level for any individual future by setting its entanglementLevel property, but you can also set the default starting level for all futures by calling the class method:

[STFutureProxy setDefaultEntanglementLevel: 1]; // Directly resulting futures only!

You can change the class default at any time, but it will only affect the entanglementLevel of newly created instances. Set the default to 0 and you’ll disable the creation of futures completely.

Caveats

Objects as actors

While STMQ allows you to follow an Actor pattern, the nature of OO development doesn’t often lend itself to a seamless implementation. This is usually because objects maintain state that depends on serial execution, unlike pure Actors, which are inherently concurrent and handle messages in no particular order. In our example, for instance, the bigResults array depends on the -addObject: calls happening in a particular order, and the developer likewise expects that the -count method will be called last and return the full count, not some smaller number reflecting messages still in the queue.

To that end, STMQ defaults to each object’s message queue being run serially in order to more safely preserve the expected state of an object. The underlying queue does support out-of-order execution, though, so you can always use +futureQueue to add support for a pure Actor model for objects of yours that don’t have side-effects for method execution.
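For a class whose methods have no order-dependent side effects, +futureQueue could return a shared concurrent queue instead (a sketch; the worker class name is illustrative):

```objc
#import <STMQ/STMQ.h>

@implementation MyPureWorker  // illustrative class with no mutable state

// Share one concurrent queue among all instances, so queued messages
// may run in parallel and out of order, Actor-style. This is only
// safe because no method's result depends on the order of the others.
+ (NSOperationQueue *) futureQueue
{
    static NSOperationQueue *queue;
    static dispatch_once_t   once;

    dispatch_once(&once, ^{
        queue = [NSOperationQueue new];
        queue.maxConcurrentOperationCount = NSOperationQueueDefaultMaxConcurrentOperationCount;
    });

    return queue;
}

@end
```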

Objective-C as a hybrid language

Not all methods return objects, so it is not always possible to return a future proxy. The -count method in our example, for instance, should function as expected (returning a long integer) even if the array had been used as a future. To accomplish this, STMQ is pretty much forced to call -present implicitly any time you want an object to return a C-based type (other than void, of course). For additional concurrency in this situation, consider using -whenPresent: to queue the dependent work instead of forcing an implicit wait.
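In code, the implicit wait looks something like this (a sketch):

```objc
#import <STMQ/STMQ.h>

NSMutableArray *bigResults = [[NSMutableArray new] future];

[bigResults addObject: @"some result"];  // queued; returns void

// NSUInteger is a C type, so no future proxy can stand in for it.
// STMQ must implicitly call -present here, draining the queue and
// blocking until the real count is available.
NSUInteger entries = bigResults.count;
```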

Example Benchmark

To demonstrate, here is an XCTestCase file that shows the dramatic improvements for the example code given above:

[Screenshot: XCTestCase benchmark results]

The normal version runs on the selected files in just under 13 seconds; modified to use STMQ, it runs in less than 3.5 seconds. That’s an over-3.7x speed-up without having to change the core code at all. If you have even more RAM and CPUs to spare, you’ll see even greater improvements.