Wednesday, June 19, 2024

On the time complexity of functional collections

Clojure made functional collections popular.  Rich Hickey, its inventor, deserves a lot of credit for that.  However, he also propagated an inaccurate way of describing the time complexity of several common operations on them, such as looking up a key in a map.  I don't know exactly what phrase he used at first, but I've seen people describe the time complexity of these operations as "near-constant" or "effectively constant", or sometimes, shouting: "EFFECTIVELY CONSTANT".  He also seems to have originated the practice I see in the Clojure community of speaking as if the base of the logarithm mattered: "O(log32 n)".  (The "32" should be a subscript, but I don't see an affordance for subscripts in this Blogger UI.)

All of these locutions are wrong.  The only correct way to describe the time complexity of the operations in question is as "O(log n)" or "logarithmic time" ("log time" for short).  Time complexity describes how the time to perform the operation grows as the size of the input (in this case, the collection) grows without bound.  Because the Hash Array-Mapped Trie (HAMT) — the very clever data structure invented by Phil Bagwell — is a tree, the worst-case time to access an element in the tree must be proportional to the depth of the tree, which is proportional to the logarithm of the number of elements (provided that the tree is balanced, which it will be if the hash function is well distributed).  The base of the logarithm is the radix (branching factor) of the tree, which in Clojure's case is 32, but this has no bearing on its time complexity; as everyone knows, logarithms of different bases differ only by a constant factor, and big-O notation ignores constant factors.
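
To see concretely why the base doesn't matter: since 32 = 2^5, a base-32 logarithm is just a base-2 logarithm divided by 5.  A quick check in Common Lisp (my own illustration; the numbers are arbitrary):

(log 1000000 32)         ; => ~3.99
(/ (log 1000000 2) 5)    ; => ~3.99, the same value up to rounding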

I think part of what is going on here is a bit of confusion between the time complexity of an algorithm and its real-world performance.  Consider this sentence from Hickey's HOPL 2020 paper, A History of Clojure:

Performance was excellent, more akin to O(1) than the theoretical bounds of O(logN).

You don't find the time complexity of an algorithm by measurement, but by analyzing the algorithm.  While it's not 100% clear, this sentence certainly gives the impression that he didn't quite understand that.

Let me speculate a little.  The performance of a lookup on a map, implemented as an HAMT, using string keys, has two components: the time to hash the key, and the time to walk the HAMT, using the hash value, to find the map entry containing that key.  I'm going to guess that for the string keys that Rich tried in his testing, the tree-walking time was less than or comparable to the string-hashing time up to a depth of maybe 3 or 4, or maybe larger.  32^4 is 1,048,576, which might be larger than any map he tested; so it's entirely plausible that he just didn't test any collections large enough to see the logarithmic behavior emerge.

If that's right, it certainly speaks well for the performance of the HAMT design.  Let me acknowledge at this point that Rich also had a marketing problem to deal with: he had to convince potential Clojure users that its functional collections would not make their programs unusably slow.  O(1) or "near-constant" certainly sounds better than O(log n).  I can understand the temptation he faced.

But again: time complexity is about how the time grows as the size of the input grows without bound.  And clearly, in this case, there will be some point at which the tree-walking time will begin to be larger than the hashing time.  This will happen sooner for short keys than long ones, and soonest if the keys are integers hashed by the identity function (or maybe by folding a 64-bit integer into a 32-bit hash; probably one or two instructions).  But it will happen.

— That is, it will happen as long as the algorithm doesn't run out of hash bits.  Clojure uses a 32-bit hash; since each tree level consumes 5 bits, that gives it 6.4 levels.  As the tree starts to fill up, the number of collisions will begin to become significant.  I'm not sure what Clojure does with collisions.  Bagwell suggested rehashing to obtain more bits, but I don't know that Clojure does that; it might just do linear search over collision buckets.  In the latter case, the time complexity would actually be linear (O(n)) rather than logarithmic; the linear behavior won't begin to emerge until the map has billions of entries, but again, time complexity isn't about that.
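
To make the five-bits-per-level arithmetic concrete, here is a minimal sketch in Common Lisp (my own illustration, not Clojure's actual code) of how an HAMT turns a 32-bit hash into a sequence of child indices, one per level:

(defun hamt-child-index (hash level)
  ;; The child index (0-31) used at LEVEL: the next 5-bit chunk of HASH.
  (ldb (byte 5 (* 5 level)) hash))

(hamt-child-index #xDEADBEEF 0)   ; => 15 (the lowest 5 bits)
(hamt-child-index #xDEADBEEF 6)   ; => 3  (only 2 bits remain at the seventh level)

After six full levels and a partial seventh, the hash bits are used up; past that point, everything hinges on how collisions are handled.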

The other point worth making here is that while time complexity is an important fact about the performance of an algorithm, it is not the only important fact.  The amount of time it takes on small instances can also matter; depending on the use case, it can be more important than the time complexity.  There are algorithms in the CS literature (called "galactic algorithms"; TIL!) which have state-of-the-art time complexity, but are not used in practice because their constant factors are too large (I guess in practice this means they have complicated initializations to perform before getting to the meat of the computation).

None of this is intended as a criticism of Hickey's choice of HAMTs for Clojure.  The only reason FSet doesn't use HAMTs is that I wasn't aware of their existence when I was writing it.  Probably I will rectify this at some point, though that's not a trivial thing to do because the change can't be perfectly compatible; FSet's trees are comparison-based, while HAMTs are hash-based, requiring a change to how user-defined classes are interfaced to the library.  Still, I expect HAMTs would be substantially faster in many applications.

Saturday, June 8, 2024

FSet 1.4.0 Released

Greetings FSet users,

For several years I was too busy to do much with Common Lisp, but having left my last job a few months ago, I am now working on a project in CL.  I'm using FSet, of course, and so I've been reminded that it needed some TLC; there were some bugs to fix, and the documentation was very old and possibly hard to find.  So I've put some time into it and prepared a new release.

The first thing I did was to review all the commits Paul Dietz made back in 2020.  These were more extensive than I had realized; he greatly expanded the test suite and fixed a number of bugs.  I have tried to thank him for his work, but he seems to have retired from GrammaTech; I have not been able to reach him.  If anyone is in touch with him, please convey my thanks.

One bug Paul noticed but didn't fix, probably because he thought someone might be depending on the current behavior, was that compare on maps and seqs was not comparing the default; if two maps or seqs had the same contents but different defaults, they would nonetheless be reported as equal.  There is indeed a chance of breaking existing code by fixing this, but I think it's small; in any case, I've decided to risk it — the behavior was clearly a bug.
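
To illustrate with a hypothetical session (with-default attaches a default to a collection): after the fix, two maps that differ only in their defaults should no longer be reported as equal.

(fset:compare (fset:with-default (fset:empty-map) 0)
              (fset:with-default (fset:empty-map) 1))
;; before the fix => :EQUAL; after it, a non-equal result (e.g. :LESS)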

The only other possibly breaking change I've made is to revamp the APIs of list-relation and query-registry.  I wrote these classes some time ago, specifically for the project I was working on (and have now resumed); they're not well documented, and I'll be surprised if anyone is using them, especially in the case of query-registry.  If I'm wrong and you are using them, post a comment and I'll explain how to convert your code, if it's not obvious.  (I did remove some methods from query-registry that I was no longer using; I can restore them if necessary.)

I've also collected the FSet documentation into one place, and freshened it a little.

As part of this work I have also updated Misc-Extensions, which contains some macros that I like to use (and are used in FSet).  In particular, I made some improvements to GMap, my iteration macro (we all have our own iteration macros, right?), and wrote a README for the system, which should make it a lot easier for people to see what's in it.

 

Sunday, June 2, 2024

Functional Collections in Programming Languages

As I was updating the documentation for FSet, my functional collections library for Common Lisp, I wondered about the history of functional collections in programming languages.  I have found a couple of interesting early examples, but before I present those, I want to address this question: exactly what does it mean for a collection type (or indeed, any type) to be "functional" in an imperative language with assignment?

When I speak of "functional" types in imperative languages, I mean more specifically types with value semantics as opposed to reference semantics.  The distinction is more subtle than that between mutable and immutable types, which it is often conflated with.  Let's consider for a moment a simple type that we all understand: integers in C.  For example:

int a = 2;
int b = a;
++a;

After execution of these statements, a will be 3 and b will be 2.  Note in particular that the assignment of a to b is a value assignment: it copies the value of a into b; it does not make b an alias of a.  If it had made b an alias of a, then b would also have been 3 afterwards.

For contrast, consider this example in Java:

int[] a = {15, 37};
int[] b = a;
a[0] = 42;

After this, both a and b will contain {42, 37}.  Here the assignment is a reference assignment: b doesn't just wind up with the same array value as a; instead, it gets aliased to a, so that changes made to either one are reflected in the other.

These observations suggest a definition of the distinction between value semantics and reference semantics: if a type has value semantics, then assignment of it does not cause aliasing, while with reference semantics, assignment does cause aliasing.

Now let's look at a C++ example:

std::string a = "foo";
std::string b = a;
a.insert(1, "x");

Particularly if you don't know C++, I invite you to puzzle over this for a moment.  What is the value of b?  In fact, it is "foo"; the assignment is value assignment, implemented by copying the contents of the string, so the a.insert operation affects only a.  Strings in C++ have value semantics, as indeed do the STL collection types (vector etc.).  Of course, in C++, you can create a reference or pointer to any type, so you can always get reference semantics when you want it, even for built-in types such as integers.

So now I have a couple of questions for you.  First, are C++ strings mutable?  Given that they have operations like insert, one would be inclined to call them mutable, don't you think?  Let's say that they are.  Okay, then are C/C++ integers also mutable?  We don't usually think of integers as being mutable; we usually think of an operation like ++a as assigning a new value to a, not as incrementing a mutable integer object.  But as we see here, integers and strings have the same behavior vis-a-vis assignment and modification; if we consider strings to be mutable, it seems like we have to consider integers mutable as well.

But my purpose isn't really to get you to call integers mutable.  My point is, rather, that the mutable/immutable distinction doesn't capture everything that's going on here.  The more useful distinction is between value semantics and reference semantics.  When I speak of "functional collections", I mean that they have value semantics, not necessarily that they have nothing that looks like a mutating operation.

So the question I wondered about was, what were the first programming languages to provide collections with value semantics?

A case can be made that Lisp was the first, in 1958.  Although Lisp lists are mutable, they are usually treated as immutable once fully constructed.  For example, a function constructing and returning a list might call nreverse on it just before returning it, to destructively reverse it (a common idiom, because constructing a list using cons starts at the end); but usually, the caller of such a function, having received the returned list, would not subsequently mutate it.  Certainly there are and have always been exceptions, but my impression is that, even in Lisp's early years, significant bodies of code were written in which the vast majority of lists were treated functionally (once fully constructed).
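
Here is a small sketch of that idiom (my own example):

(defun squares-below (n)
  ;; Build the result back-to-front by consing onto the head, then
  ;; destructively reverse it just before returning it.
  (let ((result '()))
    (dotimes (i n)
      (push (* i i) result))
    (nreverse result)))

(squares-below 5)   ; => (0 1 4 9 16)

The caller receives a freshly consed list and, by the convention described above, treats it as immutable from that point on.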

But the first language in which collections were treated functionally by definition, rather than by usual convention, appears to have been APL in 1966.  Interestingly, given the very great difference the choice of value or reference semantics makes to the way in which one writes programs in a language, I can't find a clear statement in the APL manual that I'm looking at as to which semantics APL uses for its arrays.  It seems to be something that people expect to go without saying (one reason I'm writing about it).  But I downloaded and compiled GNU APL and tried it out, and, sure enough, it uses value semantics (the left arrow is the assignment operator; user input is shown indented, as the APL session displays it):

      V←1 2 3 4
      V
1 2 3 4
      W←V
      V[2]←7
      V
1 7 3 4
      W
1 2 3 4

Another early value-semantics language was SETL.  From Robert B. K. Dewar's The SETL Programming Language (1979):

One important point is that SETL treats tuples as values when it comes to assignments.  Consider the following sections of code:
abc := 12;
cde := abc;
abc := abc + 2;          $ cde still = 12

abc := [1,2,3];
cde := abc;
abc(2) := 0;             $ cde still = [1,2,3]

In SETL the two sequences have similar effects.  If you expected cde to change in the second sequence, then study it carefully.  If not, then you have the correct idea already.

Here, just as I have done above, Dewar is demonstrating how the value semantics of SETL tuples is like that of integers.

SETL strongly influenced a little-known, proprietary language called Refine, which I worked in from 1987 to 2003.  Refine was originally developed at Kestrel Institute; it had functional collections and was implemented on top of Common Lisp.  It was my experience with Refine that motivated me to write FSet.

Other languages with functional collections appeared in the 1980s and 1990s, including the ML family (primarily Standard ML and OCaml) and Haskell.  No doubt there were others of which I am not aware.  But the language that has probably done the most to popularize functional collections is Rich Hickey's Clojure.

All of which brings me back to FSet.  FSet, of course, has value semantics:

FSET-USER> (isetq x (map ('a (seq 47 33)) ('b (seq 17 3 99))))
#{| (A #[ 47 33 ]) (B #[ 17 3 99 ]) |}    ; a map whose range values are seqs
FSET-USER> (isetq y x)
#{| (A #[ 47 33 ]) (B #[ 17 3 99 ]) |}
FSET-USER> (setf (@ (@ x 'a) 1) 1)        ; sets element 1 of the seq for 'a
1
FSET-USER> x
#{| (A #[ 47 1 ]) (B #[ 17 3 99 ]) |}
FSET-USER> y
#{| (A #[ 47 33 ]) (B #[ 17 3 99 ]) |}

(Here isetq is an "interactive setq" that suppresses undefined-variable warnings some Lisp implementations issue when you interactively set a previously unknown variable.)  You can see that the update to x doesn't change y; but how does FSet manage this?  Common Lisp's setf macro was designed to support such cases.  In the CLHS sec. 5.1.2.2, we see:

A function form can be used as a place [the first operand of setf] if it falls into one of the following categories:
[...]
• A function call form whose first element is the name of any one of the functions in the next figure [which are ldb, mask-field, and getf], provided that the supplied argument to that function is in turn a place form; in this case, the new place has stored back into it the result of applying the supplied "update" function.

So for instance, you can update bitfields of integer variables:

CL-USER> (setq *print-base* 16)  ; hexadecimal output makes this easier to understand
10
CL-USER> (setq x #x1000)
1000
CL-USER> (setf (ldb (byte 8 4) x) #x32)
32
CL-USER> x
1320

Since it's an integer that's being updated here, clearly setf has the ability to handle updates to types with value semantics.  If you're curious how that works, see the CLHS sec. 5.1.1.2.
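
For a rough idea of how, consider what the ldb example above expands into (my sketch; the exact expansion varies by implementation): the variable is read, a new integer is computed with dpb, and that new value is assigned back, so nothing is mutated in place.

(macroexpand-1 '(setf (ldb (byte 8 4) x) #x32))
;; expands into something roughly like:
;; (LET ((#:NEW #x32))
;;   (SETQ X (DPB #:NEW (BYTE 8 4) X))
;;   #:NEW)

The @ places in the FSet example above work analogously: each update computes a new collection and stores it back into the enclosing place, which is just what value semantics requires.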