Sequences

These functions are aimed at manipulating finite and infinite sequences of values. Some functions come in two flavors: one returning a list and the other returning a possibly infinite iterator; the list-returning ones follow the convention of prepending l to the iterator-returning function's name.

When working with sequences, see also the itertools standard module. Funcy reexports and aliases some functions from it.

Generate

repeat(item[, n])

Makes an iterator yielding item n times, or indefinitely if n is omitted. repeat simply repeats the given value; when you need to reevaluate something repeatedly, use repeatedly() instead.

When you just need a length n list or tuple of item you can use:

[item] * n
# or
(item,) * n
count(start=0, step=1)

Makes infinite iterator of values: start, start + step, start + 2*step, ....

Can be used to generate a sequence:

map(lambda x: x ** 2, count(1))
# -> 1, 4, 9, 16, ...

Or to annotate a sequence using zip():

zip(count(), 'abcd')
# -> (0, 'a'), (1, 'b'), (2, 'c'), (3, 'd')

# print code with BASIC-style numbered lines
for line in zip(count(10, 10), code.splitlines()):
    print('%d %s' % line)

See also enumerate() and original itertools.count() documentation.

cycle(seq)

Cycles passed seq indefinitely returning its elements one by one.

Useful when you need to cyclically decorate some sequence:

for n, parity in zip(count(), cycle(['even', 'odd'])):
    print('%d is %s' % (n, parity))
repeatedly(f[, n])

Takes a function of no args, presumably with side effects, and returns an infinite (or length n if supplied) iterator of calls to it.

For example, this call can be used to generate 10 random numbers:

repeatedly(random.random, 10)

Or one can create a length n list of freshly-created objects of same type:

repeatedly(list, n)
iterate(f, x)

Returns an infinite iterator of x, f(x), f(f(x)), ... etc.

Its most common use is generating a recursive sequence:

iterate(inc, 5)
# -> 5, 6, 7, 8, 9, ...

iterate(lambda x: x * 2, 1)
# -> 1, 2, 4, 8, 16, ...

step = lambda p: (p[1], p[0] + p[1])
map(first, iterate(step, (0, 1)))
# -> 0, 1, 1, 2, 3, 5, 8, ... (Fibonacci sequence)

Manipulate

This section provides some robust tools for sequence slicing. Consider Slicings or itertools.islice() for more generic cases.

take(n, seq)

Returns a list of the first n items in the sequence, or all items if there are fewer than n.

take(3, [2, 3, 4, 5]) # [2, 3, 4]
take(3, count(5))     # [5, 6, 7]
take(3, 'ab')         # ['a', 'b']
drop(n, seq)

Skips the first n items in the sequence, returning an iterator yielding the rest of its items.

drop(3, [2, 3, 4, 5]) # iter([5])
drop(3, count(5))     # count(8)
drop(3, 'ab')         # empty iterator
first(seq)

Returns the first item in the sequence. Returns None if the sequence is empty. Typical usage is choosing the first of some generated variants:

# Get a text message of first failed validation rule
fail = first(rule.text for rule in rules if not rule.test(instance))

# Use simple pattern matching to construct form field widget
TYPE_TO_WIDGET = (
    [lambda f: f.choices,           lambda f: Select(choices=f.choices)],
    [lambda f: f.type == 'int',     lambda f: TextInput(coerce=int)],
    [lambda f: f.type == 'string',  lambda f: TextInput()],
    [lambda f: f.type == 'text',    lambda f: Textarea()],
    [lambda f: f.type == 'boolean', lambda f: Checkbox(f.label)],
)
return first(do(field) for cond, do in TYPE_TO_WIDGET if cond(field))

Another common use case is passing it to map() or lmap(). See the last example in iterate() for an instance of that.

second(seq)

Returns the second item in the given sequence. Returns None if there are fewer than two items in it.

Could come in handy with sequences of pairs, e.g. dict.items(). The following code extracts the values of a dict sorted by keys:

map(second, sorted(some_dict.items()))

And this line constructs a dict ordered by value from a plain one:

OrderedDict(sorted(plain_dict.items(), key=second))
nth(n, seq)

Returns the nth item in the sequence, or None if it doesn't exist. Items are counted from 0, so it's like indexed access but works for iterators. E.g. here is how one can get the 6th line of some_file:

nth(5, repeatedly(open('some_file').readline))
last(seq)

Returns the last item in the sequence. Returns None if the sequence is empty. Tries to be efficient when the sequence supports indexed or reversed access and falls back to iterating over it otherwise.
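
A few quick examples, with the expected results in comments:

last([1, 2, 3])          # 3
last(x for x in 'abc')   # 'c'
last([])                 # None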

rest(seq)

Skips first item in the sequence, returning iterator starting just after it. A shortcut for drop(1, seq).
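
For instance, one would expect:

list(rest([1, 2, 3]))  # [2, 3]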

butlast(seq)

Returns an iterator of all elements of the sequence but last.
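
For instance, one should get:

list(butlast([1, 2, 3]))  # [1, 2]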

ilen(seq)

Calculates the length of an iterator. Will consume it, or hang if it's infinite.

Especially useful in conjunction with filtering or slicing functions. For example, this is how one can find the length of the common prefix of two strings:

ilen(takewhile(lambda p: p[0] == p[1], zip(s1, s2)))

Unite

concat(*seqs)
lconcat(*seqs)

Concatenates several sequences into a single iterator or list.

concat() is an alias for itertools.chain().
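
A small sketch of what to expect:

lconcat([1, 2], [3], iter([4, 5]))  # [1, 2, 3, 4, 5]
concat([1, 2], [3])                 # an iterator over 1, 2, 3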

cat(seqs)
lcat(seqs)

Concatenates the passed sequences. Useful when dealing with a sequence of sequences; see concat() or lconcat() to join just a few sequences.

Flattening various nested sequences is the most common use:

# Flatten two level deep list
lcat(list_of_lists)

# Get a flat html of errors of a form
errors = cat(inline.errors() for inline in form)
error_text = '<br>'.join(errors)

# Brace expansion on product of sums
# (a + b)(t + pq)x == atx + apqx + btx + bpqx
terms = [['a', 'b'], ['t', 'pq'], ['x']]
lmap(lcat, product(*terms))
# [list('atx'), list('apqx'), list('btx'), list('bpqx')]

cat() is an alias for itertools.chain.from_iterable().

flatten(seq, follow=is_seqcont)
lflatten(seq, follow=is_seqcont)

Flattens an arbitrarily nested sequence of values and other sequences. The follow argument determines whether to unpack each item. By default it dives into lists, tuples and iterators; see is_seqcont() for further explanation.
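
For instance, with the default follow one would expect:

lflatten([1, [2, (3, [4])], 5])  # [1, 2, 3, 4, 5]
lflatten([1, [2, 'ab']])         # [1, 2, 'ab'], strings are not unpacked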

See also cat() or lcat() if you need to flatten strictly two-level sequence of sequences.

tree_leaves(root, follow=is_seqcont, children=iter)
ltree_leaves(root, follow=is_seqcont, children=iter)

A way to iterate over or list all the tree leaves. E.g. this is how you can list all descendants of a class:

ltree_leaves(Base, children=type.__subclasses__, follow=type.__subclasses__)
tree_nodes(root, follow=is_seqcont, children=iter)
ltree_nodes(root, follow=is_seqcont, children=iter)

A way to iterate over or list all the tree nodes. E.g. this is how you can iterate over all classes in a hierarchy:

tree_nodes(Base, children=type.__subclasses__, follow=type.__subclasses__)
interleave(*seqs)

Returns an iterator yielding the first item of each sequence, then the second ones and so on, until some sequence ends. The number of items taken from each sequence is always the same.
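
For instance, one would expect:

list(interleave([1, 2, 3], 'abc'))  # [1, 'a', 2, 'b', 3, 'c']
list(interleave([1, 2, 3], 'ab'))   # [1, 'a', 2, 'b']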

interpose(sep, seq)

Returns an iterator yielding elements of seq separated by sep.
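
A minimal sketch:

list(interpose('+', [1, 2, 3]))  # [1, '+', 2, '+', 3]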

This is like str.join() for lists. This code is part of a translator working with an operation node:

def visit_BoolOp(self, node):
    # ... do generic visit
    node.code = lmapcat(translate, interpose(node.op, node.values))
lzip(*seqs, strict=False)

Joins the given sequences into a list of tuples of corresponding first, second and later values. Essentially a list version of zip().
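
For instance, one would expect:

lzip('abc', [1, 2, 3])  # [('a', 1), ('b', 2), ('c', 3)]
# presumably, strict=True raises ValueError when lengths differ, mirroring zip()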

Transform and filter

Most of the functions in this section support Extended function semantics. Among other things, this allows the examples using re_tester() and re_finder() to be rewritten more tightly.

map(f, seq)
lmap(f, seq)

Extended versions of map() and its list version.
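
With extended function semantics a regex can stand in for f, acting as re_finder() (a small sketch):

lmap(r'\d+', ['a2', '13b'])  # ['2', '13']
lmap(inc, [1, 2, 3])         # [2, 3, 4]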

filter(pred, seq)
lfilter(pred, seq)

Extended versions of filter() and its list version.
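
Similarly, a regex passed as pred works as re_tester() (a sketch):

lfilter(r'^\d+$', ['1', 'a2', '23'])  # ['1', '23']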

remove(pred, seq)
lremove(pred, seq)

Returns an iterator or a list of those items of seq for which pred returns a falsy value. The results of these functions complement the results of filter() and lfilter().

A handy use is passing a re_tester() result as pred. For example, this code removes any whitespace-only lines from a list:

remove(re_tester(r'^\s+$'), lines)

Note, you can rewrite it more concisely using Extended function semantics:

remove(r'^\s+$', lines)
keep([f, ]seq)
lkeep([f, ]seq)

Maps seq with given function and then filters out falsy elements. Simply removes falsy items when f is absent. In fact these functions are just handy shortcuts:

keep(f, seq)  == filter(bool, map(f, seq))
keep(seq)     == filter(bool, seq)

lkeep(f, seq) == lfilter(bool, map(f, seq))
lkeep(seq)    == lfilter(bool, seq)

Natural use case for keep() is data extraction or recognition that could eventually fail:

# Extract numbers from words
lkeep(re_finder(r'\d+'), words)

# Recognize as many colors by name as possible
lkeep(COLOR_BY_NAME.get, color_names)

An iterator version can be useful when you don't need, or aren't sure you need, the whole sequence. For example, you can use a first() - keep() combo to find the first match:

first(keep(COLOR_BY_NAME.get, color_name_candidates))

Alternatively, you can do the same with some() and map().

The one-argument variant is a simple tool to keep your data free of falsy junk. This one returns non-empty description lines:

keep(description.splitlines())

Another common case is using a generator expression instead of a mapping function. Consider these two lines:

keep(f.name for f in fields)     # sugar generator expression
keep(attrgetter('name'), fields) # pure functions
mapcat(f, *seqs)
lmapcat(f, *seqs)

Maps the given sequence(s) and then concatenates the results, essentially a shortcut for cat(map(f, *seqs)). Comes in handy when extracting multiple values from every sequence item or transforming nested sequences:

# Get all the lines of all the texts in single flat list
mapcat(str.splitlines, bunch_of_texts)

# Extract all numbers from strings
mapcat(partial(re_all, r'\d+'), bunch_of_strings)
without(seq, *items)
lwithout(seq, *items)

Returns the sequence with items removed, preserving order. Designed to work with a few items, which allows removing unhashable objects:

non_empty_lists = without(lists, [])

In case of a large number of unwanted elements one can use remove():

remove(set(unwanted_elements), seq)

Or simple set difference if order of sequence is irrelevant.

Split and chunk

split(pred, seq)
lsplit(pred, seq)

Splits the sequence items that pass the predicate from the ones that don't, essentially returning a tuple (filter(pred, seq), remove(pred, seq)).

For example, this way one can separate private attributes of an instance from public ones:

private, public = lsplit(re_tester('^_'), dir(instance))

Split absolute and relative urls using extended predicate semantics:

absolute, relative = lsplit(r'^http://', urls)
split_at(n, seq)
lsplit_at(n, seq)

Splits sequence at given position, returning a tuple of its start and tail.
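
For instance, one would expect:

lsplit_at(2, [1, 2, 3, 4, 5])  # ([1, 2], [3, 4, 5])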

split_by(pred, seq)
lsplit_by(pred, seq)

Splits the start of the sequence, consisting of items passing the predicate, from the rest of it. Works similarly to takewhile(pred, seq), dropwhile(pred, seq), but handles an iterator seq correctly:

lsplit_by(bool, iter([-2, -1, 0, 1, 2]))
# [-2, -1], [0, 1, 2]
takewhile([pred, ]seq)

Yields elements of seq as long as they pass pred. Stops on the first one that makes the predicate falsy:

# Extract first paragraph of text
takewhile(re_tester(r'\S'), text.splitlines())

# Build path from node to tree root
takewhile(bool, iterate(attrgetter('parent'), node))
dropwhile([pred, ]seq)

This is a mirror of takewhile(). Skips elements of given sequence while pred is true and yields the rest of it:

# Skip leading whitespace-only lines
dropwhile(re_tester(r'^\s*$'), text_lines)
group_by(f, seq)

Groups elements of seq keyed by the result of f. The value at each key will be a list of the corresponding elements, in the order they appear in seq. Returns defaultdict(list).

stats = group_by(len, ['a', 'ab', 'b'])
stats[1] # -> ['a', 'b']
stats[2] # -> ['ab']
stats[3] # -> [], since stats is defaultdict

One can use split() when grouping by boolean predicate. See also itertools.groupby().

group_by_keys(get_keys, seq)

Groups elements of seq, each of which may have multiple keys, into a defaultdict(list). Can be used to reverse grouping:

posts_by_tag = group_by_keys(attrgetter('tags'), posts)
sentences_with_word = group_by_keys(str.split, sentences)
group_values(seq)

Groups the values of (key, value) pairs. Think of it as dict() but collecting collisions:

group_values(keep(r'^--(\w+)=(.+)', sys.argv))
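
A plainer illustration of the same idea:

group_values([('a', 1), ('b', 2), ('a', 3)])
# -> {'a': [1, 3], 'b': [2]}, as a defaultdict(list)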
partition(n, [step, ]seq)
lpartition(n, [step, ]seq)

Iterates or lists over partitions of n items, at offsets step apart. If step is not supplied, it defaults to n, i.e. the partitions do not overlap. Returns only full length-n partitions; if there are not enough elements for the last partition, they are ignored.

Most common use is deflattening data:

# Make a dict from flat list of pairs
dict(partition(2, flat_list_of_pairs))

# Structure user credentials
{id: (name, password) for id, name, password in partition(3, users)}

A three argument variant of partition() can be used to process sequence items in context of their neighbors:

# Smooth data by averaging out with a sliding window
[sum(window) / n for window in partition(n, 1, data_points)]

Also look at pairwise() for similar uses. Another use of partition() is processing a sequence of data elements or jobs in chunks, but take a look at chunks() for that.

chunks(n, [step, ]seq)
lchunks(n, [step, ]seq)

Like partition(), but may include a partition with fewer than n items at the end:

chunks(2, 'abcde')
# -> 'ab', 'cd', 'e'

chunks(2, 4, 'abcde')
# -> 'ab', 'e'

Handy for batch processing.

partition_by(f, seq)
lpartition_by(f, seq)

Partitions seq into a list of lists or an iterator of iterators, splitting at each change of f(item).
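
For example, one would expect:

lpartition_by(lambda x: x < 0, [-1, -2, 3, 4, -5])
# -> [[-1, -2], [3, 4], [-5]]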

Data handling

distinct(seq, key=identity)
ldistinct(seq, key=identity)

Returns unique items of the sequence with order preserved. If key is supplied then distinguishes values by comparing their keys.

Note

Elements of a sequence or their keys should be hashable.
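
A quick sketch:

ldistinct([1, 2, 1, 3, 2])            # [1, 2, 3]
ldistinct(['a', 'ab', 'b'], key=len)  # ['a', 'ab']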

with_prev(seq, fill=None)

Returns an iterator of pairs of each item with the one preceding it. Yields fill or None as the preceding element for the first item.

Great for getting rid of clunky prev housekeeping in for loops. This way one can indent first line of each paragraph while printing text:

for line, prev in with_prev(text.splitlines()):
    if not prev:
        print('    ', end='')
    print(line)

Use pairwise() to iterate only on full pairs.

with_next(seq, fill=None)

Returns an iterator of pairs of each item with the one next to it. Yields fill or None as the next element for the last item. See also with_prev() and pairwise().
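
For instance, one would expect:

list(with_next([1, 2, 3]))  # [(1, 2), (2, 3), (3, None)]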

pairwise(seq)

Yields pairs of items in seq like (item0, item1), (item1, item2), .... A great way to process sequence items in a context of each neighbor:

# Check if seq is non-descending
all(left <= right for left, right in pairwise(seq))
count_by(f, seq)

Counts the number of occurrences of each value of f applied to the elements of seq. Returns a defaultdict(int) of counts.

Calculating a histogram is one common use:

# Get a length histogram of given words
count_by(len, words)
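
Concretely, something like:

count_by(len, ['a', 'ab', 'b'])
# -> {1: 2, 2: 1}, as a defaultdict(int)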
count_reps(seq)

Counts the number of repetitions of each value in seq. Returns a defaultdict(int) of counts. This is a faster and shorter alternative to count_by(identity, ...).
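
For instance, one would expect:

count_reps('abracadabra')
# -> {'a': 5, 'b': 2, 'r': 2, 'c': 1, 'd': 1}, as a defaultdict(int)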

reductions(f, seq[, acc])
lreductions(f, seq[, acc])

Returns a sequence of the intermediate values of the reduction of seq by f. In other words it yields a sequence like:

reduce(f, seq[:1], [acc]), reduce(f, seq[:2], [acc]), ...
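
Going by the formula above, with addition for f one would get:

from operator import add

lreductions(add, [1, 2, 3, 4])      # [1, 3, 6, 10]
lreductions(add, [1, 2, 3, 4], 10)  # [11, 13, 16, 20]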

You can use sums() or lsums() for the common case of getting a list of partial sums.

sums(seq[, acc])
lsums(seq[, acc])

Same as reductions() or lreductions() with reduce function fixed to addition.

Find out which straw will break the camel's back:

first(i for i, total in enumerate(sums(straw_weights))
        if total > camel_toughness)