
A revamped Python string-formatting proposal

By Jake Edge
January 22, 2025

The proposal to add a more general facility for string formatting to Python, which we looked at in August 2024, has changed a great deal since, so it merits another look. The changes take multiple forms: a new title for PEP 750 ("Template Strings"), a different mechanism for creating and using templates, a new Template type to hold them, and several additional authors for the PEP. Meanwhile, one controversial part of the original proposal, lazy evaluation of the interpolated values, has been changed so that it requires an explicit opt-in (via lambda); template strings are a generalization of f-strings and lazy evaluation was seen by some as a potentially confusing departure from their behavior.

There are a wide variety of use cases for template strings; the previous title of the PEP referred to creating domain-specific languages using them. Obvious examples are safely handling SQL queries or HTML output with user-supplied input. The PEP also has an example with two different approaches to structured logging using template strings.

Template strings use a character tag before their opening quote that modifies the way they are interpreted, much as f-strings do, though there are two main differences. The first is the tag character itself: instead of "f", template strings use a "t". A more fundamental difference is that template strings (also known as "t-strings" for obvious reasons) do not evaluate to a string, as f-strings do, but instead to a Template object:

    name = 'world'
    
    fstr = f'Hello, {name}'  # fstr is "Hello, world"
    tstr = t'Hello, {name}'  # tstr is an object of type Template

One of the complaints about the original proposal was that it would have allowed arbitrary function names as tags on a string. Given that people would likely want to use short tags, that would tend to pollute the program namespace with short function names. It would have also precluded adding any other tags to Python down the road; currently, the language has a few, such as r"" for raw strings and b"" for byte strings. Had the earlier proposal been adopted, no others could ever be added, since some program might already be using that name as a tag for its template strings.

The PEP has been revised several times since we covered it in August; it was updated twice in that original thread in mid-October, on October 17 and then a few days later. That turned an already lengthy thread into something approaching a mega-thread, so the most recent update was posted in its own thread in mid-November. Even there, the PEP has continued to evolve based on suggestions and comments; the version at the time of this writing is covered below, but it could change further before it gets formally proposed to the steering council.

Template

The immutable Template type contains two tuples, one each for the static (strings) and interpolated (interpolations) parts of the string. Entries in the interpolations tuple are Interpolation objects, which store the expression to be evaluated, name in the example above, and its value ('world' from above), along with any format specifications given for the interpolation. Those are the conversion function to be used (e.g. 'r' for repr()) and any output-format specifier (e.g. '.2f' for rounding to two places). The following example, adapted from the PEP, demonstrates how it is meant to work:

    name = "World"
    template = t"Hello {name}"

    assert template.strings[0] == "Hello "
    assert template.interpolations[0].expr == "name"
    assert template.interpolations[0].value == "World"
    
    assert template.strings[1] == ""

The interpolations tuple will have an entry per interpolation site in the template, so it may be empty. The strings tuple will have one entry per interpolation site plus an extra; empty strings fill the places where there is no static text, such as at the end of the string or between two adjacent interpolation sites. For example:

    a = b = 2
    template = t"{a} + {b} = {a+b}"

    assert template.strings[0] == ""
    assert template.interpolations[0].expr == "a"
    assert template.interpolations[0].value == 2

    assert template.strings[1] == " + "
    assert template.interpolations[1].expr == "b"
    assert template.interpolations[1].value == 2

    assert template.strings[2] == " = "
    assert template.interpolations[2].expr == "a+b"
    assert template.interpolations[2].value == 4

    assert template.strings[3] == ""    

The two tuples are meant to be processed in alternating fashion, starting with the first element of strings. An easy way to do that is to iterate over the template; the class provides an __iter__() method so template objects can be used in for loops, for example. It returns strings and interpolations in the order they appear in the template, omitting the empty strings that would otherwise be needed to maintain the alternation. There is also a values() convenience method that returns a tuple of the value attributes of all interpolations in the template.
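Until t-strings land in a released Python, that iteration behavior can be modeled with stand-in classes; this hypothetical Template mirrors the skip-empty-strings rule described above (it is a sketch of the semantics, not the real implementation):

```python
# Sketch: how iterating a Template interleaves parts, modeled with
# stand-in classes (the real types ship with Python 3.14's t-strings).
from dataclasses import dataclass

@dataclass
class Interpolation:
    value: object
    expr: str

class Template:
    def __init__(self, strings, interpolations):
        self.strings = strings              # len(strings) == len(interpolations) + 1
        self.interpolations = interpolations

    def __iter__(self):
        # Alternate string/interpolation parts, skipping empty strings
        # so consumers see only meaningful pieces.
        for s, interp in zip(self.strings, self.interpolations):
            if s:
                yield s
            yield interp
        if self.strings[-1]:
            yield self.strings[-1]

    def values(self):
        return tuple(i.value for i in self.interpolations)

# Models t"{a} + {b} = {a+b}" with a = b = 2:
tmpl = Template(("", " + ", " = ", ""),
                (Interpolation(2, "a"), Interpolation(2, "b"),
                 Interpolation(4, "a+b")))
assert tmpl.values() == (2, 2, 4)
parts = list(tmpl)
assert parts[1] == " + " and parts[3] == " = "
```

Note how the leading and trailing empty strings from the example above simply disappear during iteration.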

The programmer can then provide any sort of template processing that they want by creating a function which operates on a Template. For example, an html() function could provide input sanitizing and allow adding HTML attributes via a dictionary, neither of which is possible using f-strings:

    evil = "<script>alert('evil')</script>"
    template = t"<p>{evil}</p>"
    assert html(template) == \
           "<p>&lt;script&gt;alert('evil')&lt;/script&gt;</p>"

    attributes = {"src": "shrubbery.jpg", "alt": "looks nice"}
    template = t"<img {attributes} />"
    assert html(template) == \
           '<img src="shrubbery.jpg" alt="looks nice" />'

The PEP has a section that shows how f-strings could be implemented using template strings. As with most of the examples, it uses the match-based processing that the PEP authors see as "the expected best practice for many template function implementations". The skeleton of that is as follows:

    for item in template:
        match item:
            case str() as s:
                ... # handle each string part
            case Interpolation() as interpolation:
                ... # handle each interpolation

Originally, the expression for an Interpolation was not evaluated until the template-processing function called a getvalue() method, which was a form of lazy evaluation. In contrast, the interpolations for an f-string are evaluated when it is. Lazy evaluation was removed from the proposal back in October, because of that behavioral difference. Most people, including the PEP authors, think that f-strings should be the starting point for understanding template strings. Template-processing functions can be written to do their own form of lazy evaluation, as the PEP describes; if the function is written to handle a callable as an interpolation, a lambda can be used as the expression. Similarly, asynchronous evaluation can be supported for template strings.
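The lambda-based opt-in can be sketched with the same kind of stand-in; Interpolation here is a hypothetical substitute for the real type, and render() is an illustrative processing function, not part of the PEP:

```python
from dataclasses import dataclass

@dataclass
class Interpolation:  # stand-in for the real type
    value: object
    expr: str

def render(template):
    """Call any interpolated value that is callable before formatting,
    giving opt-in lazy evaluation via lambda."""
    out = []
    for item in template:
        if isinstance(item, Interpolation):
            v = item.value
            out.append(str(v() if callable(v) else v))
        else:
            out.append(item)
    return "".join(out)

# Models t"result: {(lambda: expensive())}"; expensive() is not run
# when the template is created, only when the processor asks for it.
calls = []
def expensive():
    calls.append(1)
    return 42

parts = ["result: ", Interpolation(lambda: expensive(), "(lambda: expensive())")]
assert calls == []                 # nothing evaluated at template creation
assert render(parts) == "result: 42"
assert calls == [1]                # evaluated only when the processor ran
```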

Filters

There has been a lot of discussion over the last six months or so, but there is a sense that most of the objections and suggestions have been handled in one form or another at this point. One that was passed over was Larry Hastings's strong desire for adding a filter mechanism. He noted that several Python template libraries, including Jinja, Django Template Language, and Mako, all have the concept of a filter and, interestingly, all use the pipe ("|") symbol for filters. The basic idea is to be able to process the strings in an interpolation by feeding them to expressions or functions that modify them. A classic use case in the existing libraries is to escape HTML in the interpolated string. He said that it would be a "misstep" not to include filter syntax in the PEP.

Guido van Rossum, who is one of the PEP authors, disagreed with using "|", since it already has established meanings in Python expressions (bitwise or and set union). The interpolations are already Python expressions, however, so filtering "should be part of the expression syntax". He suggested: "If you want a filter operator it should be a proposal for an additional expression operator, not just in the context of t-strings."

Hastings pointed out that the operator is already overloaded, but acknowledged some ambiguity when using it unadorned in an interpolation expression. He had other ideas for ways to use the pipe symbol, but Van Rossum continued to push back:

If we're looking for a filter operator that doesn't conflict with expressions, we could extend !r, !s, and !a with !identifier, keeping the original three as shorthands for !repr, !str and !ascii, respectively.

That would work in f-strings too. But I would recommend making that a separate PEP.

Hastings seemed to like that idea, but had some follow-up questions. Van Rossum wryly pointed out: "Those are all excellent questions for the team working on that PEP — not for me nor for the PEP 750 team. :)" On the other hand, Jinja maintainer David Lord said that he is "not convinced I would add filter syntax if I was rewriting Jinja today"; it has caused confusion with regard to operator precedence and the readability gains are minimal.

Next steps

On January 16, Dave Peck posted the latest updated version of the PEP. Over the past few months, Peck has been doing the editing on the PEP, as well as being one of the more active participants, among the PEP authors, in the discussions. The next day, another PEP author, Paul Everitt, thought that it was likely that the time had come to "start the process of getting on the steering council's radar, as we've integrated multiple rounds of review and feedback".

PEP 750 is an extensive proposal that has seen a lot of effort, some of it going back well beyond the PEP itself. Beyond just the specification, the PEP contains examples, patterns for processing templates, an extensive collection of rejected ideas, and more. These days, Python has no shortage of string-formatting tools, both in the language itself and in libraries and frameworks of various sorts, but the PEP authors clearly see value in another. Before too long, we will presumably see if the steering council shares their enthusiasm.


Index entries for this article
Python: Python Enhancement Proposals (PEP)/PEP 750
Python: Strings



You can still do filters, just do it yourself

Posted Jan 22, 2025 19:08 UTC (Wed) by NYKevin (subscriber, #129325) [Link]

I'm not sure why filters would need to be part of the PEP in the first place. You can define a custom type that overrides __or__() and __ror__() with the desired semantics. As far as I can tell, that's already legal in f-strings today, and there is no obvious reason it would fail to work in t-strings.

Of course, whether that's a good idea is neither here nor there. I'm just saying you don't need language support to make it happen.
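A rough sketch of that idea, runnable today since it is ordinary operator overloading (Filter is a hypothetical helper class, not an existing API):

```python
import html

class Filter:
    """Wrap a function so that `value | filter` applies it."""
    def __init__(self, fn):
        self.fn = fn
    def __ror__(self, value):       # invoked for value | filter
        return self.fn(value)

escape = Filter(html.escape)

# Works inside an f-string expression right now; a t-string
# interpolation could use the same trick.
name = "<b>evil</b>"
assert f"{name | escape}" == "&lt;b&gt;evil&lt;/b&gt;"
```

Since str does not define __or__ for this operand type, Python falls back to Filter.__ror__(), which is what makes the pipe syntax work without language support.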

How is this advantageous over str.format?

Posted Jan 22, 2025 20:14 UTC (Wed) by siddh (subscriber, #169663) [Link] (13 responses)

From a cursory reading, it seems to me the only thing it has over str.format is that it detects the variable from the name, so code becomes shorter.

We anyways need to create a function to process the template, which is just equivalent to processing a format string but without the explicit argument passing (which can be just a dict).

What am I missing?

How is this advantageous over str.format?

Posted Jan 23, 2025 13:29 UTC (Thu) by rrolls (subscriber, #151126) [Link] (12 responses)

What you're missing is the paragraph starting "The programmer can then provide any sort of template processing" and the code example following it.

f-strings (and str.format, and so on) are for producing strings directly; whoever writes the f-string or the call to str.format (say, Alice) takes responsibility for making sure every substitution is formatted (e.g. escaped) correctly. And Alice has to remember to escape every single substitution individually. There is a wealth of experience now on SQL injections, JS injections, and the like, which all ultimately result from Alice forgetting to call an escape function, or forgetting to sanitise, or misunderstanding which mechanism should be used where, or not knowing about other solutions such as parameterised SQL queries.

t-strings on the other hand are a tool that is designed to work in conjunction with library code. Someone - Bob - writes a library - LibFoo - for producing perfectly escaped HTML, or perfectly escaped SQL, or whatnot. Alice then imports LibFoo, and writes a call to it, passing a t-string with some substitutions. If an f-string, or str.format was used, then LibFoo only gets the fully formatted output string, and when it sees syntax, it can't tell whether Alice intended that to be treated as syntax, or whether it was in some user-supplied input that Alice fed to it via a substitution. But with a t-string, LibFoo knows exactly which parts were written by Alice and intended to be treated as syntax, and which parts were passed in via substitutions: it can then escape the substituted values (in the case of SQL, it can either escape them, or pull them out entirely and turn the whole thing into a parameterised query).

There's a couple of other advantages too. First, it's possible to write a t-string once and then pass it to several different library functions. Each function could do something different with it; with an f-string, that's simply not possible (you'd have to write the f-string repeatedly). One use case for this is in logging SQL queries: say you make a tool that needs to a) execute parameterised queries in a real database, and b) log the same queries for debugging, with parameters filled in in-place for human reading. Without t-strings, there are certainly ways to achieve this, but they're not simple nor do they produce particularly readable code.
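The parameterized-query idea can be sketched concretely; here Interpolation is a stand-in class and a list stands in for a Template, with to_parameterized() being purely illustrative, not a real library function:

```python
from dataclasses import dataclass

@dataclass
class Interpolation:  # stand-in for the real type
    value: object
    expr: str

def to_parameterized(template):
    """Flatten static parts into a query with '?' placeholders and pull
    interpolated values out as bind parameters (sqlite3 style)."""
    query, params = [], []
    for item in template:
        if isinstance(item, Interpolation):
            query.append("?")            # syntax written by the library
            params.append(item.value)    # user data, kept out of the SQL
        else:
            query.append(item)           # syntax written by the caller
    return "".join(query), tuple(params)

# Models t"SELECT * FROM users WHERE name = {name}" with hostile input:
user_input = "Robert'); DROP TABLE users;--"
q, p = to_parameterized(["SELECT * FROM users WHERE name = ",
                         Interpolation(user_input, "name")])
assert q == "SELECT * FROM users WHERE name = ?"
assert p == (user_input,)
```

Because the template preserves which parts Alice wrote and which came in via substitution, the attack string never touches the SQL text at all.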

Second, it becomes possible for code assist tools to detect the language that's actually being used inside the t-string. This means that if you write some HTML inside a t-string, and pass it to a function that declares (via typing) that it will process the t-string as HTML, syntax highlighting for HTML can be applied to the code inside the t-string, rather than it just being all one color; the same would work for any other language. If in future regular expressions move to using t-strings, this'll be immensely helpful: currently, some editors just basically hard-code raw strings (`r"..."`) to be rendered with regex highlighting, which is useful when you're writing a regex, but is annoying when you're using a raw string for any other reason.

I think t-strings are best seen as a way to embed any other language within Python code - without Python needing to know anything about that language. IMO, it's an incredibly powerful tool that every language should have, and yet I've never previously come across one that does, which is why I was so excited to see this PEP appear. You can put CSS and JS inside HTML, yes, but HTML, and editors and tools that work with it, are all specifically designed to support this - and then it's a similar argument for the case of putting HTML/CSS/JS inside PHP. None of these are generic - but t-strings are.

How is this advantageous over str.format?

Posted Jan 23, 2025 15:16 UTC (Thu) by siddh (subscriber, #169663) [Link] (8 responses)

What I meant was

    def html(t_str):
        final = ""

        for item in t_str:
            match item:
                case str() as s:
                    final += s
                case Interpolation() as i:
                    final += sanitise(str(i.value))

        return final

    evil = "[script]alert('evil')[/script]"
    template = t"[p]{evil}[/p]"
    sanitised = html(template)


is equivalent to

    def html(given, args):
        for k, v in args.items():
            args[k] = sanitise(v)

        return given.format(**args)

    args = {"evil": "[script]alert('evil')[/script]"}
    template = "[p]{evil}[/p]"
    sanitised = html(template, args)


Whatever you said is equally applicable to the second case IIUC.

(Used [] instead of <> since LWN HTML comment parses it...)

How is this advantageous over str.format?

Posted Jan 23, 2025 15:56 UTC (Thu) by daroc (editor, #160859) [Link] (4 responses)

You can use HTML entities (&lt; and &gt;) if you want to type < and > in comments. It is a little unwieldy to do the escaping yourself, but you get used to it.

How is this advantageous over str.format?

Posted Jan 23, 2025 19:06 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (3 responses)

Can we get an MD mode for comments, please?

How is this advantageous over str.format?

Posted Jan 27, 2025 6:49 UTC (Mon) by rsidd (subscriber, #2582) [Link]

For the case of code, just working support for the PRE tag would be an improvement. > and < are generally permitted within PRE tags and verbatim code with unescaped symbols is allowed there. LWN supports PRE tags but doesn't allow these unescaped symbols.

MD mode

Posted Jan 29, 2025 10:05 UTC (Wed) by smurf (subscriber, #17840) [Link] (1 responses)

*Strongly* seconded.

But somewhat off topic here. ;-)

MD mode

Posted Jan 29, 2025 13:52 UTC (Wed) by daroc (editor, #160859) [Link]

Sorry, I probably should have replied to Cyberax — it's on my list, but our site-development time is currently being spent on a number of anti-bot measures, because we've seen a big surge in the new year.

We'll get to it at some point, though!

How is this advantageous over str.format?

Posted Jan 23, 2025 18:58 UTC (Thu) by rrolls (subscriber, #151126) [Link] (2 responses)

It doesn't look particularly exciting in these textbook cases, but the benefits will quickly show themselves once you have more complicated real-world cases.

With t-strings, you could easily have something like this:

    def render_item_row(item: Item) -> HTML:
        return HTML(t"<tr><td>{item.name}</td><td>{item.date}</td><td>{item.author}</td></tr>")

Your "equivalent" would look like this:

    def render_item_row(item: Item) -> HTML:
        return HTML(
            "<tr><td>{name}</td><td>{date}</td><td>{author}</td></tr>",
            {"name": item.name, "date": item.date, "author": item.author}
        )

which is unwieldy, unintuitive, and has a lot of boilerplate... or perhaps you might do

    def render_item_row(item: Item) -> HTML:
        return HTML(
            "<tr><td>{name}</td><td>{date}</td><td>{author}</td></tr>",
            dataclasses.asdict(item)
        )

if Item happened to be a dataclass, but then as well as that requirement, the code doesn't make it obvious that those fields actually exist in the class.

Additionally, if you are using code assist, the 1st code block above will immediately get a red squiggly under any nonexistent field. But in both the 2nd and 3rd cases, code assist would not be able to help you if the text between { } was incorrect, or (in the 2nd case) if the dict keys were incorrect.

How is this advantageous over str.format?

Posted Jan 23, 2025 20:33 UTC (Thu) by siddh (subscriber, #169663) [Link] (1 responses)

> which is unwieldy, unintuitive, and has a lot of boilerplate...

It's not IMO. In fact for complex code you'd anyways have to create intermediary variables to make the code cleaner (what if name has to be item.original_name sometimes?).

I do agree t-strings looks cleaner, but from the passing to function POV, it just feels like new syntactic sugar to me for a function call.

Also, it's named "template" but IIUC it is just a locally bound object invalid outside of its scope, like you can't pass it around generically and have the vars substituted without wrapping in a function call which then leads to same thing.

So all-in-all the advantage just boils down to introducing the new type so str can be explicitly disallowed? For eg. sqlite3 could throw exception on using str. That will indeed force less mistakes in processing unsantised input where it's done, but may also make the already-assumed dumb programmer more careless...

But I do get your point now and appreciate it more. Thanks!

How is this advantageous over str.format?

Posted Jan 28, 2025 17:31 UTC (Tue) by NYKevin (subscriber, #129325) [Link]

> IIUC it is just a locally bound object invalid outside of its scope, like you can't pass it around generically and have the vars substituted without wrapping in a function call which then leads to same thing.

You can indeed pass t-strings around arbitrarily, it's just that the vars are eagerly evaluated at construction time. Which is fine IMHO, because a lazy t-string would just be way too much magic for one bit of syntax (and you can always use a lambda if that's really what you wanted).

How is this advantageous over str.format?

Posted Jan 27, 2025 9:59 UTC (Mon) by epa (subscriber, #39769) [Link] (2 responses)

Agreed. This is something I have often found missing from Perl, too: how can I make a string with variable interpolation, but do my own custom interpolation?

It could bridge the gap between the safe but clunky way of creating an SQL statement with bind parameters, and the much more convenient and readable, yet dangerous, approach of pasting together an SQL string with variable interpolation. Most programming languages create too much "temptation" here. If you can say t"select * from mytable where id = {x}" and have it executed safely (which could be as simple as checking that x contains a number, not a string) then you won't have to choose between safety and comfort.

I would also use it for filenames. Writing f"/foo/bar/{leafname}" is handy but can be dangerous if leafname contains "..". You can use various libraries to build up a path from components, but again they're awkward compared to just writing out the path with interpolated parts. With t-strings, make_path(t"/foo/bar/{leafname}") could check that the value you're interpolating is a single path component, unless you explicitly permit otherwise.
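A sketch of such a make_path() check; everything here is hypothetical, and the template's alternating parts are modeled as (is_interpolation, text) pairs rather than a real Template:

```python
def make_path(parts):
    """Hypothetical make_path(): interpolated values must be single path
    components -- no separators, no '.' or '..' traversal."""
    out = []
    for is_interp, text in parts:
        if is_interp:
            if "/" in text or "\\" in text or text in ("", ".", ".."):
                raise ValueError(f"unsafe path component: {text!r}")
        out.append(text)
    return "".join(out)

# Models t"/foo/bar/{leafname}" with a safe and an unsafe value:
assert make_path([(False, "/foo/bar/"),
                  (True, "shrubbery.jpg")]) == "/foo/bar/shrubbery.jpg"
try:
    make_path([(False, "/foo/bar/"), (True, "../etc/passwd")])
except ValueError:
    pass
else:
    raise AssertionError("traversal should have been rejected")
```

The key point is that only the interpolated parts are validated; the static parts were written by the programmer and are trusted as-is.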

How is this advantageous over str.format?

Posted Jan 28, 2025 11:47 UTC (Tue) by taladar (subscriber, #68407) [Link] (1 responses)

SQL in particular might not be the best use case, not only are there statements you can create with string interpolation that can't be handled by parameter binding (e.g. anything that changes the structure of the statement), preparing statements also has other performance benefits over ad-hoc queries.

How is this advantageous over str.format?

Posted Jan 28, 2025 17:26 UTC (Tue) by epa (subscriber, #39769) [Link]

It does often perform better to make a prepared statement and then call it several times with different values of the bind parameters. But just as often, the difference is negligible. The query may be a one-off in the lifetime of the script. But anyway, if you have a t-string based interface

    t'select name from mytable where id = {myid}'

then the SQL library is free to transform that into a prepared statement which it can execute hygienically and even cache for future calls of the same t-string.
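That caching scheme can be sketched like so; Interpolation is a stand-in class, and prepare is a hypothetical driver hook that would compile the statement:

```python
from dataclasses import dataclass

@dataclass
class Interpolation:   # stand-in for the real type
    value: object
    expr: str

_prepared_cache = {}

def run(strings, interpolations, prepare):
    """Reuse a prepared statement for templates whose static parts
    match, re-binding only the interpolated values."""
    key = strings   # identical t-string literals share their static parts
    stmt = _prepared_cache.get(key)
    if stmt is None:
        # '?'.join() drops a placeholder at each interpolation site.
        stmt = _prepared_cache[key] = prepare("?".join(strings))
    params = tuple(i.value for i in interpolations)
    return stmt, params

prepared = []
def prepare(sql):        # hypothetical driver hook
    prepared.append(sql)
    return sql

# Two executions of t'select name from mytable where id = {myid}':
s1 = run(("select name from mytable where id = ", ""),
         (Interpolation(1, "myid"),), prepare)
s2 = run(("select name from mytable where id = ", ""),
         (Interpolation(2, "myid"),), prepare)
assert prepared == ["select name from mytable where id = ?"]   # prepared once
assert s1[1] == (1,) and s2[1] == (2,)
```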

-_-

Posted Jan 24, 2025 7:45 UTC (Fri) by LtWorf (subscriber, #124958) [Link] (6 responses)

What is the point for this when perfectly good templating libraries exist?

The batteries included thing no longer applies since python developers have decided this is no longer the case for python. So it looks to me as developers finding something to keep busy, rather than something users of the language actually need.

It's actually one of the most powerful enhancements one can make to a programming language

Posted Jan 24, 2025 14:00 UTC (Fri) by rrolls (subscriber, #151126) [Link] (5 responses)

My reply to siddh above should explain this, since without being able to build interpolations natively within the syntax of a language (which is what t-strings will do for Python), any "perfectly good templating library" is ultimately just a particularly complex string formatting technique, so it'll fall foul of the same problems once your use case gets sufficiently complex itself.

t-strings is far more than "developers finding something to keep busy" - I would put it in the same category of powerful, fundamental enhancement as Ruby/PHP fibers (which are more powerful than Python's asyncio - I wish we had fibers in Python).

It's a shame that t-strings seem to be so easy to misunderstand as "just more syntactic sugar for str.format" because saying that that is selling them short is a severe understatement.

It's actually one of the most powerful enhancements one can make to a programming language

Posted Jan 26, 2025 23:59 UTC (Sun) by marcH (subscriber, #57642) [Link] (4 responses)

> It's a shame that t-strings seem to be so easy to misunderstand as "just more syntactic sugar for str.format" because saying that that is selling them short is a severe understatement.

In these situations there is really only one way: finding good _demoes_ as you just did above. They have to be short and simple enough to be in reach of "casual" readers with limited time, while being complex enough to demonstrate real value. It can be pretty hard and also very dependent on the new feature itself.

From https://lwn.net/Articles/1005903/ above
> There is a wealth of experience now on SQL injections, JS injections, and the like, ...
> But with a t-string, LibFoo knows exactly which parts were written by Alice and intended to be treated as syntax, and which parts were passed in via substitutions.

This looks pretty much like SQL's "PreparedStatements" which is the (universal?) cure for SQL injections. Are these new templates following the same logic? That is: _not_ "serializing"/flattening but preserving structure? In a more generic way and in a more generic language.

So, maybe rewriting this with t-strings would make a good demo?
https://en.wikipedia.org/wiki/Prepared_statement#Python_D...

Not because it would show anything significantly new, but because it would show how more generic t-strings are?

It's actually one of the most powerful enhancements one can make to a programming language

Posted Jan 28, 2025 17:46 UTC (Tue) by NYKevin (subscriber, #129325) [Link] (3 responses)

Unfortunately, that Wikipedia example is probably not the easiest thing to translate into t-strings, because it uses executemany, and it is not immediately obvious how you would implement a t-string version of that.

Here is an exercise that might help illustrate the power of t-strings: Write an example of a "classic" SQL injection vulnerability, using f-strings for naive interpolation, and then change the f to a t. In principle, it is possible for an SQL library to turn the resulting t-string into an entirely safe prepared statement and execute it correctly (at least for the vast majority of real-world interpolations). However, this does require the library to support it, which is an issue. Another issue is that f-strings will serve as an attractive nuisance for less experienced developers, so it is likely wise to introduce a new method or function that *only* accepts t-strings, even for commands that have no interpolations (so that you can't pass it an f-string at all). You could then train developers to only use the new method/function for all SQL execution, and introduce a linter to find uses of the old method/function.
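A sketch of such a t-string-only entry point, with a stand-in Template class (execute() here is hypothetical, not any real database API):

```python
class Template:            # stand-in for the real t-string result type
    def __init__(self, *parts):
        self.parts = parts

def execute(query):
    """Hypothetical DB entry point that refuses plain strings outright,
    so an f-string (which is just a str at runtime) can never sneak in."""
    if not isinstance(query, Template):
        raise TypeError("execute() only accepts t-strings; got %s"
                        % type(query).__name__)
    ...  # build a prepared statement from query.parts

rejected = False
try:
    execute("SELECT name FROM names;")    # a plain str, or any f-string
except TypeError:
    rejected = True
assert rejected

# t"..." with no interpolations still yields a Template, so it passes:
execute(Template("SELECT name FROM names;"))
```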

It's actually one of the most powerful enhancements one can make to a programming language

Posted Jan 28, 2025 22:32 UTC (Tue) by LtWorf (subscriber, #124958) [Link] (2 responses)

I don't think your example is great either. If you make a function that rejects strings, then a "SELECT name FROM names;" will fail because it's a string. Remember that f-strings do not exist at runtime, so you have no way of knowing if it's an f-string or just a regular string with no parameters.

It's actually one of the most powerful enhancements one can make to a programming language

Posted Jan 29, 2025 0:28 UTC (Wed) by NYKevin (subscriber, #129325) [Link] (1 responses)

> I don't think your example is great either. If you make a function that reject strings, then a "SELECT name FROM names;" will fail because it's a string.

Yes, that is the intention.

> Remember that fstrings do not exist at runtime so you have no way of knowing if it's an fstring or just a regular string with no parameters.

And that is why it is the intention - because there is no other way to reject f-strings.

It's actually one of the most powerful enhancements one can make to a programming language

Posted Jan 29, 2025 0:28 UTC (Wed) by NYKevin (subscriber, #129325) [Link]

(In case anyone didn't read the PEP: You can just prefix a literal string with no interpolations with t, and it still gives you a template and not a literal string.)

Don't forget the i18n use case

Posted Jan 24, 2025 14:48 UTC (Fri) by mgedmin (subscriber, #34497) [Link] (1 responses)

Personally, I'm waiting for template strings because you cannot use f-strings with gettext.

    from gettext import gettext as _
    name = 'world'
    print(_(f"Hello, {name.capitalize()}!"))  # WRONG: interpolates first, calls gettext on the result

    from gettext import gettext as _
    name = 'world'
    print(_("Hello, {name}!").format(name=name.capitalize()))  # what you have to use today

    from sometranslationhelper import _  # I'll probably have to write it
    name = 'world'
    print(_(t"Hello, {name.capitalize()}!"))  # now it's possible to glue back the original template string and pass it to gettext, then interpolate afterwards

Don't forget the i18n use case

Posted Feb 3, 2025 0:29 UTC (Mon) by zahlman (guest, #175387) [Link]

Neat idea. I guess the implementation for your helper is something like:
    from itertools import count
    from gettext import gettext

    def escape_braces(s: str):
        return s.replace('{', '{{').replace('}', '}}')

    def _lookup_pieces(t: Template):
        placeholders = (f'{{{i}}}' for i in count())
        for item in t:
            if isinstance(item, Interpolation): yield next(placeholders)
            else: yield escape_braces(item)

    def _(t: Template):
        lookup = ''.join(_lookup_pieces(t))
        interpolations = (i.value for i in t if isinstance(i, Interpolation))
        return gettext(lookup).format(*interpolations)

That is: we need to normalize the Template into a canonical string that can be looked up in the usual way - and if we follow appropriate conventions (using only numeric placeholders, and following standard brace-escaping convention) in the localized strings, the result will be suitable for use with str.format. We want to look up a string that depends only on the literal parts of the t-string - not on either the values that will be interpolated nor the expressions used to compute them. So e.g. t"Hello, {firstname} {lastname}!" and t"Hello, {firstname} {lastname.capitalize()}!" should both map to a lookup string "Hello, {0} {1}!", which can then retrieve a localized string like "{1} {0}ーさん、今日は!".

But it should be noted that none of this helps if the interpolated values need to be localized as well...

Virtually Unreadable

Posted Feb 5, 2025 17:27 UTC (Wed) by Reverend_Jim (guest, #175775) [Link]

For a language that stresses clarity and readability I find the concept of template strings, at least as explained in this posting, completely opposite to those principles. After reading this posting I still have no idea as to what purpose template strings serve.


Copyright © 2025, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds