Fish - The friendly interactive shell

Posted May 19, 2005 19:13 UTC (Thu) by Bones0 (guest, #8041)
Parent article: Fish - The friendly interactive shell

Remember that there is a third type of quoting in bash/zsh: the $'string' variant, which expands escape sequences such as \n. You may want to fold those semantics into "string", so we're down to two different quote types instead of three.

You have all completion functions in a single 1482-line shell script, which is of course the opposite of getting rid of bloat. Zsh does this nicely by putting the completion for each command in its own file. Each such file is read when the user presses TAB, so zsh doesn't have to keep all 35778 lines of completion scripts in memory.

I fail to see your problem with wildcard expansion. In zsh, when I type "echo *.txt" and press TAB, it will expand to a space-separated list of all *.txt files in the current directory. Some of your other gripes with zsh make it sound like its configuration was deliberately crippled for compatibility, or at least that it failed to execute /usr/share/zsh/$ZSH_VERSION/zshrc_default.

With that said, I'm excited about the promise of fish. The syntax highlighting alone will be enough for me to recommend it to novices.
As for myself, I've grown too used to all the little details that make zsh so powerful, so I probably won't be switching until fish's syntax is as bloated as the other shells', which is probably not what you want.



Fish - The friendly interactive shell

Posted May 19, 2005 20:59 UTC (Thu) by liljencrantz (guest, #28458)

I don't think supporting $'string' is a good idea; you can just use 'printf "%b" $string'. Don't build things into the shell if they can be performed just as well by an external command. Calling external commands is what shells are all about!
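
As an aside on what %b does: it makes printf expand backslash escapes in its argument, which is the same expansion $'...' performs. A rough C sketch of that kind of expansion, handling only a few common escapes, might look like the following; this is just an illustration, not code from fish or from any real printf implementation.

#include <stdio.h>

/* Print s, expanding \n, \t and \\ the way printf "%b" would.
   (Real %b handles more escapes, e.g. \0nnn octal sequences.) */
void expand_escapes(const char *s) {
  while (*s) {
    if (s[0] == '\\' && s[1] != '\0') {
      switch (s[1]) {
        case 'n':  putchar('\n'); break;
        case 't':  putchar('\t'); break;
        case '\\': putchar('\\'); break;
        default:   putchar(s[0]); putchar(s[1]); break;
      }
      s += 2;
    } else {
      putchar(*s++);
    }
  }
}

int main(void) {
  expand_escapes("foo\\nbar\\n");  /* prints foo and bar on separate lines */
  return 0;
}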

I will split the completions into separate files eventually. But since this ~1500 line file loads nearly instantly even on my old Pentium II at 300 MHz, this is not at the top of my priority list. (The history file takes about a quarter of a second to load 1000 entries, though. That has to improve.)

My gripe with zsh wildcard completion is that I don't want it to change the text before the cursor. Imagine that I want to remove all the backups of txt files in a directory. In fish I could type 'rm *.t<TAB>' and it would be expanded to 'rm *.txt', after which I could add a '~' to make the commandline 'rm *.txt~', which is what I wanted. In zsh this won't work.

As to crippled configurations, I don't know. I base my experience on the default zsh package on Fedora Core 3 and on a custom build of zsh on Solaris at my old university. Both have underwhelming default settings. Whether they are crippled, intentionally or by mistake, I don't know.

And as to further bloating fish: you're right, I don't want to bloat it. But on the other hand, I've tried to make it possible to hack fish without bloating the shell itself. Here are a few examples of hacks that I've included in the default version of fish:

Press Meta-L to list the current directory. But if the cursor is over a string that names a file or directory, the contents of that directory are printed instead.

The vared function is used to edit the value of a variable. It has its own history, separate from the main command buffer, and includes quote and parenthesis highlighting, X clipboard copy and paste, etc.

Ctrl-R does a regexp replace on the current commandline. If the current commandline is empty, the first entry of the command history is used instead.

These are all implemented as simple keybindings and/or fish functions, without bloating the shell with builtins.

Algorithms for loading history file

Posted Jun 1, 2005 0:27 UTC (Wed) by Blaisorblade (guest, #25465)

> The history file takes about a quarter of a second to load 1000
> entries, though. That has to improve.
Since this is something smart algorithms can help with (I'm not sure about the load time, but at least with the search time), I wanted to answer. Have you thought of using a trie (also called a prefix tree)? It's a tree (not binary) where each word is stored split into its characters, one character per level. So "cat a" and "cat b" are stored along the same tree path, except for the last node:

(representing the tree horizontally)
<c> -> <a> -> <t> -> < > -> <a> | <b>

This helps a lot with searches from the beginning of the string. The root would be an empty node, having all (used) first letters as children.
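
In C, a minimal sketch of such a trie might look like the following (the names, the 7-bit alphabet, and the lack of error handling are all my own simplifications, nothing from fish):

#include <stdlib.h>

#define ALPHABET 128  /* 7-bit ASCII, for simplicity */

struct trie_node {
  struct trie_node *child[ALPHABET];
  int is_entry;               /* non-zero if a history entry ends here */
};

struct trie_node *node_new(void) {
  return calloc(1, sizeof(struct trie_node));
}

/* Insert a string, creating one node per character along its path. */
void trie_insert(struct trie_node *root, const char *s) {
  for (; *s; s++) {
    unsigned char c = (unsigned char)*s % ALPHABET;
    if (!root->child[c])
      root->child[c] = node_new();
    root = root->child[c];
  }
  root->is_entry = 1;
}

/* Follow the path for a prefix; the returned subtree contains every
   entry starting with that prefix, which is what prefix search needs. */
struct trie_node *trie_find_prefix(struct trie_node *root, const char *p) {
  for (; root && *p; p++)
    root = root->child[(unsigned char)*p % ALPHABET];
  return root;
}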

Actually, this wouldn't help for Ctrl-R searches... a solution off the top of my head (but which isn't in textbooks, so it's not as well studied as the above) is to have a parallel array that contains all letters, where each letter points to all occurrences of that letter in the main tree. So for "a" you find a list of all the "a" nodes in the main tree. Ideally you would have a parallel trie for every "a" in the main tree, but this would be hard (either you store the whole trie again, or all the "a" nodes share their descendants, which isn't nice).
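
Reusing the trie_node from the sketch above, that parallel index could be a list head per letter, which trie_insert would append to whenever it creates a node for that letter; again, just a rough illustration of the idea:

struct occurrence {
  struct trie_node *node;      /* a trie node reached via this letter */
  struct occurrence *next;
};

/* occ_index['a'] lists every "a" node in the main tree; a Ctrl-R
   style search for a substring starting with 'a' would try to match
   the rest of the substring downward from each listed node. */
struct occurrence *occ_index[ALPHABET];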

If you want, we can discuss this further (even if I'm not a deep expert on this)...

Algorithms for loading history file

Posted Jun 1, 2005 11:11 UTC (Wed) by liljencrantz (guest, #28458)

The history search uses a linear search right now, since the history list is so small that searches take virtually no time at all. Making the search use a cleverer algorithm would just increase code bloat without adding any noticeable benefit for the user.

The thing that _does_ take time is reading the history file from disk. I haven't looked into whether this is because of disk latency, because fish does multiple mallocs for every entry (meaning several thousand mallocs are performed when loading the file into memory), or because of the slow hash function in the table used for removing duplicates. The hash function fish uses is pretty slow, since it does quite a lot of work. It is designed in much the same way as the SHA family of cryptographic hash functions, but with fewer rounds and a smaller data set. The upside is that the distribution of the hash values is very good. The downside is that the hash function is a bit slow.

Algorithms for loading history file

Posted Jun 1, 2005 16:15 UTC (Wed) by Blaisorblade (guest, #25465)

OK, probably the search algorithm is fine...
For the hash function: SHA-1 has an almost zero probability of collision and gives a 160-bit hash; for avoiding collisions among around 5000 entries, it's really overkill. A good hash function for non-cryptographic purposes (assuming you then compare the entries that get the same hash) is of the following kind:

#include <stdio.h>

/* Polynomial string hash: maps a into the range [0..size). */
int hash(const char a[], int size, int prime) {
  const char *p = a;
  int sum = 0;
  while (*p) {
    sum = sum * prime;
    /* Cast to unsigned char so bytes above 127 can't drive sum negative. */
    sum = (sum + (unsigned char)*p++) % size;
  }
  return sum;
}

It returns an integer in the range [0..size) (prime must be a prime number, which you usually choose at compile time; prime = 131 is a good choice). It is easy to turn this into a testing program with the code below: give size and prime on the first line, then feed it a test file, and then count the average number of collisions per used hash code with sort | uniq -c and a bit of awk.

char str[1000];

int main(void) {
  int size, prime;
  scanf("%d %d", &size, &prime);
  /* Hash every whitespace-separated word on stdin, one result per line. */
  while (scanf("%999s", str) != EOF) {
    printf("%d\n", hash(str, size, prime));
  }
  return 0;
}

