Phundrak's Blog

[EN] Open-Sourcing ALYS   ALYS

Too Long, Didn't Read

VoxWave no longer exists as a company, but ALYS lives on as an open-source project under the GPL-3.0 and CC-BY-4.0 licences. You can find it at the following address:

https://labs.phundrak.com/phundrak/ALYS

EDIT: The main repository moved to https://labs.phundrak.com/ALYS/ALYS and the vocal libraries are now split into separate repositories linked from the main one.

What happened?

You might have noticed it, but VoxWave became quite silent over the last months. This is because we at the head of VoxWave chose to close the company, a decision which came into effect in early September 2021. There's not much else to say.

However, the good news is that the rest still goes on: ALYS as a project is alive and well! As her creator, I decided to step in and continue its technical support. Since the company no longer exists, and as a free and open-source software supporter, I also decided to open-source ALYS as much as possible. As a result:

ALYS for Alter/Ego is now free as in free beer.

ALYS for UTAU, including its previously unreleased UTAU prototype, is now free as in free beer and in freedom.

You can find the installer of ALYS for Alter/Ego on the repository linked above, as well as a free licence file. As for the UTAU version, its prototype comes already configured with oto.ini files, but the source files for the Alter/Ego version are stripped of any configuration.

Whats New?

ALYS is now available under three different licences. Basically, this means you can do whatever you wish with the character as long as it is non-commercial and you credit Saphirya, ALYS' designer. The UTAU vocal libraries can be used, modified, and redistributed as much as you wish as long as they stay under the GPL-3.0 licence. And you are free to use the Alter/Ego vocal libraries as much as you wish, but you cannot redistribute or modify them.

I also decided to release ALYS' very first, secret, unreleased, unheard French vocal library. It was scrapped not long after recording due to quality issues and was replaced by the French UTAU prototype people could hear in ALYS' first songs. It is released more as a way of preserving the fact that it existed rather than as a usable vocal library. (I don't even remember what it sounds like.)

If you have any questions, feel free to email me at lucien@phundrak.com or to open an issue on the repository mentioned above.

Conlanging   @conlang

TODO Writing my conlanging docs with Emacs   emacs conlanging

Development   @dev

[FR] Upgrading my org-mode websites   dev emacs

The Problem

I have been thinking for a while now about a new way of managing two of my websites, conlang.phundrak.com and config.phundrak.com.

Both sites are currently generated through an export from org-mode (one of Emacs' many modes) directly to HTML. The problem is that I don't like the organisation of the output HTML files, and for more than two years I have been using a script written in Dart and compiled to JavaScript to reorganise them. In itself this wouldn't be too bad if my web pages weren't so heavy. But they are! The heaviest page of my linguistics site weighs 232 KB (the French page on Proto-Ñyqy), and the heaviest of my configuration site weighs 5.5 MB (my Emacs configuration)! And I'm talking about HTML files here! This really has to change!

A New Framework for the Front-End

Initially, I had set out to write a custom exporter that would export my org-mode files to JSX files to be used by a React project, or even a Next.js one. But I recently discovered something that could be much more convenient for me: Vue, and more specifically Nuxt!

Indeed, Nuxt reads MDC, or Markdown Components. With MDC and Nuxt, it is possible to insert Vue components into Markdown, either as blocks or inline. And for me, that changes everything! I can now write a minimal exporter which will simply export a few custom elements to Vue components, or even simple org-mode macros to export the inline components.

And of course, to address the problem of overly heavy HTML files, I will have to split my current files into several smaller ones, but that should be easier to manage once the transition to the new framework is done.

What About the Backend?

But that is not all: one thing I would like to add to my linguistics site is a dictionary between my constructed languages and other languages, whether constructed or not. This dictionary has to be interactive, with for example a search feature, a page per word, and so on.

I am certainly not going to make my users download the entire dictionary every time they search for a word, so it cannot be hosted with my frontend, and I will need a backend with a REST API to handle requests from the website's visitors. Now the question is: what kind of backend?

First of all, let me complicate the problem a bit: I am a big fan of org-mode. I could manage all this with a classic database, adding each entry manually, but I will instead try to manage everything through org-mode. Text files are easier to version than single-binary-file databases. So I will have to write myself a new exporter, but which one?

I could write an exporter for my dictionnaire.org file that would export it to a JSON file, which my backend would then read, extracting the necessary information and sending it to my users. The advantage would be barely having to manipulate the JSON at all and sending it mostly as is. But constantly opening and closing the file is not necessarily the best of ideas, although it would make it possible to replace the file while the backend is running. Still, I am sure we can do better.

My next solution was to use EmacSQL, an Emacs package that lets it interact with SQLite, PostgreSQL, and MySQL databases. At least this would be a real database, with only one binary blob to update, and it would potentially be more performant, given that a connection to it only needs to be opened once. But now the problem is updating it. Darn…

Finally comes my third solution, which I think is the one I will adopt: using a Firebase-like database. I am not exactly fond of the idea of vendor lock-in, so I decided to use an open-source, self-hostable alternative: Appwrite! I can write to one of its databases while my users read from it, so updates are not a problem, and I have nothing to upload, only a series of requests to make. However, one problem remains: how do I communicate with Appwrite?

The Quest for an Appwrite SDK for Emacs

Alas, no matter how hard I searched, there is no Emacs package for communicating with Appwrite. But that is not exactly surprising: Appwrite is not yet extremely widespread, and even Firebase has no Emacs package.

Thankfully, Appwrite has a fairly well-documented REST API, and Emacs can handle requests natively through its url library, so I naturally started working on appwrite.el, an Appwrite SDK for Emacs Lisp. I could have used request.el, a fairly popular Emacs package for handling HTTP requests, but I am not a big fan of its workflow and I prefer to keep the number of dependencies in my packages to a minimum. What this package currently does is transform the named parameters my functions accept into a JSON payload. For example, my function appwrite-storage-list-buckets accepts the keywords search, limit, offset, cursor, cursor-direction and order-type. These arguments are transformed into JSON through Emacs' native library, giving this:

{
  "search": "my search request",
  "limit": 30,
  "offset": 0,
  "cursor": "",
  "cursorDirection": "before",
  "orderType": "ASC"
}

This JSON payload is then sent to the corresponding REST API, in this case /v1/storage/buckets, as shown in the official documentation. Of course, optional elements are not necessarily included, in order to avoid sending more data than needed. In this case, all elements of the payload are optional, so appwrite.el may end up sending only an empty object {} as the payload to the API.

For now, the project is still in its early days, but I have started working on the SDK for Appwrite, which you can find on this GitHub repository.

The question now is: how do I export my dictionary to Appwrite? The answer seems relatively simple to me; I could write an org-mode exporter depending on appwrite.el which, for each word it encounters, will export a JSON payload to my personal Appwrite instance. And unlike the usual org-mode exporters, ox-appwrite will not write any file to my system.

Conclusions

As I analysed the project and my needs, I realised I would need smarter tools than simple HTML pages automatically exported through Emacs.

Thus, I will need to create a website with Nuxt, taking advantage of its ability to render Markdown with interactive content, acting as the frontend of my website. This Markdown will be exported via org-mode from my already existing files, though they will have to be split up in order to reduce the size of the output files.

The backend will be an Appwrite instance I will host myself on my servers. It will be populated by a custom org-mode exporter through Emacs, which will let me keep managing my dictionaries and languages with org-mode.

This project is really interesting, as it pushed me to explore many different possibilities and technologies to find what best matches my needs, for instance by realising that React was not necessarily the best-suited tool for this particular project. It will also make me work on my ability to interact with backends and REST APIs, both on the front-end side for the website and on the SDK side with Emacs. Finally, creating this SDK and these org-mode exporters will help me deepen my knowledge of Emacs and Emacs Lisp.

Now, let's get to work!

[EN] Writing a Dynamic Array in C   dev C

Edit on October 28th 2023: This article was written on November 28th 2020, almost three years ago. Since then, I have noticed issues with the current implementation of my dynamic C array, as noted by some readers in the comments below. I will probably write a new dynamic array in C sometime in the future, addressing these issues.

Although C is a very, very popular language, it is also known to be quite tiny: memory is handled manually, and much of what its standard library offers is a given in all other languages. C being a low-level language also means it lacks a lot of what other popular languages provide; for instance, dynamic arrays are present in the standard library of most popular languages, be it JavaScript, C++, or Rust, but C's simplicity keeps them out of its own. If you want one in C, you have to implement it yourself, which is exactly what I did!

Introduction

When I wrote this library, I was mostly inspired by C++'s std::vector and Rust's std::vec::Vec, but my library lacks some features both have: it's still a simple one. Here is the list of what it is able to do:

  • Create a dynamic array, with or without an initial capacity specified by the user
  • Store a function pointer to the destructor of the elements that will be stored in the vector for when they are destroyed
  • Append new elements at the end of the array
  • Get elements by position, safely or not, or get the first and last elements in the array
  • Get the length of the vector as well as its capacity
  • Shrink the size of the allocated array to the size of the vector
  • Remove an element at a specific index, or the last element
  • Completely destroy the vector and its elements

Elements that will be stored in the vector will need to be dynamically allocated in memory, since the vector will not store the elements themselves, but rather pointers to them. This way, we avoid copying data when inserting it into the vector, and handling these elements is also a tad easier. And since we do not know what we will be storing, we will store void pointers. Users will be able to cast them back to their desired type later on.

Before defining the vector, there are a few things I want to define. First, there is an attribute I will often use with my functions:

#ifndef NONNULL
# define NONNULL __attribute__((nonnull))
#endif

This forbids passing NULL pointers to functions marked with this attribute, since we will be manipulating a lot of pointers.

We will also need to include some headers:

assert.h
so we can make sure memory is allocated and reallocated correctly
stdlib.h
for memory management functions such as malloc, realloc, and free
string.h
for some memory operations such as memcpy
#include <assert.h>
#include <stdlib.h>
#include <string.h>

We also need to define the default capacity of newly created vectors; any small power of two works, four for instance:

#define VEC_INITIAL_CAPACITY 4

We also need to define a type that will be used as the destructor type. The functions we want to accept as destructors are functions that accept a void pointer to an element and return nothing, hence this definition:

typedef void (*Destructor)(void *element);

Now, onto the structure itself.

The Data Structure of the Vector

With our vector, we will need to keep track of a couple of things:

  • the size of the vector
  • the capacity of the vector
  • the destructor
  • the array itself

With this, we can describe our structure for the vector:

struct Vector_s {
  size_t     capacity;
  size_t     length;
  void **    elements;
  Destructor destructor;
};
typedef struct Vector_s Vector;

We now have four members:

elements
an array of void pointers, each pointing either to an element stored in the vector or to nothing (initialized to NULL; note this forbids storing NULL elements in the vector),
length
the number of elements currently stored in the vector,
capacity
the size of the memory allocation pointed to by elements, divided by the size of a void pointer. This gives us the maximum number of elements the vector can hold without any reallocation,
destructor
a pointer to the function used to free elements stored in the vector

Now, onto the functions associated with this data structure. They are all prefixed with vec_ in order to avoid any collisions with other libraries and functions.

Building Vectors

The first function for building vectors is vec_new(). Here is its definition:

Vector *vec_new(Destructor const destructor);

It is quite straightforward: to create a new, standard vector, simply pass as its argument a pointer to the destructor for this vector, either a NULL pointer for trivial data types, or a pointer to an existing function you declared somewhere. Once you do that, you get a pointer to the newly created vector in which you can now store elements. Let's see how it works under the hood:

Vector *vec_new(Destructor const destructor)
{
  Vector *self;
  self = (Vector *)malloc(sizeof(Vector));
  assert(self);
  *self = (Vector){.length     = 0,
                   .capacity   = VEC_INITIAL_CAPACITY,
                   .elements   = (void **)malloc(sizeof(void *) * VEC_INITIAL_CAPACITY),
                   .destructor = destructor};
  assert(self->elements);
  return self;
}

A new pointer is created, which will be the pointer returned to the user. To this pointer, we allocate enough memory to hold a vector. Once that is done, we initialize this new memory buffer with an actual vector, with its members initialized as described above. An assertion is done in order to ensure both the vector but also its storage are correctly allocated.

The second function, vec_with_capacity, is quite similar to vec_new, though not identical: it initializes the vector with a user-defined storage capacity. That is, if vec_with_capacity(NULL, 14) is called, the library will return a pointer to a vector which can contain precisely fourteen elements. That way, if users know how many elements they'll need to store in a vector, they can reserve exactly that much and limit the number of reallocations when adding new elements. Its definition is the following:

Vector *vec_with_capacity(Destructor const destructor, size_t const capacity);

Under the hood, it calls vec_new, then reallocates the memory already allocated for the elements member.

Vector *vec_with_capacity(Destructor const t_destructor,
                          size_t const     t_capacity)
{
  Vector *self = vec_new(t_destructor);
  free(self->elements);
  (*self).elements = (void **)malloc(sizeof(void *) * t_capacity);
  assert(self->elements);
  (*self).capacity = t_capacity;
  return self;
}

Adding Data

The main feature of vectors is to hold data, so let's make ours able to accept new data from the user. But first, let me explain a bit how this dynamic array, which I call a vector, works in C.

As you saw earlier, a vector is initialized with a fixed amount of memory allocated to the vector, so people can store their data in these arrays. Now, imagine you have an array of four elements, and you wish to add one more, what to do? You can reallocate your array with realloc with one more slot for your element, so now you have an array for five elements with your four original elements and a free slot for your fifth. Cool, now you can add new elements as you need them!

Except that if you want to add tens of thousands of new elements, you would end up calling realloc tens of thousands of times, and that is slow. Seriously, try it, you'll see what I mean. And every one of these calls to realloc is an opportunity for it to fail. So let's limit the calls to this function: if we run out of slots in our current array, let's double its capacity. A four-slot array becomes an eight-slot array, then a sixteen-slot array. Within a couple more calls to realloc, we'll reach our tens of thousands of slots, way faster than by incrementing the capacity one by one.

“But we'll end up with a lot of unused memory if we need just one more element than 2¹⁶ elements! We don't need a 2¹⁷-element array for 2¹⁶+1 elements!”

You're completely right, but that's a tradeoff. Would you rather have a slow but memory-efficient program, or a fast but memory-hungry one? Plus, as you'll see later, there is a function to shrink the allocated array down to the actual number of elements you stored in it, making it possible to temporarily have a 2¹⁷-element array and immediately shrink it down to 2¹⁶+1 once you know you won't be adding any more elements.

With this out of the way, let's see how to add new elements to our vector. First, let's declare a static function that reallocates the memory of a vector. Here is its declaration:

static void vec_realloc(Vector *const self) NONNULL;

Its implementation is rather simple: double the vector's capacity, and reallocate its array to twice its previous size. Of course, there is an assertion checking that the array has been correctly reallocated, to ensure memory safety.

void vec_realloc(Vector *const self)
{
  self->capacity *= 2;
  self->elements = realloc(self->elements, sizeof(void *) * vec_capacity(self));
  assert(self->elements);
  return;
}

Now, we can proceed to element insertion. Here is the definition of vec_push, which adds a new element at the end of the vector:

void   *vec_push(Vector *const self, void *const element) NONNULL;

As you can see, it takes as its arguments a pointer to the vector (the one returned by its constructor) as well as a pointer to the element to be added. This is an important point: the vector does not store the elements themselves, only pointers to them. If the function detects there is not enough space for a new element, a call is made to vec_realloc, described above. Once the function is done, it returns a pointer to the newly inserted element.

void *vec_push(Vector *const self, void *const t_element)
{
  if (vec_length(self) >= vec_capacity(self)) {
    vec_realloc(self);
  }
  self->elements[(*self).length++] = t_element;
  return vec_last(self);
}

And this is it! A function may be added later to allow inserting a new value at any valid position between the first and last positions of the array (not counting its unused slots); if I implement it, vec_push will be reimplemented to rely on this potential new vec_insert.

Retrieving Data

Two functions are available for retrieving data: vec_safe_at, which safely retrieves the element at a given index, and vec_at, which is a bit more performant but lacks the safety of the former. Let's see the definition of both:

void   *vec_safe_at(Vector const *const self, size_t const index) NONNULL;
void   *vec_at(Vector const *const self, size_t const index) NONNULL;

Both take the same arguments: the first is a pointer to the vector we want to manipulate, and the second is the index at which we want to retrieve our data. To see how the two differ, let's first look at the definition of vec_at:

void *vec_at(Vector const *const self, size_t const index)
{
  return self->elements[index];
}

vec_at is really straightforward: it is just syntactic sugar around the vector's elements member and behaves exactly like square-bracket indexing in standard C. However, vec_safe_at performs an additional check, as you can see below:

void *vec_safe_at(Vector const *const self, size_t const t_index)
{
  return (t_index >= vec_length(self)) ? NULL : vec_at(self, t_index);
}

If the requested index is beyond the furthest valid index, a NULL pointer is returned; otherwise, the pointer to the requested element is. With this function, it is possible to check whether an element was actually returned while avoiding a possible segfault or something similar. It can be used in a loop, for instance, to make sure we only touch valid elements.

It is also possible to directly retrieve the last element with vec_last. Here is its definition:

void   *vec_last(Vector const *const self) NONNULL;

Just like the previous functions, its implementation is really straightforward:

void *vec_last(Vector const *const self)
{
  /* An empty vector has no last element, so return NULL in that case. */
  return vec_length(self) ? vec_at(self, vec_length(self) - 1) : NULL;
}

For the sake of the Object-Oriented Programming paradigm, two functions were also declared to retrieve data that would otherwise be accessed directly through the structure's members:

size_t  vec_length(Vector const *const self) NONNULL;
size_t  vec_capacity(Vector const *const self) NONNULL;

Their implementations are extremely trivial and don't really need any explanation.

size_t vec_length(Vector const *const self)
{
  return self->length;
}

size_t vec_capacity(Vector const *const self)
{
  return self->capacity;
}

Deleting Data

While this chapter is about destroying data, this first function will not exactly destroy data, or at least not data we care about: vec_shrink_to_fit reallocates the vector's memory so that the elements member is exactly large enough to store all of our data and nothing more. Here is its definition:

void    vec_shrink_to_fit(Vector *const self) NONNULL;

There's nothing too exciting about its implementation: a simple reallocation to exactly the number of elements currently stored times the size of a void pointer, and an assert verifies that the memory has been correctly reallocated. Nothing is returned.

void vec_shrink_to_fit(Vector *const self)
{
  if (self->length <= 0) {
    return;
  }
  self->capacity = self->length;
  self->elements = realloc(self->elements, sizeof(void *) * vec_capacity(self));
  assert(self->elements);
  return;
}

Notice that a check is done to see if the vector is empty, because calling vec_shrink_to_fit on an empty vector would otherwise request a zero-byte reallocation and trip the assertion on the result.

Next, we have two functions: vec_pop_at and vec_pop. The latter relies on the former, which can delete an element at any valid position. Beware: these functions return nothing and simply delete the element. Here is their definition:

void    vec_pop_at(Vector *const self, size_t const index) NONNULL;
void    vec_pop(Vector *const self) NONNULL;

In order to ensure memory safety, a static function is declared in src/vector.c which deletes an element, calling the destructor if one was provided when the vector was built. Its definition is the following:

static void vec_maybe_delete_element(Vector const *self,
                                     size_t const  t_index) NONNULL;

Its implementation is quite simple: if a destructor exists, the element at the requested index is destroyed through it. Otherwise, nothing is done with the destructor, hence the name vec_maybe_delete_element. Note, however, that the element itself is always freed from memory, so if users still need it before popping it, they must retrieve it with something like vec_at and store it elsewhere.

void vec_maybe_delete_element(Vector const *self, size_t const t_index)
{
  void *element = vec_at(self, t_index);
  if (self->destructor) {
    self->destructor(element);
  }
  free(element);
}

Now that we have this function sorted out, we can implement our pops. Here is the implementation of vec_pop_at:

void vec_pop_at(Vector *const t_self, size_t const t_index)
{
  if (vec_safe_at(t_self, t_index) == NULL) {
    return;
  }
  vec_maybe_delete_element(t_self, t_index);
  if (t_index + 1 < vec_length(t_self)) {
    memmove(&t_self->elements[t_index], &t_self->elements[t_index + 1],
            sizeof(void *) * (t_self->length - (t_index + 1)));
  }
  --(*t_self).length;
}

A check is performed at the beginning of the function: the element we want to pop must actually exist. If it does not, the function does nothing; otherwise, the call to vec_maybe_delete_element frees the requested element. Then, a check determines whether the requested element was at the end of the array. If it was not, the elements located after the destroyed one are shifted one slot closer to the beginning of the array; if it was, nothing more needs to be done. Lastly, the count of elements stored in the vector is decreased by one.

vec_pop uses the above function to provide a simpler call for deleting the last element of the array. We can see how it relies on vec_pop_at in its implementation:

void vec_pop(Vector *const self)
{
  vec_pop_at(self, vec_length(self) - 1);
}

Finally, vec_delete allows for the complete destruction and deallocation of a vector, including all of its elements. Here is its definition:

void    vec_delete(Vector *const self) NONNULL;

In its implementation, we can see three distinct steps:

  • The deletion of all its elements if a destructor exists
  • The deletion of the array of the vector
  • The deletion of the vector itself.
void vec_delete(Vector *const self)
{
  if (self->destructor) {
    for (size_t i = 0; i < vec_length(self); ++i) {
      self->destructor(self->elements[i]);
    }
  }
  free(self->elements);
  free(self);
}

The Final Source Code

Finally, we can see the whole source code. Here is the header for the library: vector.h

#ifndef VECTOR_H_
#define VECTOR_H_

<<vector-nonnull-h>>

<<vector-struct-def>>

<<vector-vec_new-h>>
<<vector-vec_with_capacity-h>>
<<vector-vec_push-h>>
<<vector-vec_at-h>>
<<vector-vec_last-h>>
<<vector-vec_length_capacity-h>>
<<vector-shrink_to_fit-h>>
<<vector-vec_pop-h>>
<<vector-vec_delete-h>>

#endif /* VECTOR_H_ */

And here is the implementation file: vector.c

#include "vector.h"

<<vector-includes-c>>

<<vector-vec_realloc-def-c>>
<<vector-vec_maybe_delete_element-def-c>>

<<vector-vec_new-c>>

<<vector-vec_with_capacity-c>>

<<vector-vec_realloc-c>>

<<vector-vec_push-c>>

<<vector-vec_at-c>>

<<vector-vec_safe_at-c>>

<<vector-vec_last-c>>

<<vector-vec_length_capacity-c>>

<<vector-shrink_to_fit-c>>

<<vector-vec_maybe_delete_element-c>>

<<vector-vec_pop_at-c>>

<<vector-vec_pop-c>>

<<vector-vec_delete-c>>

And with that, we should be good! I used this library in a SOM (Kohonen, 1982) implementation and ran it through Valgrind, and there were no memory leaks. If you find one though, don't hesitate to tell me in the comments, through social media such as Twitter, or by email.

Happy programming!

Emacs   @emacs

Emacs 31 is coming, and here's what's new!   dev emacs release

A few years ago, I published a blog post regarding what was new in Emacs 29 as it came close to release. I missed the mark for Emacs 30, but now Emacs 31 is getting ready for release.

So, what can we expect from Emacs 31? Everything is written in its NEWS file, but here are some elements I think are important. Be warned: although I'm not as hyped as I was for Emacs 29, which brought a few big features, this article is quite a bit longer.

Breaking Changes

Configuration

site-start.el will now load before your early-init.el, instead of after it.

Python

python-mode will now default to calling python instead of python3, though it will fall back to python3 if python is not found. Most modern systems no longer ship Python 2, and python most likely points to Python 3 already. If python still points to Python 2 on your system, you MUST change the value of python-interpreter and python-shell-interpreter.

As Python 2 has been EOL for five years now, its support is now optional and disabled by default.

Editing

With the new option kill-region-dwim set to non-nil, calling kill-region will now kill the last word instead of raising an error if no region is selected.

Electric Pair mode got better: you can now set strings using multiple characters in electric-pair-pairs, such as '("r#\"" . "\"#") to surround a region with r#" and "#. And if you want an extra space between your delimiters and the selected region, you can instead use '("r#\"" "\"#" t). Also, providing a numerical prefix argument to electric pair allows you to insert multiple delimiters at once. Now, I just need mode-aware electric pairs to replace evil-surround.

Do you use query-replace? Well, you can now use M-s t to swap FROM and TO during a query-replace or query-replace-regexp. And the original M-s is now M-s M-s or M-s s.

And do you like always having the line you're editing at the centre of your window? Activate center-line-mode.

Accidentally hit M-q (fill-paragraph) and you want to undo it? Or you simply want to “unfill” your paragraph? Simply invoke unfill-paragraph (which I will probably bind to M-Q).

TTY Improvements

One of my biggest gripes with Emacs in the terminal is how limited it feels compared to its GUI version. Child frames, for instance, are one of TTY Emacs' limitations. Or rather, they were.

Starting from Emacs 31, TTY Emacs will support child frames, thanks to tty-child-frames. Hurray for Posframe and Corfu users, among many others!

The option xterm-mouse-mode is now also enabled by default in terminals that support it, allowing Emacs to access mouse events and the OS clipboard. This means you can now bind mouse events to Emacs functions, but at the cost of having to rely entirely on Emacs to copy and paste text instead of relying on your terminal emulator.

Also, you can now rename your TTY frames to F<number>, though it will throw an error if that name is already taken.

Proper Support for User Lisp Directories

Emacs will natively support your user-lisp/ directory in your Emacs config directory (either your $HOME/.emacs.d/ or your $HOME/.config/emacs/ directories) by recursively byte-compiling all of its .el files and adding them to your load-path. It will also look for autoloaded elements like it would for other packages, so no need to explicitly require your .el files anymore!

This feature can be disabled with (setq user-lisp-auto-scrape nil), or you can change the directory user-lisp-directory points to if your personal Elisp files are stored somewhere else.

Very nice, thanks Emacs devs!

Visual Customization and Improvements

Display

The new char-table special-mirror-table allows you to define replacement characters for characters Emacs may have trouble displaying. I think that, for most native English speakers, this feature might be pretty useless, but it can be very interesting if you deal with glyphs that are not ASCII, especially if they are part of your writing system (Arabic, Mandarin, Cyrillic, etc…).

I, for one, am excited about this: I use Emacs for most of my worldbuilding projects, which include conlanging (creating languages), and that sometimes requires characters Emacs has trouble representing. Some glyphs in my Linux configs also render properly with certain fonts but not with the font Emacs uses, making those configs hard to read (I'm looking at you, my Waybar configuration, which I should remove since I already don't use Waybar any more).

On the topic of display customization, a few font-lock faces were deprecated:

  • font-lock-builtin-face
  • font-lock-comment-delimiter-face
  • font-lock-comment-face
  • font-lock-constant-face
  • font-lock-doc-face
  • font-lock-doc-markup-face
  • font-lock-function-name-face
  • font-lock-keyword-face
  • font-lock-negation-char-face
  • font-lock-preprocessor-face
  • font-lock-string-face
  • font-lock-type-face
  • font-lock-variable-name-face
  • font-lock-warning-face

They all have equivalents; you should customize those instead of these deprecated faces.

Windows

We get some new commands for manipulating our window layouts!

  • C-x w t and C-x w r <left>/<right> to rotate the window layout
  • C-x w o <left>/<right> to rotate the windows within the current layout
  • C-x w f <left>/<right>/<up>/<down> to flip the layout

You can now also tell Emacs to kill buffers when their window is closed, thanks to the kill-buffer-quit-windows option. But I think I'll personally stick to kill-buffer-and-window; this new option seems a bit overkill for me. Still, quite nice to have!

Some commands and functions create new windows on their own. Emacs's current behaviour is to split below if possible, and split right otherwise. But now, split-window-preferred-direction introduces three values:

'longest
somewhat similar to the current behaviour, and the new default value: split below if your window is taller than it is wide (Emacs's preferred direction whenever possible), split right otherwise. And if both options are possible, you can set split-width-threshold (now 150 instead of 160) and split-height-threshold to determine which behaviour to follow.
'vertical
always split below
'horizontal
always split right
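Put together, a configuration sketch for the behaviour described above (the split-height-threshold value is an arbitrary example):

```emacs-lisp
(setopt split-window-preferred-direction 'longest ; the new default
        split-width-threshold 150                 ; new default, was 160
        split-height-threshold 80)                ; example value
```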

The new command other-window-backward is also finally here! Ever wanted to go back to your initial window after C-x o (other-window)? Just use C-x O to go back!

Frames

Ever wondered how much time you've spent in a frame, like you can already determine for windows with window-use-time (which I just discovered now)? With Emacs 31, you can now use the function (not command) frame-use-time.

delete-frame now sends you to your most recently used frame, not the first one in the list of frames. A small change, but a welcome change.

The new command split-frame allows you to create a new frame and send windows of your current frame to it. The command merge-frames, on the other hand, brings a frame's windows back into another before killing it. Very nice if you want to bring a TTY frame back into a GUI frame, and vice versa.

Also, frames cloned with clone-frame (which I just discovered exists) are now aware of which frame they were cloned from, and of whether they were undeleted with undelete-frame (how many commands will I learn exist while writing this article?). And all frames now have a unique ID, making it much easier to refer to a specific frame in your Elisp code, such as with the new commands select-frame-by-id or undelete-frame-by-id.

Mode Line

The mode line can now collapse its minor modes when mode-line-collapse-minor-modes is set to non-nil, useful when it starts to feel bloated. By default, it's nil, so the default behaviour won't change. The mode line also became much easier to customize, with mode-line-modes-delimiters to change or remove the existing delimiters. Writing new mode line themes is about to get a lot easier!
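A configuration sketch based on these options (the cons-cell format for mode-line-modes-delimiters is an assumption on my part; check its docstring):

```emacs-lisp
;; Collapse the list of minor modes in the mode line:
(setopt mode-line-collapse-minor-modes t)
;; Assumed value shape: delimiters around the mode list, e.g. "[…]":
(setopt mode-line-modes-delimiters '("[" . "]"))
```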

But what if you don't want to see the mode line at all? Well, hide it with mode-line-invisible-mode, and enjoy your distraction-free Emacs!

Tabs

When tabs were introduced in Emacs, I didn't really see the point initially, until I realized they're somewhat similar to sub-frames without actually creating new frames. Very nice if, like me, you prefer to have a single frame despite working on several projects with the same Emacs instance. But an issue I often encounter (might be a skill issue on my part) is that tabs sometimes become quite bloated, crossing over multiple projects, at which point I create another tab and restore one specific project to it, recreating my window layout with the buffers I want. That's a tad tedious.

Well now, you can invoke the command split-tab to clone your current tab to a new one and keep your windows! And of course, it comes with merge-tabs, in case you're finally done with the specific issue your tab was for and want to go back to the project's general tab. And in case you have a lot of tabs open, tab-bar-truncate, when set to non-nil, will truncate your tabs list instead of squishing tabs together, avoiding any ugly text wrapping.

The use case of tab-line-mode is, however, a bit more mysterious to me, but I guess it makes sense when you come from editors like VSCode and are used to seeing all your open files as tabs (not Emacs tabs, but more what I expected tabs to be when they were first announced). You can now set tab-line-define-keys to nil to prevent tab-line-mode from redefining C-x <left>/<right> to switch between the visual tabs, and go back to Emacs's vanilla behaviour. You can also move a tab's position among your tabs in tab-line-mode with the new commands tab-line-move-tab-forward and tab-line-move-tab-backward, which are bound to C-x M-<right>/<left>. And you can set tab-line-exclude-buffers to exclude known buffers from the tabs, such as *scratch* or i-dont-want-my-boss-to-see-this-when-he-walks-by.txt. In fact, you can have even more powerful filtering using tab-line-tabs-window-buffers-filter-function. And with the option tab-line-close-modified-button-show, the close button can visually warn you that the buffer has been modified but not saved. Nice.
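A sketch of these tab-line options (the value shapes and buffer names are assumptions on my part):

```emacs-lisp
;; Keep Emacs's vanilla C-x <left>/<right> bindings:
(setopt tab-line-define-keys nil)
;; Hide these buffers from the tab line (example names):
(setopt tab-line-exclude-buffers '("*scratch*" "*Messages*"))
;; Warn visually on the close button when the buffer is modified:
(setopt tab-line-close-modified-button-show t)
```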

Something I just learned is that you can close tabs with your mouse's middle click. But what if you made a mistake, clicked on the wrong tab, and realized it before releasing the button? Until Emacs 30, that's too late. Since Emacs 31, the tab is only deleted once you release the button, so you can still move the mouse and release it either on the correct tab, or outside the tabs area if you don't want to close anything.

Completion Improvements

The *Completions* buffer can now be much faster, updating as you write, given the eager-update completion property is non-nil. If you don't like the default value of the property, you can override it with completion-category-overrides. You can also force the completion to update eagerly with (setq completion-eager-update t) (or any non-nil value, but why not just use t?), but that can slow Emacs down; I turned it off on my ThinkPad X220 and its Intel Core i5-2540M (yes, I still use it), but on for my main desktop computer with its AMD Ryzen 7 9800X3D. I should upgrade my X220's CPU sometime. Fortunately, the *Completions* buffer still got a performance upgrade, especially when many candidates exist, though with one caveat (see below in this chapter).

You can also now separate what the up/down keys do from the left/right keys when in the minibuffer! If you set the minibuffer-visible-completions option to 'up-down, the up/down keys select different suggestions in the *Completions* buffer, while the left/right keys move your cursor in the minibuffer. Similarly, the M-<up> and M-<down> keys now allow you to select candidates in the *Completions* buffer, whether your completion is in the minibuffer or in-buffer. And in all cases, RET now chooses the completion you selected.

If you want to customize how the completion candidates are displayed, you can now use completions-format: set it to 'vertical, and selecting the next candidate means selecting the one below the currently selected one, wrapping to the next column when you reach the bottom. Setting it to 'horizontal keeps the old behaviour intact, selecting the option to the right of your current selection when using M-<down>. But be careful: setting completions-format to 'vertical will undo the performance improvements the *Completions* buffer received. Not an option for my ThinkPad.
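As a recap of the completion settings mentioned in this chapter (a sketch, with the option names as given in the post):

```emacs-lisp
;; Refresh *Completions* eagerly while typing (can be slow on old hardware):
(setq completion-eager-update t)
;; Up/down select candidates, left/right move point in the minibuffer:
(setopt minibuffer-visible-completions 'up-down)
;; Vertical candidate layout (note: undoes the buffer's performance gains):
(setopt completions-format 'vertical)
```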

By the way, your selection now stays consistent even if the *Completions* buffer updates! It's frustrating when you start selecting something, but for some reason something triggers a completions update, and now you have to move again to what you were about to select.

Minibuffer Improvements

How many times have I tried to do something, only for Emacs to not do what I wanted because the minibuffer was active but not actually selected? Well now, minibuffer-nonselected-mode will warn you when you should probably pay attention to the minibuffer, as it's waiting for your input. Especially useful when you think it's selected, but it's actually not.

Mouse Improvements

When selecting text with your mouse and invoking context-menu-mode, you can now select Send to... to send your text selection, or even the current file, to external applications!

Built-in Package Updates

Org Mode Updated to 9.8

You may already have Org Mode 9.8 if you don't use the version built into Emacs, but this new version comes with some nice new features, such as a new babel backend for C#, customizable image alignment, fixed and better LaTeX table export, and so on.

Project

No need to call M-x project-any-command followed by M-x find-file any more! You can now call project-root-find-file instead. Likewise, no need for M-x project-any-command then M-x customize-dirlocals; you can use project-customize-dirlocals instead.

The new command project-find-matching-buffer can also be useful when switching, for instance, git worktrees of the same repository, or simply repositories with a similar structure. You can customize its behaviour with project-find-matching-buffer-function.

You can also save only a project's files with M-x project-save-some-buffers or C-x p C-x s, similarly to Projectile's projectile-save-project-buffers.

Tree-sitter

The new option treesit-enabled-modes will enable all known tree-sitter modes by default when set to t, or only the tree-sitter-based modes in the list given to it, such as (setopt treesit-enabled-modes '(c-ts-mode nix-ts-mode uiua-ts-mode)). It may change major-mode-remap-alist based on treesit-major-mode-remap-alist if needed.

The user option treesit-auto-install-grammar is one step toward replacing treesit-auto, with treesit-extra-load-path being a list of directories where grammars are installed. If you install a grammar with treesit-auto-install-grammar, it will be installed in the first directory of that list. treesit-language-source-alist now supports keywords such as :commit, in case the default commit doesn't match what you want (a bug you want to avoid, or one you may consider a feature).
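A hedged sketch of what this could look like (the exact shape of a source entry with :commit is an assumption, and the commit hash is a placeholder):

```emacs-lisp
;; Grammars get installed in the first directory of this list:
(setopt treesit-extra-load-path
        (list (locate-user-emacs-file "tree-sitter")))
;; Pin a grammar to a specific commit with the new :commit keyword:
(add-to-list 'treesit-language-source-alist
             '(rust "https://github.com/tree-sitter/tree-sitter-rust"
                    :commit "0123456"))  ; placeholder hash
```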

By the way, the discoverability of what you can natively do with tree-sitter has improved! Use treesit-cycle-sexp-thing to explore the navigation commands you can call.

You can also use treesit-language-remap-alist to tell Emacs that language A is language B, which allows you to use B's parser for A. Especially useful if you know B is a superset of A, like TypeScript is of JavaScript.
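A sketch, assuming the alist maps the remapped language to the parser to use in its place:

```emacs-lisp
;; Parse JavaScript buffers with the TypeScript grammar:
(add-to-list 'treesit-language-remap-alist '(javascript . typescript))
```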

Tree-sitter also now properly supports lists and comments and allows you to act on them!

It also allows for better support of multiple programming languages in one buffer: treesit-simple-indent-modify-rules unifies indentation rules across languages, treesit-aggregated-simple-imenu-settings sets up Imenu for multiple languages, and treesit-aggregated-outline-predicate indirectly enables outline-minor-mode across multiple languages. That'll be quite enjoyable when I work on Vue files, with HTML, TypeScript, and LESS code all in the same file. Speaking of indentation, keep an eye on treesit-simple-indent-add-rules and treesit-simple-indent-override-rules.

Language Specifics

Doxygen is now supported by c-ts-mode and java-ts-mode if enabling c-ts-mode-enable-doxygen and java-ts-mode-enable-doxygen respectively.
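Enabling both, as a configuration sketch:

```emacs-lisp
;; Fontify Doxygen comments in C and Java tree-sitter buffers:
(setopt c-ts-mode-enable-doxygen t
        java-ts-mode-enable-doxygen t)
```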

go-ts-mode now has unit test support with a few new commands like go-ts-mode-test-function-at-point, which does exactly what you think it does.

php-ts-mode had a lot of work done: it now requires mhtml-ts-mode instead of js-ts-mode, css-ts-mode and html-ts-mode directly, and it now benefits greatly from the multilingual improvements I talked about earlier.

rust-ts-mode now fontifies number suffixes as types (like 10_u32) when rust-ts-mode-fontify-number-suffix-as-type is non-nil.

Eshell

Eshell also got some improvements: eshell-clear is now a better-behaved alternative to eshell/clear, while eshell-execute-file went from function to command.

You can also set the stderr of eshell-command and eshell-execute-file.

The syntax of Eshell also got an upgrade: the for command can now loop over integer ranges, such as 1..10 (first number included, last excluded), and you can also use else in if {condition} {true-command} else {false-command} (else remains optional). You can also now chain else if, as the false-command can be its own if/else statement.
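A sketch of the new syntax at an Eshell prompt (the command names in braces are placeholders, not real commands):

```
for i in 1..4 { echo $i }   # iterates over 1, 2, and 3: the end is excluded
if {some-test} {then-command} else {else-command}
if {test-a} {command-a} else if {test-b} {command-b} else {command-c}
```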

The history search got an improvement, with the ability to search with regular expressions with the two new eshell-isearch-backward-regexp and eshell-isearch-forward-regexp, or M-r for the backward search while M-s is now freed.

You can also keep inter-session history search off by leaving eshell-history-isearch at nil (the default value), which limits isearch to the Eshell buffer's content only. If set to t, it searches the input history only, and if set to 'dwim, it searches the input history only if point is after the last prompt.

A Few Additional Goodies

emacs-lisp-mode now supports semantic highlighting when elisp-fontify-semantically is non-nil.

A few years back, setopt came into Emacs as a better alternative to setq for most variables declared with defcustom. Well now, describe-variable will tell you if a variable should be set with setopt, or if other methods are fine.

Something my ThinkPad, and a lot of laptops, will be thankful for is the new option native-comp-async-on-battery-power: if set to nil, Emacs will not attempt asynchronous native compilation while your laptop is running on battery alone. The libraries that need compilation will be a tad slower, but you won't have to look for a power socket as soon as with Emacs 30. Especially nice for those in the Northern Hemisphere who want to enjoy the upcoming summer! Or if you're one of the weirdos like me who enjoy the cold more than the heat.
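The corresponding setting, as a one-line sketch:

```emacs-lisp
;; Skip asynchronous native compilation while running on battery:
(setopt native-comp-async-on-battery-power nil)
```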

Something I'll really appreciate is show-paren-not-in-comments-or-strings, which stops Emacs from highlighting parentheses and brackets in comments or strings.

Sending empty strings to emacsclient is now possible! Until Emacs 30, passing an empty string was the same as not passing one at all. Now, Emacs will understand it!

Emacs now supports Unicode 17.0, in case you wanted to write something with the Sidetic, Tolong Siki, Beria Erfe, or Tai Yo scripts. I was prepared to make an emoji joke, but surprisingly, Unicode 17 did not add any. Speaking of scripts, Emacs now supports new input methods, such as greek-polytonic for polytonic and archaic Greek, but also quite a few input methods for Northern Iroquoian languages, Burmese-based languages, and Syriac languages. My inner amateur linguist approves immensely!

Emacs now dislikes insecure protocols: its Network Security Manager will warn you about TLS 1.1 as well as DHE and RSA key exchanges.

f.el v0.21.0 released!   dev emacs release

Introduction

Today, a new stable version of f.el, a modern API for working with files and directories in Emacs, was released after six years!

While Melpa users should not see any difference with this release, Melpa Stable users should be able to upgrade from f.el 0.20 to f.el 0.21 within a few hours of the publication of this blog post.

Whats New?

A few new features landed in f.el 0.21, namely:

  • f-change-time, f-modification-time, and f-access-time, three new functions that help users deal with the atime, mtime, and ctime of a file. Thanks to Erik Anderson for his contribution!
  • f-newer-p, f-same-time-p, and f-older-p, building on the above-mentioned functions to compare the atime, mtime, or ctime of two files.
  • f-mkdir-full-path allows you to create a directory from a fully written path, such as (f-mkdir-full-path "some/sub/directory"). This is complementary to the f-mkdir function, which requires you to write (f-mkdir "some" "sub" "directory") instead.
  • A shortdoc implementation is available for f.el for Emacs 28 and above. Simply execute M-x shortdoc f and explore your new built-in cheat sheet!

Some fixes, improvements, and clarifications were also implemented in f.el 0.21. To get a full list, head over to the changelog.

It is important to note, however, that support for Emacs 24 is dropped with this release. If you are still using this Emacs version, I urge you to upgrade to at least Emacs 25. And honestly, you are missing out on a lot of things; just take a look at my previous blog post on what Emacs 29 brought to the table!

What now?

As you can see, only a few things were added to f.el in the six and a half years between the 0.20 release and the present 0.21 release. Personally, I would say f.el is pretty complete right now, with a minimal amount of bugs. Of course, I am not saying we are entirely bug-free; we still have a few issues open on the GitHub repository.

However, there are still things to do! Here are some suggestions if you feel like contributing.

Help with open PRs

At the time of writing this blog post, there are four PRs open. You can weigh in if a decision is needed, or you can help in case of technical difficulties from the PR's author.

Implement new features

Issue #18 suggests the creation of f.el functions for the chmod and chown utilities. If you feel like you can write such functions, feel free to contribute!

Improvements over existing features

Some functions may not be complete, or may lack some features. For instance, f-hidden-p only works with the UNIX-style of hiding files and directories by prepending their name with a dot, like .file.el or .hidden/file.el. This does not necessarily work on Windows, but so far, attempts at creating a Windows-native solution resulted in failure due to the time required to make a Windows-native request on whether a file or folder is hidden. If you find a performant solution, feel free to contribute!

Documentation improvement

While I don't have any specific example in mind, if you feel like some documentation could be improved, whether its content or how it is presented, you are very welcome to contribute to the project.

Conclusions

I became the maintainer of f.el some two years ago, as Johan Andersson, the owner of f.el, lacked the time to maintain it himself. It has definitely been an interesting experience. Although I currently have a lot less time to make things for Emacs myself, I am definitely looking forward to what will become of this library, and I hope I will be able to accompany anyone willing to contribute to this project.

Thank you to everyone who made this new version possible! Let's do our best for f.el version 0.22!

Emacs 29 is nigh! What can we expect?   dev emacs release

It was announced a couple of hours ago: Emacs 29's branch is now cut from the master branch! This means the emacs-29 branch will from now on no longer receive any new features, only bug fixes.

So, what's new with this major release? I skimmed over the NEWS file, and here are the changes I find interesting, and even exciting for some.

Article updated on December 22nd at 14:05 UTC

Major features

A couple of major improvements will most likely be present; here are the ones that stand out the most for me.

Eglot is now part of Emacs core

During the last couple of years, LSP has given text editors incredible capabilities, giving them IDE-like features relatively easily. Aside from Elisp development, most of the code I write is now done with the help of an LSP server running alongside Emacs, analysing my code, and suggesting and performing changes and actions for me.

Several integrations of LSP exist for Emacs, such as LSP Mode, Eglot, and lsp-bridge. Among the three, Eglot is now part of Emacs core! No longer do you need to install a package: simply register an LSP server, and autocompletion, documentation, error detection, and other features will become available right away!

I must admit I don't really know Eglot; I personally use LSP Mode, but with this addition to Emacs core, I might attempt the switch.

Tree-Sitter is also part of Emacs core

In case you didn't know, Emacs's syntax highlighting is currently based on a system of regexes. Although it is not the worst thing to use, it's not the best either, and it can become quite slow on larger files.

Tree-Sitter parses programming languages into a concrete syntax tree. From there, not only can syntax highlighting be done at high speed, but a much deeper analysis of the code is possible, and actions such as syntax manipulation can also be achieved, since the syntax tree itself is available as an object which can be manipulated!

In case you want some more information on Tree-Sitter itself, you can check out the official Tree-Sitter website, or even this talk given by Tree-Sitter's creator, Max Brunsfeld.

Well, this is now a native solution in Emacs! Currently, Emacs's Tree-Sitter integration supports the following major modes:

  • bash-ts-mode
  • c-ts-mode
  • c++-ts-mode
  • csharp-ts-mode
  • css-ts-mode
  • java-ts-mode
  • js-ts-mode
  • json-ts-mode
  • python-ts-mode
  • typescript-ts-mode

Tree-Sitter also holds a special status in the new emacs-29 branch for now, since new features can still be added to it, as its merge into the master branch is still recent. So we might see the list of Tree-Sitter major modes get a bit longer yet, especially considering Tree-Sitter tries to make adding new languages relatively easy.

If you can't wait to test Tree-Sitter, another package is already available for Emacs which you can use right now. Just be aware it is not the same package as the one that got integrated into Emacs.

Install packages from source with package.el

If you use Straight, you might be familiar with installing packages directly from their Git repository. Well, good news: it is now possible to install packages from Git using Emacs's built-in packaging system package.el! It can be done with the new function package-vc-install, and packages installed that way can be updated with package-vc-update or package-vc-update-all.
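A quick sketch of the workflow (the URL and package name are placeholders, not a real package):

```emacs-lisp
;; Install a package straight from its Git repository:
(package-vc-install "https://example.com/someone/some-package")
;; Later, update it, or update every VC-installed package at once:
(package-vc-update 'some-package)
(package-vc-update-all)
```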

On the topic of package.el, there is also the new function package-report-bug, which allows Emacs users to report bugs to the developers of a package directly from Emacs! Be aware though, it only works for packages installed through package.el. Since I'm a use-package and straight.el user, there is no package listed when I invoke the command.

Org mode 9.6

As confirmed by one of org-mode's maintainers, Bastien Guerry, on a French-speaking Emacs mailing list, Org 9.6 is set to be part of Emacs 29! There is an official article on this release, which is already available on GNU ELPA!

use-package in Emacs core

It has also been confirmed on the Emacs development mailing list that use-package, an awesome package configuration macro, is set to be part of Emacs 29, although it initially wasn't included in the emacs-29 branch.

Pure GTK Emacs is here for Wayland!

One of the major issues Emacs had on Linux was its dependency on Xorg when running in GUI mode. When running Xorg, it's not really an issue, but Wayland has become more and more common in the last few years, and even with the existence of XWayland, this became an annoyance.

Well, fear not, for pure GTK Emacs is here! It can now be built Xorg-free and run natively in Wayland!

Be aware though that Wayland is basically the only use case for pure GTK Emacs. If you don't use Wayland, Emacs will display a warning message, as it will most likely cause issues if you are running Xorg. In my case, I sometimes see some ghost text when the content of a buffer updates (I still need pure GTK though, since I alternate between Xorg and Wayland).

Compile EmacsLisp files ahead of time

With Emacs 28 came the ability to natively compile EmacsLisp if your Emacs was built with the ability to do so, using GCC's JIT compilation library. This results in quite an impressive boost in performance, which made Emacs much snappier than it was before. The only issue I had was that Emacs would only compile its EmacsLisp files when they were loaded for the first time.

This is no longer the case! If you now compile Emacs with --with-native-compilation=aot, Emacs's own EmacsLisp files will be natively compiled along with Emacs itself! Be aware though, it can be slow on most machines, so the time you save by not compiling these files when launching Emacs for the first time is basically transferred to when compiling Emacs itself. Is it worth your time? In my case, I would say yes, because when I compile Emacs, I'm generally not in a hurry. But in your case? Well, test it out and see for yourself.

Native access to SQLite databases

Emacs can now be built with native support for SQLite via the sqlite3 library. In fact, this is now the default behaviour, since you need to pass --without-sqlite3 to Emacs's build configuration script in order to prevent it.

This comes with a new sqlite-mode which allows you to explore SQLite databases within Emacs and to interact with them. Check out the sqlite-mode-open-file function!

HaikuOS support

For all three HaikuOS users out there, good news, you now have access to Emacs! (In all seriousness, I should check out HaikuOS one day)

Moreover, it also supports an optional window-system port to Haiku with --with-be-app. Be aware, you will need the Haiku Application Kit development headers and a C++ compiler. Otherwise, Emacs will only run in the terminal. If you want to also add Cairo to the mix, you can add --with-be-cairo.

New major mode for C#

csharp-mode is now a native major mode for Emacs and is based on cc-mode.

Minor features

Its easier to use Emacs in scripts!

If you like to write scripts, and especially Lisp scripts, Emacs now supports the option -x to execute scripts written in EmacsLisp. When executing such a script with #!/usr/bin/emacs -x as its shebang, Emacs will not read its init file (like with -Q) and will instead execute the Elisp code right away and return the last value to the caller of the script (most likely the shell you called the script from).
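A toy script following that convention (the shebang path is the one given above; adjust it to wherever your Emacs binary actually lives):

```emacs-lisp
#!/usr/bin/emacs -x
;; Runs without loading the init file, like -Q; the value of the last
;; expression is returned to the caller.
(message "Hello from an EmacsLisp script!")
(+ 1 2)
```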

TRAMP natively supports Docker, Podman, and Kubernetes

Three new connections are now available for TRAMP:

  • docker
  • podman
  • kubernetes

You will now be able to access your containerized environment right from Emacs without the need to write custom code.

Custom user directory

It is now easier to launch custom Emacs profiles without the need for tools such as chemacs2, thanks to the new --init-directory flag. It sets user-emacs-directory to any directory you choose, including the init.el that comes along with it. Yet another reason for me not to use a .emacs file, but the init.el file instead.
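For instance (the profile directory is a hypothetical example):

```
emacs --init-directory ~/emacs-profiles/testing
```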

Support for Webp images

For quite some time, Emacs has been able to display images, but not webp images. Well, this is now fixed! And in fact, support for webp images became the default behaviour, since you need to pass --without-webp to Emacs's configuration script to disable webp support.

C++ mode now supports the C++20 standard

Yep. There's nothing more to say, really. Happy coding!

Better handling of .pdmp files

Emacs has had, for a few versions, the ability to dump its state into a pdmp file for faster startup times. Well now, when creating such a file, Emacs will include a fingerprint of its current state in the file name, although it will still prioritize an emacs.pdmp file if it exists.

Better mouse and touchpad support

Emacs now uses XInput 2, which enables Emacs to support more input events, such as touchpad events. For instance, by default, a pinch gesture on a touchpad increases or decreases the text size of the current buffer. This is thanks to the new event pinch, which comes along with touch-end.

Unicode 15.0 and emojis

Emacs now supports Unicode 15.0, which is currently the latest Unicode version. Although this is not directly related, quite a few new emoji-related features have been introduced. The new prefix C-x 8 e now leads to a few new commands related to emojis:

C-x 8 e e or C-x 8 e i
Insert an emoji (emoji-insert)
C-x 8 e s
Search an emoji (emoji-search)
C-x 8 e l
List all emojis in a new buffer (emoji-list)
C-x 8 e r
Insert a recently inserted emoji (emoji-recent)
C-x 8 e d
Describe an emoji (emoji-describe)
C-x 8 e + and C-x 8 e -
Increase and decrease the size of any character, but especially emojis (emoji-zoom-increase and emoji-zoom-decrease respectively)

There is also the new input method emoji which allows you to type for instance :grin: in order to get the emoji 😁.

True background transparency

Up until recently, if you wanted transparency with Emacs, you had no choice but to make the whole frame transparent, including text and images.

Thanks to the frame parameter alpha-background and its related alphaBackground X resource, it is now possible to set transparency only for the frames background without affecting any of the other elements on screen.
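A configuration sketch for background-only transparency (85 is an arbitrary opacity value, with 0 fully transparent and 100 opaque):

```emacs-lisp
;; Current frame only:
(set-frame-parameter nil 'alpha-background 85)
;; All future frames:
(add-to-list 'default-frame-alist '(alpha-background . 85))
```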

WebKit inspector in Emacs WebKit widget browser

You can now access the WebKit inspector when using the WebKit widget browser in Emacs, given you are using a version of Emacs which has been compiled with it. I wish there was a keybinding or at least a function for it, but apparently you can only open it with a right click and select Inspect Element. Still nice to have.

Some news for Windows

Although it has been available for Linux users since Emacs 26.1, Windows finally has access to double-buffering to reduce display flicker. If you wish to disable it, you can set the frame parameter inhibit-double-buffering to a non-nil value.

Emacs also follows Windows dark mode with Windows 10 (version 1809) and onwards.

Emacs also now uses Windows native API to render images. This includes BMP, GIF, JPEG, PNG, and TIFF images. Other formats, however, still rely on other dependencies and libraries to properly work, such as Webp images.

Whats next?

With Emacs 29 being cut, development on the master branch will now go towards Emacs 30. Is there anything we can expect yet?

It's still very early to say: most stable features merged into master went to Emacs 29, and only the feature/pkg and feature/improved-lock-narrowing branches seem to have received commits less than a week prior to the day of writing this. I do not know the status of other branches that received commits during the past few weeks, such as feature/package+vc or feature/eglot2emacs (which I assume both got merged).

However, there are currently talks about including use-package in Emacs! I'm a bit disappointed it won't make it into Emacs 29, but progress is being made on scratch/use-package, and you can always check the mailing list for its status, such as here. Update: Rejoice! As mentioned above, use-package is actually set to land in Emacs 29!

[EN] Automatic Meaningful Custom IDs for Org Headings   emacs orgmode dev

Spoiler alert: I will just modify a bit of code that already exists. Go directly to the bottom if you want the solution, or read the whole post if you are interested in how I got there.

Update 2021-11-22

I've put the code presented here into a complete package. You can find it in this repository or in its GitHub mirror (be aware the latter may not be as up-to-date as the former). Installation instructions are in the README.

The issue

About two to three years ago, as I was working on a project that was meant to be published on the internet, I looked for a solution to get fixed anchor links to my various headings when I performed HTML exports. As some of you may know, by default, when an Org file is exported to HTML, a random ID is generated for each heading, and this ID is used as its anchor. Here's a quick example of a simple org file:

#+title: Sample org file
* First heading
  Reference to a subheading
* Second heading
  Some stuff written here
** First subheading
   Some stuff
** Second subheading
   Some other stuff
Example org file

And this is the result once exported to HTML (with a lot of noise removed from <head>):

<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">

<head>
    <title>Sample org file</title>
    <meta name="generator" content="Org mode" />
    <meta name="author" content="Lucien Cartier-Tilet" />
</head>

<body>
    <div id="content">
        <h1 class="title">Sample org file</h1>
        <div id="outline-container-orgd8e6238" class="outline-2">
            <h2 id="orgd8e6238"><span class="section-number-2">1</span> First heading</h2>
            <div class="outline-text-2" id="text-1">
                <p>
                    Reference to a subheading
                </p>
            </div>
        </div>
        <div id="outline-container-org621c39a" class="outline-2">
            <h2 id="org621c39a"><span class="section-number-2">2</span> Second heading</h2>
            <div class="outline-text-2" id="text-2">
                <p>
                    Some stuff written here
                </p>
            </div>
            <div id="outline-container-orgae45d6b" class="outline-3">
                <h3 id="orgae45d6b"><span class="section-number-3">2.1</span> First subheading</h3>
                <div class="outline-text-3" id="text-2-1">
                    <p>
                        Some stuff
                    </p>
                </div>
            </div>
            <div id="outline-container-org9301aa9" class="outline-3">
                <h3 id="org9301aa9"><span class="section-number-3">2.2</span> Second subheading</h3>
                <div class="outline-text-3" id="text-2-2">
                    <p>
                        Some other stuff
                    </p>
                </div>
            </div>
        </div>
    </div>
</body>

</html>
Output HTML file

As you can see, all the anchors are in the format org[a-f0-9]{7}. First, this is not really meaningful if you want to read the anchor and guess where it will lead you. Secondly, these anchors change each time you export your Org file to HTML. If I want to share a URL to a specific heading on my website, … well, I can't: it will change the next time I update the document. And I don't want to set a CUSTOM_ID property for each one of my headings manually. So, what to do?

A first solution

A first solution I found came from this blog post, where Lee Hinman described the very same issue they had and wrote some Elisp code to remedy it (it's a great read, go take a look). It worked, and for some time I used their code in my Emacs configuration file to generate unique custom IDs for my Org headings. Basically, the code detects whether auto-id:t is set in an #+OPTIONS header. If it is, it iterates over all the Org headings and, for each of them, inserts a CUSTOM_ID made from a UUID generated by Emacs. And tadah! Each heading gets a h-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12} custom ID that won't change the next time we export our Org file to HTML. It is added when we save the file, and only to headings which don't already have a CUSTOM_ID property. Wohoo!

Except…

These headers are not meaningful

OK, alright, that's still a huge step forward: we don't have to type any CUSTOM_ID property manually any more, it's done automatically for us. But when I send someone a link like https://langue.phundrak.com/eittland#h-76fc0b91-e41c-42ad-8652-bba029632333, the first reaction to this URL is often something along the lines of “What the fuck?”. And they're right, the anchor part of this URL is unreadable. How am I supposed to guess it links to the description of the vowels of the Eittlandic language? (That's a constructed language I'm working on, you won't find anything about it outside my website. Also, this link is dead now, it got simplified thanks to Vuepress.)

So, I went back to my Emacs configuration file, and through some trial and error I finally found a way to get a consistent custom ID which is readable and set automatically. With the current state of my code, what you get is the complete path of the Org heading, with spaces replaced by underscores and headings separated by dashes, plus a final unique identifier taken from an Emacs-generated UUID. Now, the same link as above looks like https://langue.phundrak.com/eittland#Aperçu_structurel-Inventaire_phonétique_et_orthographe-Voyelles_pures-84f05c2c. It won't be much more readable to you if you don't speak French, but you can tell it is way better than what we had before. I even added a safety net by replacing all forward slashes with dashes. The final ID ensures the path will be unique in case we had two identical paths in the Org file for one reason or another.

The modifications I made to the first function, eos/org-id-new, are minimal: I just split the UUID and keep its first part. This is basically a way to shorten it.

(defun eos/org-id-new (&optional prefix)
  "Create a new globally unique ID.

An ID consists of two parts separated by a colon:
- a prefix
- a   unique   part   that   will   be   created   according   to
  `org-id-method'.

PREFIX  can specify  the  prefix,  the default  is  given by  the
variable  `org-id-prefix'.  However,  if  PREFIX  is  the  symbol
`none', don't  use any  prefix even if  `org-id-prefix' specifies
one.

So a typical ID could look like \"Org-4nd91V40HI\"."
  (let* ((prefix (if (eq prefix 'none)
                     ""
                   (concat (or prefix org-id-prefix)
                           "-"))) unique)
    (if (equal prefix "-")
        (setq prefix ""))
    (cond
     ((memq org-id-method
            '(uuidgen uuid))
      (setq unique (org-trim (shell-command-to-string org-id-uuid-program)))
      (unless (org-uuidgen-p unique)
        (setq unique (org-id-uuid))))
     ((eq org-id-method 'org)
      (let* ((etime (org-reverse-string (org-id-time-to-b36)))
             (postfix (if org-id-include-domain
                          (progn
                            (require 'message)
                            (concat "@"
                                    (message-make-fqdn))))))
        (setq unique (concat etime postfix))))
     (t (error "Invalid `org-id-method'")))
    (concat prefix (car (split-string unique "-")))))

Next, we have the actual generation of the custom ID. As you can see, the let has been replaced by a let*, which allowed me to create the ID with the variables orgpath and heading. The former concatenates the path to the heading, joined by dashes; heading then appends the name of the current heading to orgpath with a dash if orgpath is not empty. The result is turned into a slug: some characters such as forward slashes, tildes, and square brackets are deleted, and all whitespace is replaced by underscores. heading is then passed as an argument to the function described above, which concatenates the unique ID to it.

(defun eos/org-custom-id-get (&optional pom create prefix)
  "Get the CUSTOM_ID property of the entry at point-or-marker POM.

If POM is nil, refer to the entry at point. If the entry does not
have an CUSTOM_ID, the function returns nil. However, when CREATE
is non nil, create a CUSTOM_ID if none is present already. PREFIX
will  be passed  through to  `eos/org-id-new'. In  any case,  the
CUSTOM_ID of the entry is returned."
  (interactive)
  (org-with-point-at pom
    (let* ((orgpath (mapconcat #'identity (org-get-outline-path) "-"))
           (heading (replace-regexp-in-string
                     "/\\|~\\|\\[\\|\\]" ""
                     (replace-regexp-in-string
                      "[[:space:]]+" "_" (if (string= orgpath "")
                                  (org-get-heading t t t t)
                                (concat orgpath "-" (org-get-heading t t t t))))))
           (id (org-entry-get nil "CUSTOM_ID")))
      (cond
       ((and id
             (stringp id)
             (string-match "\\S-" id)) id)
       (create (setq id (eos/org-id-new (concat prefix heading)))
               (org-entry-put pom "CUSTOM_ID" id)
               (org-id-add-location id
                                    (buffer-file-name (buffer-base-buffer)))
               id)))))

The rest of the code is unchanged, here it is anyway:

(defun eos/org-add-ids-to-headlines-in-file ()
  "Add CUSTOM_ID properties to all headlines in the current file
which do not already have one.

Only adds ids if the `auto-id' option is set to `t' in the file
somewhere. ie, #+OPTIONS: auto-id:t"
  (interactive)
  (save-excursion
    (widen)
    (goto-char (point-min))
    (when (re-search-forward "^#\\+OPTIONS:.*auto-id:t"
                             (point-max)
                             t)
      (org-map-entries (lambda ()
                         (eos/org-custom-id-get (point)
                                                'create))))))

(add-hook 'org-mode-hook
          (lambda ()
            (add-hook 'before-save-hook
                      (lambda ()
                        (when (and (eq major-mode 'org-mode)
                                   (eq buffer-read-only nil))
                          (eos/org-add-ids-to-headlines-in-file))))))

Note that you will need the package org-id to make this code work. You simply need to add the following code before the code I shared above:

(require 'org-id)
(setq org-id-link-to-org-use-id 'create-if-interactive-and-no-custom-id)

And that's how my links are now way more readable and persistent! The only downside I found is that when you move a heading (and its path changes), or when you modify the heading itself, the custom ID is not automatically updated. I could fix that by regenerating the custom ID on each save, regardless of whether one already exists, but at the risk of overwriting manually set IDs.

Linux   @linux

[EN] My YouTube subscriptions as an RSS feed   linux dev tutorial

The Problem

I'm sure you've been in the same situation before: you go on YouTube because you want to watch a video, maybe two, from your subscriptions. You open the first one. Oh great, an unskippable fifteen-second ad. And another one! OK, the video starts. It gets cut a couple of times by other ads of varying lengths. Oh, but what's this? This recommended video looks nice! And before you know it, your whole afternoon and evening went by painfully watching videos on YouTube's atrocious video player. You lost focus.

My Solution: mpv + RSS

Wouldn't it be nice if it were possible to watch these videos with a full-fledged video player over which you have complete control? One you could customize to your heart's content? One which won't secretly track what you watch?

Oh right, mpv! It supports most video formats you can think of, and thanks to its interoperability with youtube-dl, you can also watch videos from an extremely wide variety of websites! So why not YouTube?

Now, the question is how to get rid of YouTube's interface. The answer is actually quite simple: let's use an RSS feed. With YouTube's RSS feeds, you will receive in your RSS reader the link to each video along with its thumbnail and description. From there you can copy the link and open it with mpv with a command like this:

mpv "https://www.youtube.com/watch?v=xym2R6_Qd7c"
Channel RSS

Now, how do you get the RSS feed of a channel? The answer is quite simple. The base URL for a YouTube channel RSS feed is https://www.youtube.com/feeds/videos.xml?channel_id=, to which you simply append the channel ID. For instance, if you want to follow Tom Scott this way, extract the part of his channel URL after /channel/ and append it to the URL mentioned above, and TADAH! you get an RSS feed for his channel!

https://www.youtube.com/feeds/videos.xml?channel_id=UCBa659QWEk1AI4Tg--mrJ2A

Be careful to select the channel ID only if it comes after /channel/, though! The part after a /c/ will not work. If you end up on the URL https://www.youtube.com/c/TomScottGo, simply click on a random video, then click on the channel's name. This should bring you back to the channel, but with an important difference: the URL is now https://www.youtube.com/channel/UCBa659QWEk1AI4Tg--mrJ2A.
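If you prefer the command line, here is a rough sketch of the same extraction (the extract_channel_id helper is my own hypothetical naming; it assumes the channel page's HTML still embeds a "channelId":"UC…" string, which YouTube may change at any time):

```shell
# Hypothetical helper: read YouTube channel page HTML on stdin and
# print the first embedded channel ID it finds.
extract_channel_id() {
    grep -oE '"channelId":"[^"]*"' | head -n 1 | cut -d'"' -f4
}

# Usage (requires curl and network access):
#   curl -s 'https://www.youtube.com/c/TomScottGo' | extract_channel_id
```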

The really nice thing with this setup is you don't actually need to subscribe to a channel: your RSS feed already does that for you! And since many RSS feed readers let you categorize your feeds, you can even categorize your subscriptions!

Playlist RSS

It is also possible to follow not only a channel but also a playlist. For that, use https://www.youtube.com/feeds/videos.xml?playlist_id= as your base URL instead, and add the ID of the playlist you want to follow. For instance, the URL of Tom Scott's playlist for Citation Needed Season 7 is https://www.youtube.com/playlist?list=PL96C35uN7xGI15-QbtUD-wJ5-G8oBI-tG, which means you need to keep PL96C35uN7xGI15-QbtUD-wJ5-G8oBI-tG and put it into the URL like so:

https://www.youtube.com/feeds/videos.xml?playlist_id=PL96C35uN7xGI15-QbtUD-wJ5-G8oBI-tG
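Since the two feed URLs only differ by their query parameter, you can wrap the construction in a tiny shell helper (the yt_feed function is my own hypothetical naming, not an official tool):

```shell
# Hypothetical helper: print the RSS feed URL for a channel or playlist ID.
yt_feed() {
    case "$1" in
        channel)  printf 'https://www.youtube.com/feeds/videos.xml?channel_id=%s\n' "$2" ;;
        playlist) printf 'https://www.youtube.com/feeds/videos.xml?playlist_id=%s\n' "$2" ;;
        *)        echo "usage: yt_feed channel|playlist ID" >&2; return 1 ;;
    esac
}

yt_feed channel UCBa659QWEk1AI4Tg--mrJ2A
yt_feed playlist PL96C35uN7xGI15-QbtUD-wJ5-G8oBI-tG
```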

Which RSS reader to go with?

If you know me, you'll know I am extremely biased towards Emacs, so of course I'll recommend Elfeed to any Emacs user (my relevant configuration is here). I even wrote an advice around elfeed-show-visit to ensure YouTube videos are opened with mpv instead of my web browser.
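I won't reproduce my exact configuration here, but a minimal sketch of such an advice could look like this (assuming Elfeed is installed and mpv is in your PATH; the my/ name is hypothetical):

```emacs-lisp
;; Minimal sketch, not my actual configuration: when visiting an Elfeed
;; entry, hand YouTube links to mpv instead of the browser.
(defun my/elfeed-visit-with-mpv (orig-fun &rest args)
  "Open YouTube entries in mpv, defer to ORIG-FUN otherwise."
  (let ((link (elfeed-entry-link elfeed-show-entry)))
    (if (string-match-p "youtube\\.com" link)
        (start-process "elfeed-mpv" nil "mpv" link)
      (apply orig-fun args))))

(advice-add 'elfeed-show-visit :around #'my/elfeed-visit-with-mpv)
```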

If you're not into Emacs, or not that into Emacs, you can also try other alternatives such as NewsFlash, a very nice RSS reader written in GTK for Linux. I may not always agree with DistroTube, but he made a very nice video presenting this piece of software. (Remember: right-click, then mpv "the url here"!)

The News app for Nextcloud is also very neat; I recommend using it.

You can also get your RSS feeds in your terminal with Newsboat. Not really my cup of tea, but I can see why some people enjoy it.

Improving a bit the mpv tooling

You might have heard it, but youtube-dl hasn't been doing great recently. The tool has become slow, and it lacks quite a few features it could really benefit from. While it is important to acknowledge its historical importance, I think it is now time to move on, and its successor shall be yt-dlp. In my experience, this youtube-dl fork is much faster than youtube-dl itself, on top of providing additional features such as SponsorBlock integration.
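If you only care about mpv, you can also point its ytdl hook directly at yt-dlp without replacing youtube-dl system-wide. A minimal sketch for your mpv configuration (assuming yt-dlp is in your PATH):

```
# ~/.config/mpv/mpv.conf
# Tell mpv's ytdl hook to use yt-dlp instead of youtube-dl.
# If you already set script-opts elsewhere, use script-opts-append instead.
script-opts=ytdl_hook-ytdl_path=yt-dlp
```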

How do you replace youtube-dl with yt-dlp, then? If you use Arch Linux or one of its derivatives (I hope not Manjaro, though), you can simply install yt-dlp-drop-in from the AUR.

paru -S yt-dlp-drop-in
# or if you prefer yay
yay -S yt-dlp-drop-in
# or whichever AUR helper you prefer, as long as it is NOT yaourt

If you are not an Arch Linux user, check out this article; it will help you.