News from the Ada programming language world

Unlocking the Power of OpenAI in Ada programs

1 October 2023 at 16:33

OpenAI(https://openai.com/) provides a service that can be queried using REST requests. The service API can be grouped as follows:

  • OpenAI's GPT (generative pre-trained transformer) is the well-known text generation based on text inputs. It gives access to several AI models that have been trained to understand natural language and code.
  • Image generation gives access to DALL.E(https://platform.openai.com/docs/models/dall-e) for generating images based on a text description or modifying an existing image based on a text description.
  • Speech to text gives access to the Whisper model(https://openai.com/research/whisper) that converts audio recordings into text (several languages are supported).
  • OpenAI's text embeddings measure the relatedness of text strings.

    For a detailed description of these APIs, have a look at the OpenAI API Introduction(https://platform.openai.com/docs/introduction) document. To use these APIs, you'll first need to register on the service and get an API key that will grant access to the operations.

    The library was generated by the OpenAPI(https://github.com/OpenAPITools/openapi-generator) code generator from the OpenAPI description of the OpenAI service, and it uses the OpenAPI Ada(https://github.com/stcarrez/swagger-ada) library to make HTTP requests and perform JSON serialization and deserialization. Each OpenAI operation is made available through an Ada procedure which takes care of building the request from the values filled in an Ada record, submitting it to the OpenAI server and extracting the response into another Ada record. Every request submitted to the OpenAI server is therefore strongly typed!

      1. Setup

    To use the library, you should use Alire to set up your project and run:

    ```
    alr index add git+https://gitlab.com/stcarrez/awa-alire-index.git name awa
    alr with openai
    ```

    For the HTTP connection, you can either use AWS(https://github.com/AdaCore/aws) or curl(https://curl.se/) and run one of the following commands:

    ```
    alr with utilada_curl
    alr with utilada_aws
    ```

      2. Initialization

    First, make sure you import at least the following Ada packages:

    ```
    with Util.Http.Clients.Curl;  --  replace Curl with AWS if needed
    with OpenAPI.Credentials.OAuth;
    with OpenAI.Clients;
    with OpenAI.Models;
    ```

    If you want to use curl(https://curl.se/), the initialization should use the following:

    ```
    Util.Http.Clients.Curl.Register;
    ```

    But if you want to use AWS(https://github.com/AdaCore/aws), you will initialize with:

    ```
    Util.Http.Clients.AWS.Register;
    ```

    After the initialization is done, you will declare the `OpenAI` client instance to access the API operations. The OpenAI(https://openai.com/) service uses an OAuth bearer API key to authenticate requests made on the server. We will need an `OAuth2_Credential_Type` instance represented by `Cred` below.

    ```
    Cred   : aliased OpenAPI.Credentials.OAuth.OAuth2_Credential_Type;
    Client : OpenAI.Clients.Client_Type;
    ```

      3. Credential setup

    For the credential setup you will first need to get your access key from your account. Once you have your key as a `String`, you can configure the `Cred` object and tell the client connection entry point which credentials to use:

    ```
    Api_Key : constant String := ...;

    Cred.Bearer_Token (Api_Key);
    Client.Set_Credentials (Cred'Unchecked_Access);
    ```


      4. OpenAPI client setup

    The last step before you can make requests is to set up the server base URL used for the REST requests:

    ```
    Client.Set_Server ("https://api.openai.com/v1");
    ```


      5. API for chat

    `Create_Chat` is the main operation for chat generation. The request is represented by the `ChatRequest_Type` type, which describes the OpenAI chat model to use and the query parameters. The request can be filled with a complete conversation, which means it is possible to call the operation several times with previous queries and responses to continue the chat conversation.

    ```
    C     : OpenAI.Clients.Client_Type;
    Req   : OpenAI.Models.ChatRequest_Type;
    Reply : OpenAI.Models.ChatResponse_Type;
    ...
    Req.Model := OpenAPI.To_UString ("gpt-3.5-turbo");
    Req.Messages.Append ((Role    => OpenAPI.To_UString ("user"),
                          Content => Prompt,
                          others  => <>));
    Req.Temperature := 0.3;
    Req.Max_Tokens := (Is_Null => False, Value => 1000);
    Req.Top_P := 1.0;
    Req.Frequency_Penalty := 0.0;
    Req.Presence_Penalty := 0.0;
    C.Create_Chat (Req, Reply);
    ```
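    Because the request carries the whole conversation, a multi-turn chat can be sketched by appending the assistant's reply and the next user message before calling `Create_Chat` again. This is a sketch based on the types shown above; the exact role handling and the `First_Element` accessor are assumptions:

    ```Ada
    --  Sketch: continue the conversation with a follow-up question.
    --  Assumes Reply.Choices is not empty (check it in real code).
    Req.Messages.Append ((Role    => OpenAPI.To_UString ("assistant"),
                          Content => Reply.Choices.First_Element.Message.Content,
                          others  => <>));
    Req.Messages.Append ((Role    => OpenAPI.To_UString ("user"),
                          Content => OpenAPI.To_UString ("Can you give an example?"),
                          others  => <>));
    C.Create_Chat (Req, Reply);
    ```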


    Upon successful completion, the reply contains a list of choices holding the text of the conversation. You can iterate over the list with the following code extract. Beware that the returned string uses UTF-8 encoding and may need a conversion to a `Wide_Wide_String`.


    ```
    for Choice of Reply.Choices loop
       declare
          Utf8 : constant String := OpenAPI.To_String (Choice.Message.Content);
       begin
          Put_Line (Ada.Strings.UTF_Encoding.Wide_Wide_Strings.Decode (Utf8));
       end;
    end loop;
    ```


    The complete chat example is available at: OpenAI Chat(https://gitlab.com/stcarrez/openai-chat) .

      6. Generating an image

    The image generation is based on the DALL.E(https://platform.openai.com/docs/models/dall-e) model generator. An image is created by populating a request object, making the call (an HTTP POST) and getting the result in a response. Both request and response are represented by full Ada types and are strongly typed.

    The request contains a prompt string, which must be provided: it is the textual description of the image to create. An optional parameter controls the number of images to create (from 1 to 10). Another optional parameter controls the dimensions of the final image; the OpenAI API limits the possible values to `256x256`, `512x512` and `1024x1024`. The image creation is represented by the `Create_Image` procedure and we can call it with our request instance:

    ```
    Req   : OpenAI.Models.CreateImagesRequest_Type;
    Reply : OpenAI.Models.ImagesResponse_Type;
    ...
    Req.Prompt := Prompt;
    Req.N := (Is_Null => False, Value => 3);
    Req.Size := (Is_Null => False, Value => "512x512");
    C.Create_Image (Req, Reply);
    ```


    Once the call has finished, it produces a response which basically contains a list of URLs, one for each generated image.


    ```
    for Url of Reply.Data loop
       if not Url.Url.Is_Null then
          Ada.Text_IO.Put_Line (OpenAPI.To_String (Url.Url.Value));
       end if;
    end loop;
    ```


    The complete image generation example is available at: OpenAI Image Generation(https://gitlab.com/stcarrez/openai-image) .

    For more information about the image creation, have a look at the OpenAI Images API reference(https://platform.openai.com/docs/api-reference/images).

      7. Conclusion

    The Ada OpenAI(https://gitlab.com/stcarrez/ada-openai) library is eagerly awaiting your Ada programming creativity. You can try using the chat generation example to ask the AI to generate some Ada code, but you'll likely be disappointed by the poor quality of Ada programs that OpenAI(https://openai.com/) generates. However, you may still find some small benefits in using it across different domains. I encourage you to give it a try for your enjoyment and pleasure.

    IO stream composition and serialization with Ada Utility Library

    5 March 2022 at 22:48

    To provide this IO stream composition, the Ada Utility Library(https://github.com/stcarrez/ada-util) defines two limited Ada interfaces: `Input_Stream` and `Output_Stream`. The `Input_Stream` interface defines only a `Read` procedure, while the `Output_Stream` interface defines the `Write`, `Flush` and `Close` procedures. By implementing these interfaces, it is possible to provide stream objects that can be combined together.

    [IO Stream Composition and Serialization](Ada/ada-util-streams.png)

    The Ada Utility Library(https://github.com/stcarrez/ada-util) provides stream types that implement these interfaces so that it is possible to read or write files, sockets and system pipes. In many cases, the concrete type implements both interfaces so that both reading and writing are possible. This is the case for `File_Stream`, which reads or writes a file, and for `Socket_Stream`, which handles sockets by using GNAT sockets. `Pipe_Stream`, for its part, launches an external program and either reads its output, available through the `Input_Stream`, or feeds the external program with some input by using the `Output_Stream`.

    The Ada Utility Library(https://github.com/stcarrez/ada-util) also provides stream objects that transform the data through various encoders. The library supports the following encoders:

    • Base 16, Base 64,
    • AES encryption or decryption,
    • LZMA compression or decompression

    Other encoders could be added and it is always possible to provide custom transformations by implementing the `Input_Stream` and `Output_Stream` interfaces.

    The last part that completes the IO stream framework is the serialization framework. That framework defines interfaces and types to read or write CSV, XML, JSON or HTTP form streams. It uses the `Input_Stream` or `Output_Stream` interfaces to read or write the content, and it defines its operations in a way that allows these streams to be read or written independently of their representation format.

      1. LZMA Compression

    Let's have a look at compressing a file by using the `Util.Streams` framework. First we need a `File_Stream` configured to read the file to compress, and another `File_Stream` configured for writing, to save the result in another file. The first file is opened by using the `Open` procedure with the `In_File` mode, while the second one uses `Create` with the `Out_File` mode. `File_Stream` uses the standard Ada `Stream_IO` package to access files.

    ```Ada
    with Util.Streams.Files;

    In_Stream  : aliased Util.Streams.Files.File_Stream;
    Out_Stream : aliased Util.Streams.Files.File_Stream;
    ...
    In_Stream.Open (Mode => Ada.Streams.Stream_IO.In_File, Name => Source);
    Out_Stream.Create (Mode => Ada.Streams.Stream_IO.Out_File, Name => Destination);
    ```


    In the middle of these two streams, we are going to use a `Compress_Stream` whose job is to compress the data and write the compressed result to a target stream. The compression stream is configured with the `Initialize` procedure to write on the `Out_Stream` file stream. The compression stream needs a buffer, whose size is configured with the `Size` parameter.

    ```Ada
    with Util.Streams.Lzma;

    Compressor : aliased Util.Streams.Lzma.Compress_Stream;
    ...
    Compressor.Initialize (Output => Out_Stream'Unchecked_Access, Size => 32768);
    ```


    To feed the compressor stream with the input file, we are going to use the `Copy` procedure. This procedure reads the content from the `In_Stream` and writes what is read to the `Compressor` stream.


    ```Ada
    Util.Streams.Copy (From => In_Stream, Into => Compressor);
    ```


    Flushing and closing the files is automatically handled by a `Finalize` procedure on the `File_Stream` type.
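    Putting the previous steps together, a complete compression procedure could look like the following sketch (assembled from the fragments above; the procedure name and parameter names are assumptions, not the library's API):

    ```Ada
    with Ada.Streams.Stream_IO;
    with Util.Streams;
    with Util.Streams.Files;
    with Util.Streams.Lzma;

    procedure Compress_File (Source : in String; Destination : in String) is
       In_Stream  : aliased Util.Streams.Files.File_Stream;
       Out_Stream : aliased Util.Streams.Files.File_Stream;
       Compressor : aliased Util.Streams.Lzma.Compress_Stream;
    begin
       In_Stream.Open (Mode => Ada.Streams.Stream_IO.In_File, Name => Source);
       Out_Stream.Create (Mode => Ada.Streams.Stream_IO.Out_File, Name => Destination);
       Compressor.Initialize (Output => Out_Stream'Unchecked_Access, Size => 32768);
       Util.Streams.Copy (From => In_Stream, Into => Compressor);
       --  Files are flushed and closed by Finalize on File_Stream.
    end Compress_File;
    ```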

    Complete source example: compress.adb(https://github.com/stcarrez/ada-util/tree/master/samples/compress.adb)

      2. LZMA Decompression

    The LZMA decompression is very close to the LZMA compression, but it uses the `Decompress_Stream` instead. The complete decompression procedure is the following:

    ```Ada
    procedure Decompress_File (Source      : in String;
                               Destination : in String) is
       In_Stream    : aliased Util.Streams.Files.File_Stream;
       Out_Stream   : aliased Util.Streams.Files.File_Stream;
       Decompressor : aliased Util.Streams.Lzma.Decompress_Stream;
    begin
       In_Stream.Open (Mode => Ada.Streams.Stream_IO.In_File, Name => Source);
       Out_Stream.Create (Mode => Ada.Streams.Stream_IO.Out_File, Name => Destination);
       Decompressor.Initialize (Input => In_Stream'Unchecked_Access, Size => 32768);
       Util.Streams.Copy (From => Decompressor, Into => Out_Stream);
    end Decompress_File;
    ```

    Complete source example: decompress.adb(https://github.com/stcarrez/ada-util/tree/master/samples/decompress.adb)

      3. AES Encryption

    Encryption is a little more complex because an encryption key must be configured. The encryption is provided by the `Encoding_Stream`, which uses a `Secret_Key` for the encryption key. `Secret_Key` is a limited type and cannot be copied. One way to build the encryption key is to use the PBKDF2 algorithm described in RFC 8018(https://tools.ietf.org/html/rfc8018). The user password is passed to the PBKDF2 algorithm configured to use HMAC-SHA256 hashing; in this example the hash is iterated 20000 times to produce the final encryption key.

    ```Ada
    with Util.Streams.AES;
    with Util.Encoders.AES;
    with Util.Encoders.KDF.PBKDF2_HMAC_SHA256;

    Cipher       : aliased Util.Streams.AES.Encoding_Stream;
    Password_Key : constant Util.Encoders.Secret_Key := Util.Encoders.Create (Password);
    Salt         : constant Util.Encoders.Secret_Key := Util.Encoders.Create ("fake-salt");
    Key          : Util.Encoders.Secret_Key (Length => Util.Encoders.AES.AES_256_Length);
    ...
    PBKDF2_HMAC_SHA256 (Password => Password_Key,
                        Salt     => Salt,
                        Counter  => 20000,
                        Result   => Key);
    ```


    The encoding stream can either produce or consume another stream. For encryption, we use the first mode: the `Produces` procedure configures the encryption to write on the `Out_Stream` file. Once configured, the `Set_Key` procedure must be called with the encryption key and the encryption mode. The initial encryption `IV` vector can be configured with the `Set_IV` procedure (not used in this example). As soon as the encryption key is configured, the encryption can start and the `Cipher` encoding stream can be used as an `Output_Stream`: we can write on it, and it encrypts the content before passing the result to the next stream. This means we can use the same `Copy` procedure to read the input file and pass it through the encryption encoder.


    ```Ada
    Cipher.Produces (Output => Out_Stream'Unchecked_Access, Size => 32768);
    Cipher.Set_Key (Secret => Key, Mode => Util.Encoders.AES.ECB);
    Util.Streams.Copy (From => In_Stream, Into => Cipher);
    ```


    Complete source example: encrypt.adb(https://github.com/stcarrez/ada-util/tree/master/samples/encrypt.adb)

      4. AES Decryption

    Decryption is similar but it uses the `Decoding_Stream` type. Below is the complete example to decrypt the file:

    ```Ada
    procedure Decrypt_File (Source      : in String;
                            Destination : in String;
                            Password    : in String) is
       In_Stream    : aliased Util.Streams.Files.File_Stream;
       Out_Stream   : aliased Util.Streams.Files.File_Stream;
       Decipher     : aliased Util.Streams.AES.Decoding_Stream;
       Password_Key : constant Util.Encoders.Secret_Key := Util.Encoders.Create (Password);
       Salt         : constant Util.Encoders.Secret_Key := Util.Encoders.Create ("fake-salt");
       Key          : Util.Encoders.Secret_Key (Length => Util.Encoders.AES.AES_256_Length);
    begin
       --  Generate a derived key from the password.
       PBKDF2_HMAC_SHA256 (Password => Password_Key,
                           Salt     => Salt,
                           Counter  => 20000,
                           Result   => Key);
       --  Setup file -> input and cipher -> output file streams.
       In_Stream.Open (Ada.Streams.Stream_IO.In_File, Source);
       Out_Stream.Create (Mode => Ada.Streams.Stream_IO.Out_File, Name => Destination);
       Decipher.Produces (Output => Out_Stream'Unchecked_Access, Size => 32768);
       Decipher.Set_Key (Secret => Key, Mode => Util.Encoders.AES.ECB);
       --  Copy input to output through the cipher.
       Util.Streams.Copy (From => In_Stream, Into => Decipher);
    end Decrypt_File;
    ```

    Complete source example: decrypt.adb(https://github.com/stcarrez/ada-util/tree/master/samples/decrypt.adb)

      5. Stream composition: LZMA > AES

    Now, if we want to compress the stream before encryption, we can do so by connecting the `Compressor` to the `Cipher` stream; we then only have to use the `Compressor` instead of the `Cipher` in the call to `Copy`.


    ```Ada
    In_Stream.Open (Ada.Streams.Stream_IO.In_File, Source);
    Out_Stream.Create (Mode => Ada.Streams.Stream_IO.Out_File, Name => Destination);
    Cipher.Produces (Output => Out_Stream'Unchecked_Access, Size => 32768);
    Cipher.Set_Key (Secret => Key, Mode => Util.Encoders.AES.ECB);
    Compressor.Initialize (Output => Cipher'Unchecked_Access, Size => 4096);
    Util.Streams.Copy (From => In_Stream, Into => Compressor);
    ```


    When `Copy` is called, the following will happen:

    • first, it reads the `In_Stream` source file,
    • the data is written to the `Compressor` stream,
    • the `Compressor` stream runs the LZMA compression and writes to the `Cipher` stream,
    • the `Cipher` stream encrypts the data and writes to the `Out_Stream`,
    • the `Out_Stream` writes to the destination file.

    Complete source example: lzma_encrypt.adb(https://github.com/stcarrez/ada-util/tree/master/samples/lzma_encrypt.adb)

      6. More stream composition: LZMA > AES > Base64

    We can easily change the stream composition to encode in Base64 after the encryption. We only have to declare an instance of the Base64 `Encoding_Stream` and configure the encryption stream to write on the Base64 stream instead of the output file. The Base64 stream is configured to write on the output stream.

    ```Ada
    In_Stream  : aliased Util.Streams.Files.File_Stream;
    Out_Stream : aliased Util.Streams.Files.File_Stream;
    Base64     : aliased Util.Streams.Base64.Encoding_Stream;
    Cipher     : aliased Util.Streams.AES.Encoding_Stream;
    Compressor : aliased Util.Streams.Lzma.Compress_Stream;
    ...
    In_Stream.Open (Ada.Streams.Stream_IO.In_File, Source);
    Out_Stream.Create (Mode => Ada.Streams.Stream_IO.Out_File, Name => Destination);
    Base64.Produces (Output => Out_Stream'Unchecked_Access, Size => 32768);
    Cipher.Produces (Output => Base64'Unchecked_Access, Size => 32768);
    Cipher.Set_Key (Secret => Key, Mode => Util.Encoders.AES.ECB);
    Compressor.Initialize (Output => Cipher'Unchecked_Access, Size => 4096);
    ```


    Complete source example: lzma_encrypt_b64.adb(https://github.com/stcarrez/ada-util/tree/master/samples/lzma_encrypt_b64.adb)

      7. Serialization

    Serialization is achieved by using the `Util.Serialize.IO` package and its child packages with their specific types. The parent package defines the limited `Output_Stream` interface, which inherits from the `Util.Streams.Output_Stream` interface. This makes it possible to define specific operations to write various Ada types, and it also provides a common set of abstractions for writing the JSON, XML, CSV and form (`x-www-form-urlencoded`) formats.

    The target format is supported by a child package, so you only have to use the `Output_Stream` type declared in one of the `JSON`, `XML`, `CSV` or `Form` child packages and use it transparently. There are some constraints if you want to switch from one output format to another while keeping the same code. These constraints come from the nature of the different formats: `XML` has a notion of entity and attribute, but the other formats do not differentiate entities from attributes.

    • A `Start_Document` procedure must be called first. Not all serialization methods need it, but it is required for JSON to produce a correct output.
    • A `Write_Entity` procedure writes an XML entity of the given name. When used in JSON, it writes a JSON attribute.
    • A `Start_Entity` procedure prepares the start of an XML entity or a JSON structure with a given name.
    • A `Write_Attribute` procedure writes an XML attribute after a `Start_Entity`. When used in JSON, it writes a JSON attribute.
    • An `End_Entity` procedure terminates an XML entity or a JSON structure that was opened by `Start_Entity`.
    • At the end, the `End_Document` procedure must be called to correctly finish the output and terminate the JSON or XML content.

    ```Ada
    procedure Write (Stream : in out Util.Serialize.IO.Output_Stream'Class) is
    begin
       Stream.Start_Entity ("person");
       Stream.Write_Entity ("name", "Harry Potter");
       Stream.Write_Entity ("gender", "male");
       Stream.Write_Entity ("age", 17);
       Stream.End_Entity ("person");
    end Write;
    ```

        1. JSON Serialization

    With the above `Write` procedure, producing a JSON stream only requires setting up a JSON serializer. The JSON serializer is connected to a `Print_Stream`, which provides a buffer and helper operations to write text content. An instance of the `Print_Stream` is declared in `Output` and configured with a buffer size. The JSON serializer is then connected to it by calling the `Initialize` procedure with the `Output` parameter.

    After writing the content, the JSON is stored in the `Output` print stream and it can be retrieved by using the `To_String` function.


    ```Ada
    with Ada.Text_IO;
    with Util.Serialize.IO.JSON;
    with Util.Streams.Texts;
    procedure Serialize is
       Output : aliased Util.Streams.Texts.Print_Stream;
       Stream : Util.Serialize.IO.JSON.Output_Stream;
    begin
       Output.Initialize (Size => 10000);
       Stream.Initialize (Output => Output'Unchecked_Access);
       Write (Stream);
       Ada.Text_IO.Put_Line (Util.Streams.Texts.To_String (Output));
    end Serialize;
    ```

    The `Write` procedure described above produces the following JSON content:

    ```json
    {"person":{"name":"Harry Potter","gender":"male","age": 17}}
    ```

    Complete source example: serialize.adb(https://github.com/stcarrez/ada-util/tree/master/samples/serialize.adb)

        2. XML Serialization

    Switching to an XML serialization is easy: replace `JSON` by `XML` in the package to use the XML serializer instead.

    ```Ada
    with Ada.Text_IO;
    with Util.Serialize.IO.XML;
    with Util.Streams.Texts;
    procedure Serialize is
       Output : aliased Util.Streams.Texts.Print_Stream;
       Stream : Util.Serialize.IO.XML.Output_Stream;
    begin
       Output.Initialize (Size => 10000);
       Stream.Initialize (Output => Output'Unchecked_Access);
       Write (Stream);
       Ada.Text_IO.Put_Line (Util.Streams.Texts.To_String (Output));
    end Serialize;
    ```

    This time, the same `Write` procedure produces the following XML content:

    ```xml
    <person><name>Harry Potter</name><gender>male</gender><age>17</age></person>
    ```

    Complete source example: serialize_xml.adb(https://github.com/stcarrez/ada-util/tree/master/samples/serialize_xml.adb)

    Easy reading and writing files with Ada Utility Library

    9 August 2020 at 20:49

    The Ada Utility Library provides several simple operations that simplify reading and writing files with a single procedure call. These operations are not new: I implemented most of them more than 10 years ago!

    Reading a file line by line with Ada.Text_IO is easy but tedious: you have to open the file, iterate over the content one line at a time and then close the file. To simplify this process, the Util.Files package of the Ada Utility Library provides a simple Read_File procedure that takes an access-to-procedure parameter; that procedure is called for each line that is read.

    with Util.Files;
      procedure Read_Line (Line : in String) is ...;
      Util.Files.Read_File (Path => "file.txt",
                            Process => Read_Line'Access);

    Another Read_File procedure reads the file into a vector of strings, so that once the file is read the vector contains each line. Yet another Read_File procedure reads the file content into an Unbounded_String. You can use that latter form as follows:

    with Util.Files;
    with Ada.Strings.Unbounded;
      Content : Ada.Strings.Unbounded.Unbounded_String;
      Util.Files.Read_File (Path => "file.txt",
                            Into => Content);
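    The vector-based variant mentioned above can be sketched as follows (the exact vector package name is an assumption; check the Util.Files spec for the actual profile):

    ```Ada
    with Util.Files;
    with Util.Strings.Vectors;  --  assumed package providing the string vector type

       Lines : Util.Strings.Vectors.Vector;

       Util.Files.Read_File (Path => "file.txt", Into => Lines);
       --  Each element of Lines now holds one line of the file.
    ```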

    Very often it is also necessary to write some content in a file. Again, this is easy to do, but a simple Write_File procedure that takes the file path and the content to write is easier to use in many cases:

    with Util.Files;
      Util.Files.Write_File (Path => "file.txt",
                             Content => "something");

    The Ada Utility Library contains other useful operations and features that have helped me in implementing various projects such as Ada Web Application and Ada Keystore. Have a look at the Ada Utility Library Programmer's Guide!

    Reasons for loving Ada: Type invariants (because bugs shouldn't sleep...)

    19 July 2020 at 14:30

    This tutorial talks about a powerful bug trap introduced in Ada 2012: the Type_Invariant aspect.

    Type invariant? What's that?

    The idea of a type invariant is pretty simple. Sometimes data structures are quite simple, just an aggregation of values, but often they are complex, with an internal structure that needs to be preserved.

    A type invariant associated with a given type is a Boolean expression that is true for every valid value of the type. Ada allows the association of a type invariant with a private type; the compiler (if instructed to do so) will insert code that checks that the invariant remains satisfied; if not, an exception is raised.
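    As a minimal illustration (not from this post), consider a private counter type that must always hold an even value; with invariant checks enabled (for example GNAT's -gnata switch), any primitive operation that returns leaving an odd value raises Assertion_Error:

    ```Ada
    package Even_Counters is
       type Counter is private
         with Type_Invariant => Is_Even (Counter);

       function Is_Even (C : Counter) return Boolean;
       procedure Add (C : in out Counter; Amount : Natural);
    private
       type Counter is record
          Value : Natural := 0;
       end record;

       function Is_Even (C : Counter) return Boolean is (C.Value mod 2 = 0);
    end Even_Counters;

    package body Even_Counters is
       procedure Add (C : in out Counter; Amount : Natural) is
       begin
          C.Value := C.Value + Amount;
          --  If Amount is odd, the invariant check on return raises
          --  Assertion_Error (when checks are enabled).
       end Add;
    end Even_Counters;
    ```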

    An example: a table with two keys

    Let's do a simple example. Suppose that

    • We have a set of users.
    • Every user has a Serial_ID (a 6-digit number) and a SSN (Social Security Number). We will assume the Italian convention, where the SSN is a 16-character string.
    • Both the Serial_ID and the SSN uniquely identify the user, that is, no two users share the same Serial_ID nor the same SSN.
    • We want to build a data structure User_Directory that allows us to get (efficiently) a user both by Serial_ID and by SSN.

    A possible solution is shown in the figure

    Two tables pointing to the same record

    The data structure is made of two Ordered_Maps: one map is indexed by the Serial_ID, the other one by the SSN, and both maps store pointers (access values, in Ada jargon) to a User_Record, a record that contains the user information. This structure clearly introduces some (quite obvious) redundancy:

    • The SSN stored in a User_Record must be equal to the corresponding key in the SSN_Map, the same for the Serial_ID
    • The entries of the two maps corresponding to the same user must map to the same record
    • The two maps have the same number of entries.

    The spec file

    This is an excerpt of the public part of the spec file (roughly the *.h of C)

       subtype ID_Char is Character range '0' .. '9';
       type Serial_ID is array (1 .. 6) of ID_Char;
       type User_SSN is new String (1 .. 16);
       type User_Directory is private;
       procedure Add_User (Dir : in out User_Directory;
                           ID  : in Serial_ID;
                           SSN : in User_SSN)
         with Pre  => not Contains (Dir, ID) and not Contains (Dir, SSN),
              Post => Contains (Dir, ID) and Contains (Dir, SSN);
       procedure Delete (Dir : in out User_Directory;
                         ID  : in Serial_ID)
         with Pre  => Contains (Dir, ID),
              Post => not Contains (Dir, ID);

    Here we define

    • The types Serial_ID and User_SSN as fixed-length strings. Note that Serial_ID can contain only digits.
    • User_Directory is the data structure we are defining.
    • Add_User and Delete should be quite obvious. Note the use of preconditions (after Pre =>) and postconditions (after Post =>) that document the behavior of Add_User and Delete.

    The actual definition (in the private part) of User_Directory is basically the translation in Ada of the scheme shown above

       type User_Record is record
          ID  : Serial_ID;
          SSN : User_SSN;
       end record;
       type User_Record_Pt is access User_Record;
       package ID_To_User_Maps is
         new Ada.Containers.Ordered_Maps (Key_Type     => Serial_ID,
                                          Element_Type => User_Record_Pt);
       package SSN_To_User_Maps is
         new Ada.Containers.Ordered_Maps (Key_Type     => User_SSN,
                                          Element_Type => User_Record_Pt);
       type User_Directory is record
          ID_To_User  : ID_To_User_Maps.Map;
          SSN_To_User : SSN_To_User_Maps.Map;
       end record;

    We have

    • The record User_Record holding the user information and its access User_Record_Pt
    • A package ID_To_User_Maps (obtained by specializing the generic standard package Ordered_Maps) that implements maps associating a Serial_ID to an access to a User_Record
    • A similar package SSN_To_User_Maps
    • Finally, we have the User_Directory that simply contains the two required maps.

    The implementation

    Let's look at the implementation of the package. Let's start with Add_User.

       procedure Add_User
         (Dir : in out User_Directory;
          ID  : in Serial_ID;
          SSN : in User_SSN)
       is
          New_User : User_Record_Pt;
       begin
          if Dir.ID_To_User.Contains (ID) then
             raise Constraint_Error;
          end if;
          New_User := new User_Record'(ID, SSN);
          Dir.ID_To_User.Include (Key      => ID,
                                  New_Item => New_User);
          Dir.SSN_To_User.Include (Key      => SSN,
                                   New_Item => New_User);
       end Add_User;

    The implementation of Add_User is quite straightforward: with

    New_User  := new User_Record'(ID  => ID, SSN => SSN);

    we allocate dynamically a record with the user information and with

       Dir.ID_To_User.Include (Key      => Id,
                               New_Item => New_User);
       Dir.SSN_To_User.Include (Key      => SSN,
                                New_Item => New_User);

    we update the two internal tables by associating the given ID and SSN with the address of the newly allocated record.

    This is our first attempt at implementing Delete.

    procedure Delete (Dir : in out User_Directory; ID : in Serial_ID) is
       To_Be_Removed : User_Record_Pt := Dir.ID_To_User (ID);
    begin
       Dir.ID_To_User.Delete (ID);
       Free (To_Be_Removed);
    end Delete;

    This implementation has a bug 🐛, as is clear from this figure picturing the behavior of Delete.

    Entry deleted from only one table

    The dashed line means that the memory area of the user record now has gone back to the heap. It is clear that we forgot to remove the entry from Dir.SSN_To_User and now we have a dangling pointer referring to the old user record.

    This is a nasty bug.

    Gollum with a scolopendra saying it is a nasty bug
    Oh, yes, believe me. I have already seen this movie; actually, I was the main character. What could go wrong? Well, for example, later you could want to update the entries of Dir.SSN_To_User in some way. However, the memory that was assigned to the user record now belongs to some other structure, which will be corrupted by your update.

    Depending on your code, the bug can remain dormant for long time, then, suddenly, one day, if you are lucky, you get a SEGFAULT when you try to use the corrupted structure again. If you are unlucky you'll get funny random behaviors, possibly difficult to replicate.

    Even if you are lucky (so to say...) you'll get the SEGFAULT in a place totally unrelated with the real bug in Delete. Believe me, finding this bug can take days of stumbling around in the dark. Not even an army of rubber ducks can save you.

    Army of rubber ducks

    Although this is a specific example, this kind of time bomb πŸ’£ (or Sword of Damocles) behavior is typical of bugs that cause a loss of internal coherence in some complex structure. The bug will not manifest itself when the structure is corrupted, but later as a delayed consequence of the loss of coherence, making bug hunting difficult.

    Remember: there is one thing worse than a bug that causes a crash: a bug that lets the program continue as if nothing happened.

    The solution: type invariant

    Fear not, my friends and colleagues: Ada comes to the rescue with the Type_Invariant aspect. It suffices to add it to the type definition, as in

    type User_Directory is record
       ID_To_User  : ID_To_User_Maps.Map;
       SSN_To_User : SSN_To_User_Maps.Map;
    end record
      with Type_Invariant => Is_Coherent (User_Directory);

    where Is_Coherent is a function that checks that the whole structure is coherent, that is, that the two maps have the same number of entries and that the data in the user record match the corresponding key in the tables. The source code of the (fairly long) function is included, for reference, at the end of this post.

    Now if we run the following test program

    with User_Directories;   use User_Directories;

    procedure Main is
       Dir : User_Directory;
    begin
       Add_User (Dir => Dir,
                 ID  => "123456",
                 SSN => "ABCDEF64X12Q456R");

       Add_User (Dir => Dir,
                 ID  => "654321",
                 SSN => "XBCDEF64X12Q456R");

       Delete (Dir, ID => "123456");
    end Main;

    we get the following results

    Execution of obj/main terminated by unhandled exception
      failed invariant from user_directories.ads:65
    Call stack traceback locations:
    ⋯

    where, for the sake of readability I replaced part of the dump text with β‹―. The line

    failed invariant from user_directories.ads:65

    refers to the line where type User_Directory is declared. In the stack dump the first two lines refer to some system files; we are interested in the third line, which refers to the first line of procedure Delete.

    • Before calling Delete user directory Dir was fine
    • After calling Delete it is not fine anymore, who are you gonna blame?

    Ghostbusters! No, sorry, wrong citation... πŸ˜ŠπŸ˜› (this is also a hint to my age... 😊)

    Judge finds Delete guilty of data structure damage

    After a brief inspection of Delete we discover the problem and we fix it quite easily

    procedure Delete (Dir : in out User_Directory; ID : in Serial_ID) is
       To_Be_Removed : User_Record_Pt := Dir.ID_To_User (ID);
    begin
       Dir.SSN_To_User.Delete (To_Be_Removed.SSN); -- New line
       -- and here???
       Dir.ID_To_User.Delete (ID);
       Free (To_Be_Removed);
    end Delete;

    Wait a minute...

    Someone could observe that at the line -- and here??? the variable Dir is in a non-coherent state, since we updated one table but not the other. The same happens in the Add_User procedure. Wouldn't this raise an exception?

    Well, no. The details are a bit complex, but the basic idea is that since Add_User and Delete are declared in the same package as User_Directory, they are able to operate on the internal structure of the type, and it is acceptable that during this manipulation the internal structure is momentarily not coherent. The type invariant is checked only when the procedures return; see the Reference Manual if you want every single, gory detail...
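The invariant does its job because User_Directory is a private type: the check fires whenever a public operation of the package returns. The following is a minimal sketch of how the whole spec could be organized; the subtype lengths, map instantiations and the exact declarations are assumptions for illustration, not the post's actual code.

```ada
with Ada.Containers.Hashed_Maps;
with Ada.Strings.Hash;

package User_Directories is

   subtype Serial_ID is String (1 .. 6);    --  assumed length
   subtype SSN_Code  is String (1 .. 16);   --  assumed length

   type User_Directory is private;

   procedure Add_User (Dir : in out User_Directory;
                       ID  : in     Serial_ID;
                       SSN : in     SSN_Code);

   procedure Delete (Dir : in out User_Directory; ID : in Serial_ID);

private

   type User_Record is record
      ID  : Serial_ID;
      SSN : SSN_Code;
   end record;

   type User_Record_Pt is access User_Record;

   package ID_To_User_Maps is new Ada.Containers.Hashed_Maps
     (Key_Type        => Serial_ID,
      Element_Type    => User_Record_Pt,
      Hash            => Ada.Strings.Hash,
      Equivalent_Keys => "=");

   package SSN_To_User_Maps is new Ada.Containers.Hashed_Maps
     (Key_Type        => SSN_Code,
      Element_Type    => User_Record_Pt,
      Hash            => Ada.Strings.Hash,
      Equivalent_Keys => "=");

   function Is_Coherent (Dir : User_Directory) return Boolean;

   --  The invariant is attached to the completion of the private type,
   --  so it is checked at every public boundary of the package.
   type User_Directory is record
      ID_To_User  : ID_To_User_Maps.Map;
      SSN_To_User : SSN_To_User_Maps.Map;
   end record
     with Type_Invariant => Is_Coherent (User_Directory);

end User_Directories;
```

With this layout, clients can only touch the directory through Add_User and Delete, so every path that could corrupt the two maps ends with an Is_Coherent check.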


    Implementation of Is_Coherent

    function Is_Coherent (Dir : User_Directory) return Boolean is
       use Ada.Containers;
       use ID_To_User_Maps;
       use SSN_To_User_Maps;
    begin
       if Dir.ID_To_User.Length /= Dir.SSN_To_User.Length then
          return False;
       end if;

       for Pos in Dir.ID_To_User.Iterate loop
          declare
             Usr_Entry : constant User_Record_Pt := Element (Pos);
          begin
             if Usr_Entry /= Dir.SSN_To_User (Usr_Entry.SSN) then
                return False;
             end if;

             if Key (Pos) /= Usr_Entry.ID then
                return False;
             end if;
          end;
       end loop;

       return True;
    end Is_Coherent;


    Proving the correctness of a binary search procedure with SPARK/Ada

    9 July 2020 at 13:55


    SPARK/Ada is a language derived from Ada that allows for formal checking (i.e., mathematically proving the correctness) of software. Several types of checks can be done: from the absence of runtime exceptions to no use of uninitialized variables, up to the formal proof that a given procedure/function fulfills its contract.

    I like formal checking because it can actually prove that your program is correct (or that, at least, some kinds of errors are absent), something that is usually not possible with testing. Of course, formal checking cannot be applied to every program, but when it is possible it is a very powerful tool.

    SPARK/Ada is definitively on my list of stuff that I want to learn.

    Recently I had to write a procedure for a binary search in an ordered array. I thought that it could be an interesting exercise to write it in SPARK/Ada in order to have it formally verified. This is a brief summary of this experience, written with a tutorial spirit.

    The requirements

    The problem I want to solve is the following

    • INPUTS
      • Table: an array of Float sorted in strictly increasing order (that is, Table(n+1) > Table(n) for every n).
      • What : a Float such that
        • it is included between the array extremes, that is, Table(Table'First) ≀ What ≀ Table(Table'Last), but
        • it is not necessarily in Table, that is it is not required that there is n such that Table(n) = What
    • OUTPUT
      • The unique index n such that Table(n) ≤ What < Table(n+1)

    The solution

    The spec file

    First basic draft of specs

    Let's first flesh out the spec file (roughly [very roughly] similar to a *.h file for you C people).

    pragma SPARK_Mode (On);
    package Searching is
       subtype Element_Type is Float;
       subtype Index_Type is Natural;
       type Array_Type is 
          array (Index_Type range <>) of Element_Type;
       function Find (What  : Element_Type;
                      Table : Array_Type)
                      return Index_Type;
    end Searching;

    Entering SPARK Mode

    The first line we encounter is

    pragma SPARK_Mode (On);

    This claims that this package will be compatible with SPARK restrictions and conventions. The lines

       subtype Element_Type is Float;
       subtype Index_Type is Natural;

    define Element_Type and Index_Type as synonyms of Float and Natural, respectively. This is not strictly necessary, but it makes it easier to change the types in the future.

    Function declaration

     function Find (What  : Element_Type;
                    Table : Array_Type)
                    return Index_Type;

    should be clear enough.

    Introducing contracts

    In order for SPARK to be able to prove that our implementation of Find is correct, we need to describe to SPARK what we expect Find to do. This is done by means of a contract. Like a normal contract between people, a function contract usually has two parts: preconditions and postconditions.

    The idea is that if you (the caller) do your part (i.e. you meet the preconditions when you call me), then I, the function, promise to do my part, that is, that the result will satisfy the post-conditions.

    If a contract is given, I can ask SPARK to prove that the post-conditions follow from the pre-conditions. If SPARK succeeds, I know that the code I wrote is correct (in the sense that it respects the contract) without doing any test: I have a mathematical proof for it.

    BTW, contracts are very useful even if you are not using SPARK. First, they are a wonderful "bug trap." By using the right options, you can ask the compiler to insert code that checks the pre-conditions when the function is called and the post-conditions when the function ends. If pre/post-conditions are not met, an exception is raised.

    Moreover, contracts document in a formal and unambiguous way the behavior of the function and, differently from usual comment-based documentation, cannot go out of sync with the code.
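As a small illustration of the "bug trap" effect (this example is not from the post; with GNAT, the checks are enabled by -gnata or an Assertion_Policy pragma):

```ada
pragma Assertion_Policy (Check);
with Ada.Text_IO; use Ada.Text_IO;

procedure Contract_Demo is

   --  A toy contract: Halve only accepts even numbers and promises
   --  that doubling the result gives back the argument.
   function Halve (N : Integer) return Integer
     with Pre  => N mod 2 = 0,
          Post => Halve'Result * 2 = N;

   function Halve (N : Integer) return Integer is (N / 2);

begin
   Put_Line (Integer'Image (Halve (10)));  --  prints  5
   Put_Line (Integer'Image (Halve (7)));   --  precondition fails here:
                                           --  raises Assertion_Error
end Contract_Demo;
```

The second call violates the precondition, so the bug is caught at the call site instead of surfacing later as a wrong result.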

    OK, so contracts are cool. How do we write the contract for our function? Well, let's start with the precondition and let's check the specs. It is said that (i) Table must be sorted and (ii) What must be between Table extrema; we just translate that in Ada in this way

    function Find (What  : Element_Type;
                   Table : Array_Type)
                   return Index_Type
      with
        Pre =>
          Is_Sorted (Table)
          and Table (Table'Last) >= What
          and What >= Table (Table'First);

    where Is_Sorted is a function that checks if its argument is sorted and defined as follows

    function Is_Sorted (Table : Array_Type) return Boolean
       is (for all L in Table'Range =>
             (for all M in Table'Range =>
                (if L > M then Table (L) > Table (M))))
       with Ghost;

    The body of the function (this form is called expression function) is the translation in Ada of the definition of monotonic array.

    Note the with Ghost in the definition. This says that Is_Sorted is a ghost function, that is, a function that can be used only in contracts or assertions; if used in other places ("real code") an error is raised. (I love this kind of protections β™₯)

    The preconditions look right and they represent the actual specs, but if we try to run SPARK we get an error

    medium: array index check might fail

    at line

          and What >= Table (Table'First);

    This means that Table'First, that is the first index of the array, can be ... outside array bounds? The first index is by definition within the bounds, right?

    Perplexed woman
    Go home SPARK, you're drunk...

    Well... Actually, SPARK is right. If you ask for a counterexample you get

    e.g. when Table'First=1 and Table'Last=0

    Now, how in the world can the last index be smaller than the first one?!? Well, if Table is empty... Empty arrays in Ada have the last index smaller than the first one. Therefore, trying to access Table (Table'First) would cause an error.
    Well, SPARK, good catch...
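The counterexample is easy to reproduce: for a null range the upper bound really is below the lower one. A minimal demo (not from the post):

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Empty_Bounds is
   type Float_Array is array (Natural range <>) of Float;
   Empty : Float_Array (1 .. 0);  --  a perfectly legal empty array
begin
   Put_Line ("First =" & Natural'Image (Empty'First));  --  First = 1
   Put_Line ("Last  =" & Natural'Image (Empty'Last));   --  Last  = 0
   --  Empty (Empty'First) here would raise Constraint_Error.
end Empty_Bounds;
```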

    Old man asking why someone would want to search an empty table

    Well, I agree (although it could happen because of an error), but SPARK does not know it. To make SPARK happy it suffices to add Table'Length > 0 to the precondition to get

    function Find (What  : Element_Type;
                   Table : Array_Type)
                   return Index_Type
      with
        Pre => Table'Length > 0
               and then (Is_Sorted (Table)
                         and Table (Table'Last) >= What
                         and What >= Table (Table'First));

    Now SPARK is happy and even the casual user sees that you cannot call the function with an empty table.

    The postcondition

    Also for the postcondition we translate the requirement directly from English to Ada. It is required that What is between Table(n) and Table(n+1), where n is the value returned by the function. In Ada we get

         (if Find'Result = Table'Last then
             Table (Find'Result) = What
          else
             Table (Find'Result) <= What
             and What < Table (Find'Result + 1));

    Note that Find'Result is the value returned by Find and that we treat Find'Result = Table'Last as a special case, since in this case there is no "following entry" in Table. Overall, the function declaration with the full contract is

    function Find (What  : Element_Type;
                   Table : Array_Type)
                   return Index_Type
      with
        Pre =>
          Table'Length > 0
          and then (Is_Sorted (Table)
                    and Table (Table'Last) >= What
                    and What >= Table (Table'First)),
        Post =>
          (if Find'Result = Table'Last then
              Table (Find'Result) = What
           else
              Table (Find'Result) <= What
              and What < Table (Find'Result + 1));

    The body file (implementation)

    The algorithm

    The algorithm for a binary search is well known and exemplified in the following picture

    Example of binary search over a length 8 array

    Basically, we keep two "cursors" in the table, Bottom and Top, with the condition that it must always be

    Table(Bottom) ≤ What < Table(Top)

    We take the Middle point between Top and Bottom; if Table(Middle) is too large we move Top to Middle, otherwise we move Bottom. We iterate this until one of the following two conditions holds

    • Table(Bottom) = What, that is, we actually found What in Table
    • Top = Bottom + 1, that is, there are no intermediate entries between Top and Bottom

    In both cases we return Bottom.

    The actual code

    Let's start with a basic skeleton of the procedure.

    function Find (What : Element_Type; Table : Array_Type) return Index_Type is
       Bottom : Index_Type;
       Top    : Index_Type;
       Middle : Index_Type;
    begin
       if Table (Table'Last) = What then
          return Table'Last;
       end if;

       Bottom := Table'First;
       Top    := Table'Last;

       pragma Assert (Table (Top) > What and What >= Table (Bottom));

       while Table (Bottom) < What and Top - Bottom > 1 loop
          Middle := (Bottom + Top) / 2;

          if Table (Middle) > What then
             Top := Middle;
          else
             Bottom := Middle;
          end if;
       end loop;

       return Bottom;
    end Find;

    It is just a translation of the algorithm informally described above. Note how the special case Table (Table'Last) = What is handled separately; in this way we know that

    Table (Top) > What and What >= Table(Bottom)

    holds (see the pragma Assert).

    Now, in order to make it easier for SPARK to prove the correctness of the code, we insert some pragma Assert statements claiming properties that hold true at different points of the code.

    Automatic theorem proving is not easy and some hints to the prover can help. Moreover, I love generously sprinkling my code with assertions, since they help in understanding what the code does and which properties I expect to be true. They are also formidable "bug traps".

    The function body with all the assertions looks like

    function Find (What : Element_Type; Table : Array_Type) return Index_Type is
       Bottom : Index_Type;
       Top    : Index_Type;
       Middle : Index_Type;
    begin
       if Table (Table'Last) = What then
          return Table'Last;
       end if;

       pragma Assert (Table (Table'Last) > What);

       Bottom := Table'First;
       Top    := Table'Last;

       pragma Assert (Bottom < Top);
       pragma Assert (Table (Bottom) <= What and What < Table (Top));

       while Table (Bottom) < What and Top - Bottom > 1 loop
          pragma Loop_Invariant (Bottom >= Table'First);
          pragma Loop_Invariant (Top <= Table'Last);
          pragma Loop_Invariant (Top > Bottom);
          pragma Loop_Invariant (Table (Bottom) <= What and What < Table (Top));
          pragma Loop_Variant (Decreases => Top - Bottom);

          Middle := (Bottom + Top) / 2;
          pragma Assert (Bottom < Middle and Middle < Top);

          if Table (Middle) > What then
             Top := Middle;
          else
             Bottom := Middle;
          end if;
       end loop;

       pragma Assert (Table (Bottom) <= What and Table (Bottom + 1) > What);
       return Bottom;
    end Find;

    There are just two points worth describing; the first is this sequence of pragmas inside the loop

    pragma Loop_Invariant (Bottom >= Table'First);
    pragma Loop_Invariant (Top <= Table'Last);
    pragma Loop_Invariant (Top > Bottom);
    pragma Loop_Invariant (Table (Bottom) <= What 
                           and What < Table (Top));
    pragma Loop_Variant (Decreases => Top - Bottom);

    The pragmas Loop_Invariant and Loop_Variant are specific to SPARK. A loop invariant is a condition that is true at every iteration, and you can see that this loop invariant is a formalization of what we said before: at every iteration What lies between Table(Bottom) and Table(Top). Loop invariants are important in the proof of correctness of while loops.

    A while loop can potentially go on forever; a technique that allows us to prove that the loop will terminate is to search for some value that always increases (or decreases) at every iteration; if we know a bound for this value (e.g., it is always positive and it always decreases) we can deduce that the loop will terminate. The pragma Loop_Variant allows us to declare said value. In this case the distance between Top and Bottom is halved at every iteration, therefore it is a good variant. Since Top-Bottom cannot be smaller than 2 (see condition in the while), we deduce that the loop will terminate.

    A second observation is about the seemingly innocuous line

     Middle := (Top + Bottom) / 2;

    Here SPARK complains with

    medium: overflow check might fail 
    (e.g. when Bottom = Index_Type'Last-4 and Top = Index_Type'Last-2)

    Good catch! Here SPARK is observing that although theoretically Middle should be in the range of representable integers if Top and Bottom are, there is a possibility of an overflow while doing the sum. Since on my PC Index_Type'Last = 2^63-1, it is unlikely that one would work with tables so big, nevertheless...

    We have two solutions: (i) allow a smaller range of integers (that is, up to Index_Type'Last/2) or (ii) compute the mean with function

    function Mean (Lo, Hi : Index_Type) return Index_Type
    is (Lo + (Hi - Lo) / 2)
      with
        Pre  => Lo < Index_Type'Last and then Hi > Lo + 1,
        Post => Hi > Mean'Result and Mean'Result > Lo;

    that is guaranteed to have no overflow. Note that in the precondition we require that Hi > Lo + 1; this is guaranteed by the loop condition, and it is necessary to guarantee that the postcondition holds, which in turn guarantees that the loop variant decreases at every iteration.
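SPARK's counterexample can be checked concretely. The following small demo (not from the post) uses the same values with Natural in place of Index_Type, and a local helper mirroring Mean:

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Mean_Demo is
   --  Local helper mirroring the post's Mean, written for Natural.
   function Mean (Lo, Hi : Natural) return Natural is (Lo + (Hi - Lo) / 2);

   Lo : constant Natural := Natural'Last - 4;
   Hi : constant Natural := Natural'Last - 2;
begin
   --  (Lo + Hi) / 2 would overflow here, since Lo + Hi exceeds
   --  Natural'Last; Lo + (Hi - Lo) / 2 keeps every intermediate
   --  value in range.
   Put_Line (Boolean'Image (Mean (Lo, Hi) = Natural'Last - 3));  --  TRUE
end Mean_Demo;
```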


    OK, now if we run SPARK we get

    Summary of SPARK analysis
    SPARK Analysis results        Total      Flow   Interval   CodePeer      Provers   Justified   Unproved
    Data Dependencies                 .         .          .          .            .           .          .
    Flow Dependencies                 .         .          .          .            .           .          .
    Initialization                    3         3          .          .            .           .          .
    Non-Aliasing                      .         .          .          .            .           .          .
    Run-time Checks                  29         .          .          .    29 (CVC4)           .          .
    Assertions                       14         .          .          .    14 (CVC4)           .          .
    Functional Contracts              3         .          .          .     3 (CVC4)           .          .
    LSP Verification                  .         .          .          .            .           .          .
    Total                            49    3 (6%)          .          .     46 (94%)           .          .

    Do you see on the extreme right the column Unproved with no entries? That is what I want to see... SPARK was able to prove everything, so now you can kick back, relax and enjoy your binary search function with confidence: you made terra bruciata (scorched earth) around it and no bug can survive.

    Castle with flag labelled "Find" in the middle of a desert and a soldier with a flamethrower with a "SPARK" labeled t-shirt

    Ada Stemmer Library

    16 May 2020 at 07:55

    Stemming is not new: it was first introduced in 1968 by Julie Beth Lovins, a computational linguist who created the first algorithm, known today as the Lovins stemming algorithm. Her algorithm significantly influenced later algorithms such as the Porter stemmer, which is now a common stemming algorithm for English words. These algorithms are specific to the English language and will not work for French, Greek or Russian.

    To support several natural languages, it is necessary to have several algorithms. The Snowball stemming algorithms project provides such support through a specific string processing language, a compiler and a set of algorithms for various natural languages. The Snowball compiler has been adapted to generate Ada code (See Snowball Ada on GitHub).

    The Ada Stemmer Library integrates stemming algorithms for: English, Danish, Dutch, French, German, Greek, Italian, Serbian, Spanish, Swedish, Russian. The Snowball compiler provides several other algorithms but they are not integrated yet: their integration is left as an exercise to the reader.

    Stemmer Overview

    Snowball is a small string processing language designed for creating stemming algorithms for use in Information Retrieval. A Snowball script describes a set of rules which are applied and checked on an input word or some portion of it in order to eliminate or replace some terms. The stemmer will usually transform a plural into a singular form, it will reduce the multiple forms of a verb, find the noun from an adverb and so on. Romance languages, Germanic languages, Scandinavian languages share some common rules but each language will need its own snowball algorithm. The Snowball compiler provides a detailed list of several stemming algorithms for various natural languages. This list is available on: https://snowballstem.org/algorithms/


    The Snowball compiler reads the Snowball script and generates the stemmer implementation for a given target programming language such as Ada, C, C#, Java, JavaScript, Go, Python, Rust. The Ada Stemmer Library contains the generated algorithms for several natural languages. The generated stemmers are not able to recognize the natural language and it is necessary to tell the stemmer library which natural language you wish to use.

    The Ada Stemmer Library supports only UTF-8 strings which simplifies both the implementation and the API. The library only uses the Ada String type to handle strings.


    To use the library, you should run the following commands:

      git clone https://github.com/stcarrez/ada-stemmer.git
      cd ada-stemmer
      make build install

    This will fetch, compile and install the library. You can then add the following line in your GNAT project file:

      with "stemmer";

    Stemming examples

    Each stemmer algorithm works on a single word at a time. The Ada Stemmer Library does not split words. You have to give it one word at a time to stem and it returns either the word itself or its stem. The Stemmer.Factory is the multi-language entry point. The stemmer algorithm is created for each call. The following simple code:

      with Stemmer.Factory; use Stemmer.Factory;
      with Ada.Text_IO;     use Ada.Text_IO;

      procedure Stem_One_Word is
      begin
         Put_Line (Stem (L_FRENCH, "chienne"));
      end Stem_One_Word;

    will print the string:


    When multiple words must be stemmed, it may be better to declare an instance of the stemmer and use that same instance to stem several words. The Stem_Word procedure can be called with each word and it returns a boolean that indicates whether the word was stemmed or not. The result is obtained by calling the Get_Result function. For example,

      with Stemmer.English;
      with Ada.Text_IO; use Ada.Text_IO;

      procedure Stem_English_Word is
         Ctx     : Stemmer.English.Context_Type;
         Stemmed : Boolean;
      begin
         Ctx.Stem_Word ("zealously", Stemmed);
         if Stemmed then
            Put_Line (Ctx.Get_Result);
         end if;
      end Stem_English_Word;

    Integrating a new Stemming algorithm

    Integration of a new stemming algorithm is quite easy but requires installing the Snowball Ada compiler.

      git clone --branch ada-support https://github.com/stcarrez/snowball
      cd snowball

    The Snowball compiler needs the path of the stemming algorithm, the target programming language, the name of the Ada child package that will contain the generated algorithm and the target path. For example, to generate the Lithuanian stemmer, the following command can be used:

      ./snowball algorithms/lithuanian.sbl -ada -P Lithuanian -o stemmer-lithuanian

    You will then get two files: stemmer-lithuanian.ads and stemmer-lithuanian.adb. After integration of the generated files in your project, you can access the generated stemmer with:

      with Stemmer.Lithuanian;
        Ctx : Stemmer.Lithuanian.Context_Type;


    Thanks to the Snowball compiler and its algorithms, it is possible to do some natural language analysis. Version 1.0 of the Ada Stemmer Library being available on GitHub, it is now possible to start doing some natural language analysis in Ada!

    Using the Gnome and KDE Secret Service API in Ada

    25 June 2017 at 17:00

    libsecret is the C library that gives access to the Secret Service API. The Ada Libsecret is an Ada binding for the C library. The Ada binding does not give access to all of the functionality implemented by the C library, but it implements the most useful operations, allowing applications to store, retrieve and delete secret data.

    Understanding the Secret Service API

    At first glance, the Secret Service API is not easy to use. Each secret is stored together with lookup attributes and a label. Lookup attributes are formed of key/value pairs. The label is the user friendly name that desktop key manager will use to display some information to the end user.


    The Secret Service API is implemented by a keyring manager such as gnome-keyring-daemon or kwalletd. This is a daemon that is started when a user opens a desktop session. It manages the application secrets and protects their access. The secret database can be locked in which case the access to secrets is forbidden. Unlocking is possible but requires authentication by the user (in most cases a dialog popup window opens and asks to unlock the keyring).

    When a client application wishes to retrieve one of its secrets, it builds the lookup attributes that correspond to the secret to retrieve. The lookup attributes are not encrypted and they are not part of the secret. The client application uses the D-Bus IPC mechanism to ask the keyring manager for the secret. The keyring manager takes care of unlocking the database by asking the user to confirm the access, and then looks in its database for the secret associated with the lookup attributes.

    Note that the label cannot be used as a key to retrieve the secret since the same label can be associated with different lookup attributes.

    Using the Ada Secret Service API

    Setting up the project

    After building and installing the Ada Libsecret library you will add the following line to your GNAT project file:

    with "secret";

    This definition will give you access to the Secret package and will handle the build and link support to use the libsecret C library.

    Setting the lookup attributes

    Attributes are defined by the Secret.Attributes package which provides the Map type that represents the lookup attributes. First, you will add the following with clause:

    with Secret.Attributes;

    to make available the operations and types provided by the package. Then, you will declare the attributes instance by using:

       List : Secret.Attributes.Map;

    At this stage, the lookup attributes are empty and you can check that by using the Is_Null function that will return True in that case. You must now add at least one key/value pair in the attributes by using the Insert procedure:

       List.Insert ("secret-tool", "key-password");
       List.Insert ("user", "joe");

    Applications are free to choose which attributes they use. The attributes have to be unique so that the application can identify and retrieve its secrets. For example, the svn command uses two attributes to store the password used to authenticate to svn remote repositories: domain and user. The domain represents the server URL and the user represents the user name to use for the connection. By using these two attributes, it is possible to store several passwords for different svn accounts.
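Following the svn example, such a pair of attributes would be built the same way as before. The attribute names and values below are illustrative, not taken from svn itself:

```ada
with Secret.Attributes;

procedure Svn_Style_Attributes is
   List : Secret.Attributes.Map;
begin
   --  "domain" identifies the server, "user" the account; together
   --  they form the unique lookup key for one stored password.
   List.Insert ("domain", "https://svn.example.org/repos/project");
   List.Insert ("user", "joe");
end Svn_Style_Attributes;
```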

    Storing a secret

    To store a secret, we will use the operations and types from the Secret.Services and Secret.Values packages. The following definitions:

    with Secret.Services;
    with Secret.Values;

    will bring such definitions to the program. The secret service is represented by the Service_Type type and we will declare an instance of it as follows:

       Service : Secret.Services.Service_Type;

    This service instance is a proxy to the Secret Service API and it communicates to the gnome-keyring-daemon by using the D-Bus protocol.

    The secret value itself is represented by the Secret_Type and we can create such a secret by using the Create function as follows:

       Value : Secret.Values.Secret_Type := Secret.Values.Create ("my-secret");

    Storing the secret is done by the Store operation which associates the secret value to the lookup attributes and a label. As explained before, the lookup attributes represent the unique key to identify the secret. The label is used to give a user friendly name to the association. This label is used by the desktop password and key manager to give information to the user.

       Service.Store (List, "Secret tool password", Value);

    Retrieving a secret

    Retrieving a secret follows the same steps but involves using the Lookup function that returns the secret value from the lookup attributes. Care must be taken to provide the same lookup attributes that were used during the store phase.

       Value : Secret.Values.Secret_Type := Service.Lookup (List);

    The secret value should be checked by using the Is_Null function to verify that the value was found. The secret value itself is accessed by using the Get_Value function.

       if not Value.Is_Null then
          Ada.Text_IO.Put_Line (Value.Get_Value);
       end if;


    By using the Ada Secret Service API, Ada applications can now securely store private information and protect resources for their users. The API is fairly simple and can be used to store OAuth access tokens, database passwords, and more...

    Read the Ada Libsecret Documentation to learn more about the API.

    Using the Ada Wiki Engine

    30 April 2016 at 16:07

    The Ada Wiki Engine is used in two steps:

    1. The Wiki text is parsed according to its syntax to produce a Wiki Document instance.
    2. The Wiki document is then rendered by a renderer to produce the final HTML or text.

    The Ada Wiki Engine does not manage any storage for the wiki content so that it only focuses on the parsing and rendering aspects.


    The Ada Wiki engine is organized in several packages:

    • Several Wiki stream packages define the interface, types and operations for the Wiki engine to read the Wiki or HTML content and for the Wiki renderer to generate the HTML or text outputs.
    • The Wiki parser is responsible for parsing HTML or Wiki content according to a selected Wiki syntax. It builds the final Wiki document through filters and plugins.
    • The Wiki filters provide a simple filter framework that makes it possible to plug in specific filters when a Wiki document is parsed and processed. Filters are used for the table of content generation, for the HTML filtering, to collect words or links and so on.
    • The Wiki plugins define the plugin interface that is used by the Wiki engine to provide pluggable extensions in the Wiki. Plugins are used for the Wiki template support, to hide some Wiki text content when it is rendered or to interact with other systems.
    • The Wiki documents and attributes are used for the representation of the Wiki document after the Wiki content is parsed.
    • The Wiki renderers are the last packages which are used for the rendering of the Wiki document to produce the final HTML or text.

    Building Ada Wiki Engine

    Download the ada-wiki-1.0.1.tar.gz or get the sources from GitHub:

    git clone git@github.com:stcarrez/ada-wiki.git ada-wiki

    If you are using the Ada Utility Library then you can configure with:

    ./configure
    Otherwise, you should configure with:

    ./configure --with-ada-util=no

    Then, build the library:

    make
    Once complete, you can install it:

    make install

    To use the library in your Ada project, add the following line in your GNAT project file:

    with "wiki";
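    For instance, a minimal GNAT project file built around that line could look as follows (the project name, source directory and main unit are hypothetical):

```ada
--  samples.gpr: hypothetical minimal project using the Wiki library.
with "wiki";

project Samples is
   for Source_Dirs use ("src");
   for Main use ("render.adb");
end Samples;
```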

    Rendering example

    The rendering example described in this article generates an HTML or text content from a Wiki source file. The example reads the file in one of the supported Wiki syntax and produces the HTML or text. You will find the source file on GitHub in render.adb. The example has the following usage:

    Render a wiki text file into HTML (default) or text
    Usage: render [-t] [-m] [-M] [-d] [-g] [-c] [-s style] {wiki-file}
      -t        Render to text only
      -m        Render a Markdown wiki content
      -M        Render a Mediawiki wiki content
      -d        Render a Dotclear wiki content
      -g        Render a Google wiki content
      -c        Render a Creole wiki content
      -s style  Use the CSS style file

    Parsing a Wiki Text

    To render a Wiki text you will first need to parse the Wiki text and produce a Wiki document instance. For this you will need to declare the Wiki document instance and the Wiki parser instance:

     with Wiki.Documents;
     with Wiki.Parsers;
        Doc      : Wiki.Documents.Document;
        Engine   : Wiki.Parsers.Parser;

    The Ada Wiki Engine has a filter mechanism that is used while parsing the input and before building the target wiki document instance. Filters are chained together and a filter can do some work on the content it sees such as blocking some content (filtering), collecting some data and doing some transformation on the content. When you want to use a filter, you have to declare an instance of the corresponding filter type.

     with Wiki.Filters.Html;
     with Wiki.Filters.Autolink;
     with Wiki.Filters.TOC;
        Filter   : aliased Wiki.Filters.Html.Html_Filter_Type;
        Autolink : aliased Wiki.Filters.Autolink.Autolink_Filter;
        TOC      : aliased Wiki.Filters.TOC.TOC_Filter;

    We use the Autolink filter that detects links in the text and transforms them into real links. The TOC filter is used to collect header sections in the Wiki text and build a table of content. The Html filter is used to filter HTML content that could be contained in the Wiki text. By default it ignores several HTML tags such as html, head, body, title, meta (these tags are silently discarded). Furthermore it has the ability to hide several elements such as style and script (the tag and its content are discarded).

    You will then configure the Wiki engine to build the filter chain and then define the Wiki syntax that the parser must use:

     Engine.Add_Filter (TOC'Unchecked_Access);
     Engine.Add_Filter (Autolink'Unchecked_Access);
     Engine.Add_Filter (Filter'Unchecked_Access);
     Engine.Set_Syntax (Syntax);

    The Wiki engine gets its input from an Input_Stream interface that only defines a Read procedure. The Ada Wiki Engine provides several implementations of that interface, one of them is based on the Ada Text_IO package. This is what we are going to use:

     with Wiki.Streams.Text_IO;
        Input    : aliased Wiki.Streams.Text_IO.File_Input_Stream;

    You will then open the input file. If the file contains UTF-8 characters, you may open it as follows:

     Input.Open (File_Path, "WCEM=8");

    where File_Path is a string that represents the file's path.

    Once the Wiki engine is setup and the input file opened, you can parse the Wiki text and build the Wiki document:

     Engine.Parse (Input'Unchecked_Access, Doc);

    Rendering a Wiki Document

    After parsing a Wiki text you get a Wiki.Documents.Document instance that you can use as many times as you want. To render the Wiki document, you will first choose a renderer according to the target format that you need. The Ada Wiki Engine provides three renderers:

    • A Text renderer that produces text outputs,
    • An HTML renderer that generates an HTML presentation for the document,
    • A Wiki renderer that generates various Wiki syntaxes.

    The renderer needs an output stream instance. We are using the Text_IO implementation:

     with Wiki.Streams.Html.Text_IO;
     with Wiki.Render.Html;
        Output   : aliased Wiki.Streams.Html.Text_IO.Html_File_Output_Stream;
        Renderer : aliased Wiki.Render.Html.Html_Renderer;

    You will then configure the renderer by giving it the output stream to use. You may enable or disable the rendering of the table of contents and then call the Render procedure to render the document.

     Renderer.Set_Output_Stream (Output'Unchecked_Access);
     Renderer.Set_Render_TOC (True);
     Renderer.Render (Doc);

    By default the output stream is configured to write on the standard output. This means that when Render is called, the output will be written to the standard output. You can choose another output stream or open the output stream to a file according to your needs.
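    The steps above can be combined into one small program. This sketch follows the snippets of this article (the input file name is a placeholder and the SYNTAX_MARKDOWN constant is assumed to be declared in the root Wiki package); see render.adb for the complete, authoritative example.

```ada
--  Sketch: parse a Markdown file and render it as HTML on stdout.
--  "input.md" is a placeholder; SYNTAX_MARKDOWN is an assumed constant.
with Wiki.Documents;
with Wiki.Parsers;
with Wiki.Filters.Html;
with Wiki.Filters.Autolink;
with Wiki.Filters.TOC;
with Wiki.Streams.Text_IO;
with Wiki.Streams.Html.Text_IO;
with Wiki.Render.Html;
procedure Render_File is
   Doc      : Wiki.Documents.Document;
   Engine   : Wiki.Parsers.Parser;
   Filter   : aliased Wiki.Filters.Html.Html_Filter_Type;
   Autolink : aliased Wiki.Filters.Autolink.Autolink_Filter;
   TOC      : aliased Wiki.Filters.TOC.TOC_Filter;
   Input    : aliased Wiki.Streams.Text_IO.File_Input_Stream;
   Output   : aliased Wiki.Streams.Html.Text_IO.Html_File_Output_Stream;
   Renderer : aliased Wiki.Render.Html.Html_Renderer;
begin
   --  Build the filter chain and select the Wiki syntax.
   Engine.Add_Filter (TOC'Unchecked_Access);
   Engine.Add_Filter (Autolink'Unchecked_Access);
   Engine.Add_Filter (Filter'Unchecked_Access);
   Engine.Set_Syntax (Wiki.SYNTAX_MARKDOWN);

   --  Parse the UTF-8 input file into a Wiki document.
   Input.Open ("input.md", "WCEM=8");
   Engine.Parse (Input'Unchecked_Access, Doc);

   --  Render the document as HTML on the standard output.
   Renderer.Set_Output_Stream (Output'Unchecked_Access);
   Renderer.Set_Render_TOC (True);
   Renderer.Render (Doc);
end Render_File;
```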


    The Ada Wiki Engine can be used to parse HTML content, sanitize the result through the HTML filter and convert it to text or to some Wiki syntax (have a look at the import.adb example). The engine can be extended through filters or plugins thus providing some flexible architecture. The library does not impose any storage mechanism. The Ada Wiki Engine is the core engine used by AWA Blogs and AWA Wiki web applications. You may have a look at some online Wiki in the Atlas Wiki demonstrator.

    Using Ada LZMA to compress and decompress LZMA files

    16 December 2015 at 10:25

    Setup of Ada LZMA binding

    First download the Ada LZMA binding at http://download.vacs.fr/ada-lzma/ada-lzma-1.0.0.tar.gz or clone git@github.com:stcarrez/ada-lzma.git, then configure, build and install the library with the following commands:

    ./configure
    make
    make install

    After these steps, the binding is ready to use and you can add the following line at the beginning of your GNAT project file:

    with "lzma";

    Import Declaration

    To use the Ada LZMA packages, you will first import the following packages in your Ada source code:

    with Lzma.Base;
    with Lzma.Container;
    with Lzma.Check;

    LZMA Stream Declaration and Initialization

    The liblzma library uses the lzma_stream type to hold and control the data for the lzma operations. The lzma_stream must be initialized at the beginning of the compression or decompression and must be kept until the compression or decompression is finished. To use it, declare the LZMA stream as follows:

    Stream  : aliased Lzma.Base.lzma_stream := Lzma.Base.LZMA_STREAM_INIT;

    Most liblzma functions return a status code of type lzma_ret, so you may declare a result variable like this:

    Result : Lzma.Base.lzma_ret;

    Initialization of the lzma_stream

    After the lzma_stream is declared, you must configure it either for compression or for decompression.

    Initialize for compression

    To configure the lzma_stream for compression, use the lzma_easy_encoder function. The Preset parameter controls the compression level. Higher values provide better compression but are slower and require more memory.

    Result := Lzma.Container.lzma_easy_encoder (Stream'Unchecked_Access,
                                                Lzma.Container.LZMA_PRESET_DEFAULT,
                                                Lzma.Check.LZMA_CHECK_CRC64);
    if Result /= Lzma.Base.LZMA_OK then
       Ada.Text_IO.Put_Line ("Error initializing the encoder");
    end if;

    Initialize for decompression

    For the decompression, use the lzma_stream_decoder function:

    --  The second parameter is the decoder memory limit (unlimited here)
    --  and the third the decoder flags.
    Result := Lzma.Container.lzma_stream_decoder (Stream'Unchecked_Access,
                                                  Long_Long_Integer'Last,
                                                  Lzma.Container.LZMA_CONCATENATED);

    Compress or decompress the data

    The compression and decompression is done by the lzma_code function, which is called repeatedly until it returns the LZMA_STREAM_END code. Set up the stream members next_out, avail_out, next_in and avail_in and call the lzma_code operation with the action (Lzma.Base.LZMA_RUN or Lzma.Base.LZMA_FINISH):

    Result := Lzma.Base.lzma_code (Stream'Unchecked_Access, Action);
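    The surrounding loop looks roughly like this. It is only a sketch: the Input_Buffer, Output_Buffer, Read and Write names are hypothetical helpers, and the stream field names mirror the C liblzma API, so adapt them to the actual binding declarations.

```ada
--  Sketch of the coding loop.  Read/Write and the two buffers are
--  hypothetical helpers; field names follow the C liblzma API.
loop
   --  Refill the input buffer when it has been fully consumed.
   if Stream.avail_in = 0 and then not End_Of_Input then
      Read (Input_Buffer, Last);
      Stream.next_in  := Input_Buffer (Input_Buffer'First)'Unchecked_Access;
      Stream.avail_in := Interfaces.C.size_t (Last);
      Action := (if End_Of_Input then Lzma.Base.LZMA_FINISH
                 else Lzma.Base.LZMA_RUN);
   end if;

   --  Give lzma_code a fresh output buffer.
   Stream.next_out  := Output_Buffer (Output_Buffer'First)'Unchecked_Access;
   Stream.avail_out := Output_Buffer'Length;

   Result := Lzma.Base.lzma_code (Stream'Unchecked_Access, Action);

   --  Write the bytes produced in Output_Buffer.
   Write (Output_Buffer, Output_Buffer'Length - Natural (Stream.avail_out));

   exit when Result = Lzma.Base.LZMA_STREAM_END;
   exit when Result /= Lzma.Base.LZMA_OK;   --  stop on error
end loop;
```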

    Release the LZMA stream

    Close the LZMA stream:

        Lzma.Base.lzma_end (Stream'Unchecked_Access);


    To better understand and use the library: use the source, Luke!