โŒ About FreshRSS

Normal view

There are new articles available, click to refresh the page.
Before yesterdayNews from the Ada programming language world

Ada Web Application 1.0.0 is available

27 July 2014 at 17:59

Ada Web Application is a framework to build web applications.

The new version of AWA provides:

  • New countries plugin to provide country/region/city data models
  • New settings plugin to control application user settings
  • New tags plugin to easily add tags in applications
  • New <awa:tagList> and <awa:tagCloud> components for tag display
  • Add tags to the question and blog plugins
  • Add comments to the blog post

AWA can be downloaded at http://blog.vacs.fr/vacs/download.html

A live demonstration of various features provided by AWA is available at http://demo.vacs.fr/atlas

A small tutorial explains how you can easily set up a project, design the UML model, and use the features provided by the Ada Web Application framework.

Using Ada LZMA to compress and decompress LZMA files

16 December 2015 at 10:25

Setup of Ada LZMA binding

First download the Ada LZMA binding from http://download.vacs.fr/ada-lzma/ada-lzma-1.0.0.tar.gz or clone it from git@github.com:stcarrez/ada-lzma.git, then configure, build and install the library with the following commands:

./configure
make
make install

After these steps, you are ready to use the binding and you can add the following line at the beginning of your GNAT project file:


with "lzma";

Import Declaration

To use the Ada LZMA packages, you will first import the following packages in your Ada source code:


with Lzma.Base;
with Lzma.Container;
with Lzma.Check;

LZMA Stream Declaration and Initialization

The liblzma library uses the lzma_stream type to hold and control the data for the LZMA operations. The lzma_stream must be initialized at the beginning of the compression or decompression and must be kept until the compression or decompression is finished. To use it, declare the LZMA stream as follows:


Stream  : aliased Lzma.Base.lzma_stream := Lzma.Base.LZMA_STREAM_INIT;

Most of the liblzma functions return a status value of type lzma_ret; you may declare a result variable like this:


Result : Lzma.Base.lzma_ret;

Initialization of the lzma_stream

After the lzma_stream is declared, you must configure it either for compression or for decompression.

Initialize for compression

To configure the lzma_stream for compression, use the lzma_easy_encoder function. The Preset parameter controls the compression level. Higher values provide better compression but are slower and require more memory.


Result := Lzma.Container.lzma_easy_encoder (Stream'Unchecked_Access, Lzma.Container.LZMA_PRESET_DEFAULT,
                                            Lzma.Check.LZMA_CHECK_CRC64);
if Result /= Lzma.Base.LZMA_OK then
  Ada.Text_IO.Put_Line ("Error initializing the encoder");
end if;

Initialize for decompression

For the decompression, you will use the lzma_stream_decoder:


Result := Lzma.Container.lzma_stream_decoder (Stream'Unchecked_Access,
                                              Long_Long_Integer'Last,
                                              Lzma.Container.LZMA_CONCATENATED);

Compress or decompress the data

The compression and decompression are done by the lzma_code function, which is called several times until it returns the LZMA_STREAM_END code. Set up the stream's next_out, avail_out, next_in and avail_in fields and call the lzma_code operation with the action to perform (Lzma.Base.LZMA_RUN or Lzma.Base.LZMA_FINISH):


Result := Lzma.Base.lzma_code (Stream'Unchecked_Access, Action);
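
Below is a minimal sketch of such a driving loop, assuming In_Buf, Out_Buf, Read, Write, Last and Eof are application-provided helpers and variables (the exact pointer types of next_in and next_out depend on the binding):


loop
   --  Refill the input buffer when liblzma has consumed it
   --  (Read, In_Buf, Last and Eof are assumed helpers).
   if Stream.avail_in = 0 and then not Eof then
      Read (In_Buf, Last);
      Eof := Last < In_Buf'Length;
      Stream.next_in  := In_Buf (In_Buf'First)'Unchecked_Access;
      Stream.avail_in := Interfaces.C.size_t (Last);
   end if;

   --  Give the whole output buffer to liblzma for this iteration.
   Stream.next_out  := Out_Buf (Out_Buf'First)'Unchecked_Access;
   Stream.avail_out := Out_Buf'Length;

   Result := Lzma.Base.lzma_code
     (Stream'Unchecked_Access,
      (if Eof then Lzma.Base.LZMA_FINISH else Lzma.Base.LZMA_RUN));

   --  Write out the bytes that were produced in this iteration.
   Write (Out_Buf, Out_Buf'Length - Natural (Stream.avail_out));

   exit when Result = Lzma.Base.LZMA_STREAM_END;
end loop;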

Release the LZMA stream

Close the LZMA stream:


Lzma.Base.lzma_end (Stream'Unchecked_Access);

Sources

To better understand and use the library, use the source, Luke!

Download

New releases for Ada Util, Ada EL, Ada Security, Ada Database Objects, Ada Server Faces, Dynamo

30 December 2015 at 22:00

Ada Utility Library, Version 1.8.0

  • Added support for immediate flush and file appending to the file logger
  • Added support for RFC7231/RFC2616 date conversion
  • Improvement of configure and installation process with gprinstall (if available)
  • Added file system stat/fstat support
  • Use gcc intrinsics for atomic counters (Intel, Arm)

Download: http://download.vacs.fr/ada-util/ada-util-1.8.0.tar.gz
GitHub: https://github.com/stcarrez/ada-util

Ada EL, Version 1.6.0

  • Added support for thread local EL context
  • Improvement of configure and installation process with gprinstall (if available)

Download: http://download.vacs.fr/ada-el/ada-el-1.6.0.tar.gz
GitHub: https://github.com/stcarrez/ada-el

Ada Security, Version 1.1.2

  • Improvement of configure and installation process with gprinstall (if available)

Download: http://download.vacs.fr/ada-security/ada-security-1.1.2.tar.gz
GitHub: https://github.com/stcarrez/ada-security

Ada Database Objects, Version 1.1.0

  • Fix link issue on Fedora
  • Detect MariaDB as a replacement for MySQL
  • Improvement of configure and installation process with gprinstall (if available)

Download: http://download.vacs.fr/ada-ado/ada-ado-1.1.0.tar.gz
GitHub: https://github.com/stcarrez/ada-ado

Ada Server Faces, Version 1.1.0

  • New EL function util:formatDate
  • New request route mapping with support for URL component extraction and parameter injection in Ada beans
  • Improvement of configure, build and installation with gprinstall when available
  • Integrate jQuery 1.11.3 and jQuery UI 1.11.4
  • Integrate jQuery Chosen 1.4.2
  • New component <w:chosen> for the Chosen support
  • Added a servlet cache control filter

Download: http://download.vacs.fr/ada-asf/ada-asf-1.1.0.tar.gz
GitHub: https://github.com/stcarrez/ada-asf

Dynamo, Version 0.8.0

  • Support to generate Markdown documentation
  • Support to generate query Ada bean operations
  • Better code generation and support for UML Ada beans

Download: http://download.vacs.fr/dynamo/dynamo-0.8.0.tar.gz
GitHub: https://github.com/stcarrez/dynamo

GCC 6.1 Ada Compiler From Scratch

29 April 2016 at 12:35

We will do the following tasks:

  1. The binutils build and installation,
  2. The gcc build and installation,
  3. Setting up a default configuration for gprbuild,
  4. The XML/Ada build and installation,
  5. The gprbuild build and installation.

Pre-requisites

First, prepare three distinct directories for the sources, the build materials and the installation. Make sure you have more than 1.5G for the source directory, reserve 7.0G for the build directory and around 1.5G for the installation directory.

To simplify the commands, define the following shell variables:

BUILD_DIR=<Path of build directory>
INSTALL_DIR=<Path of installation directory>
SRC_DIR=<Path of directory containing the extracted sources>

Also, check that:

  • You have a GNAT Ada compiler installed (at least a 4.9 I guess).
  • You have the gprbuild tool installed and configured for the Ada compiler.
  • You have libmpfr-dev, libgmp3-dev and libgmp-dev installed (otherwise this is far more complex).
  • You have some time and can wait for gcc's compilation (it took more than 2h for me).

Create the directories:

mkdir -p $BUILD_DIR
mkdir -p $INSTALL_DIR/bin
mkdir -p $SRC_DIR

And set up your PATH so that you will use the new binutils and gcc commands while building everything:

export PATH=$INSTALL_DIR/bin:/usr/bin:/bin

Binutils

Download binutils 2.26 and extract the tar.bz2 in the source directory $SRC_DIR.

cd $SRC_DIR
tar xf binutils-2.26.tar.bz2

Never build the binutils within their sources; use the $BUILD_DIR for that. Define the installation prefix and configure the binutils like this:

mkdir $BUILD_DIR/binutils
cd $BUILD_DIR/binutils
$SRC_DIR/binutils-2.26/configure --prefix=$INSTALL_DIR

And proceed with the build in the same directory:

make

Compilation is now complete; you can install the package:

make install

Gcc

Download gcc 6.1.0 and extract the tar.bz2 in the source directory $SRC_DIR.

cd $SRC_DIR
tar xf gcc-6.1.0.tar.bz2

Again, don't build gcc within its sources; use the $BUILD_DIR directory. At this stage, it is important that your PATH environment variable lists $INSTALL_DIR/bin first to make sure you use the newly installed binutils tools. You may add the --disable-bootstrap option to speed up the build process.

mkdir $BUILD_DIR/gcc
cd $BUILD_DIR/gcc
$SRC_DIR/gcc-6.1.0/configure --prefix=$INSTALL_DIR --enable-languages=c,c++,ada

And proceed with the build in the same directory (go to the restaurant or drink a couple of beers while it builds):

make

Compilation is now complete; you can install the package:

make install

The Ada compiler installation does not install two symbolic links which are required during the link phase of Ada libraries and programs. You must create them manually after the install step:

ln -s libgnarl-6.so $INSTALL_DIR/lib/gcc/x86_64-pc-linux-gnu/6.1.0/adalib/libgnarl-6.1.so
ln -s libgnat-6.so $INSTALL_DIR/lib/gcc/x86_64-pc-linux-gnu/6.1.0/adalib/libgnat-6.1.so

Setup the default.cgpr file

The gnatmake command has been deprecated and it is now using gprbuild internally. This means we need a version of gprbuild that uses the new compiler. One way to achieve that is by setting up a gprbuild configuration file:

cd $BUILD_DIR
gprconfig

Select the Ada and C compilers and then edit the generated default.cgpr file to change Toolchain_Version, Runtime_Library_Dir, Runtime_Source_Dir and Driver to indicate the new gcc 6.1 installation paths (replace <INSTALL_DIR> with your installation directory):

configuration project Default is
   ...
   for Toolchain_Version     ("Ada") use "GNAT 6.1";
   for Runtime_Library_Dir   ("Ada") use "<INSTALL_DIR>/lib/gcc/x86_64-pc-linux-gnu/6.1.0/adalib/";
   for Runtime_Source_Dir    ("Ada") use "<INSTALL_DIR>/lib/gcc/x86_64-pc-linux-gnu/6.1.0/adainclude/";
   package Compiler is
      for Driver ("C") use "<INSTALL_DIR>/bin/gcc";
      for Driver ("Ada") use "<INSTALL_DIR>/bin/gcc";
      ...
   end Compiler;
   ...
end Default;

This is the tricky part: if you miss it, you may end up using the old Ada compiler. Make sure the Runtime_Library_Dir and Runtime_Source_Dir are correct, otherwise you'll have problems during builds. In my case, the gcc target triplet also changed from x86_64-linux-gnu to x86_64-pc-linux-gnu. Hopefully, once we have built a new gprbuild, everything will be easier. The next step is to build XML/Ada, which is used by gprbuild.

XML/Ada

Download and extract the XML/Ada sources. Using the git repository works pretty well:

cd $BUILD_DIR
git clone https://github.com/AdaCore/xmlada.git xmlada

This time we must build within the sources. Before running the configure script, copy the default.cgpr file so that the new Ada compiler is used:

cp $BUILD_DIR/default.cgpr $BUILD_DIR/xmlada/
cd $BUILD_DIR/xmlada
./configure --prefix=$INSTALL_DIR

And proceed with the build in the same directory:

make static shared

Compilation is now complete; you can install the package:

make install-static install-relocatable

gprbuild

Get the gprbuild sources from the git repository:

cd $BUILD_DIR
git clone https://github.com/AdaCore/gprbuild.git gprbuild

Copy the default.cgpr file to the gprbuild source tree and run the configure script:

cp $BUILD_DIR/default.cgpr $BUILD_DIR/gprbuild/
cd $BUILD_DIR/gprbuild
./configure --prefix=$INSTALL_DIR

Set up the ADA_PROJECT_PATH environment variable to use the XML/Ada library that was just compiled. If you miss this step, you'll get a "file dom.ali is incorrectly formatted" error during the bind process.

export ADA_PROJECT_PATH=$INSTALL_DIR/lib/gnat

And proceed with the build in the same directory:

make

Compilation is now complete; you can install the package:

make install

Using the compiler

Now you can remove the build directory to free some space. You will no longer need the default.cgpr file, nor the ADA_PROJECT_PATH environment variable (unless you have other uses for it). To use the new Ada compiler you only need to set up your PATH:

export PATH=$INSTALL_DIR/bin:/usr/bin:/bin

You're now ready to play and use the GCC 6.1 Ada Compiler.

Using the Ada Wiki Engine

30 April 2016 at 16:07

The Ada Wiki Engine is used in two steps:

  1. The Wiki text is parsed according to its syntax to produce a Wiki Document instance.
  2. The Wiki document is then rendered by a renderer to produce the final HTML or text.

The Ada Wiki Engine does not manage any storage for the wiki content; it focuses only on the parsing and rendering aspects.

Overview

The Ada Wiki engine is organized in several packages:

  • Several Wiki stream packages define the interface, types and operations for the Wiki engine to read the Wiki or HTML content and for the Wiki renderer to generate the HTML or text outputs.
  • The Wiki parser is responsible for parsing HTML or Wiki content according to a selected Wiki syntax. It builds the final Wiki document through filters and plugins.

ada-wiki.png
  • The Wiki filters provide a simple filter framework that lets you plug in specific filters when a Wiki document is parsed and processed. Filters are used for the table of content generation, for the HTML filtering, to collect words or links and so on.
  • The Wiki plugins define the plugin interface that is used by the Wiki engine to provide pluggable extensions in the Wiki. Plugins are used for the Wiki template support, to hide some Wiki text content when it is rendered, or to interact with other systems.
  • The Wiki documents and attributes are used for the representation of the Wiki document after the Wiki content is parsed.
  • The Wiki renderers are the last packages which are used for the rendering of the Wiki document to produce the final HTML or text.

Building Ada Wiki Engine

Download the ada-wiki-1.0.1.tar.gz or get the sources from GitHub:

git clone git@github.com:stcarrez/ada-wiki.git ada-wiki

If you are using the Ada Utility Library, you can configure with:

./configure

Otherwise, you should configure with:

./configure --with-ada-util=no

Then, build the library:

make

Once complete, you can install it:

make install

To use the library in your Ada project, add the following line in your GNAT project file:

with "wiki";

Rendering example

The rendering example described in this article generates an HTML or text content from a Wiki source file. The example reads the file in one of the supported Wiki syntax and produces the HTML or text. You will find the source file on GitHub in render.adb. The example has the following usage:

Render a wiki text file into HTML (default) or text
Usage: render [-t] [-m] [-M] [-d] [-g] [-c] [-s style] {wiki-file}
  -t        Render to text only
  -m        Render a Markdown wiki content
  -M        Render a Mediawiki wiki content
  -d        Render a Dotclear wiki content
  -g        Render a Google wiki content
  -c        Render a Creole wiki content
  -s style  Use the CSS style file

Parsing a Wiki Text

To render a Wiki text you will first need to parse the Wiki text and produce a Wiki document instance. For this you will need to declare the Wiki document instance and the Wiki parser instance:


 with Wiki.Documents;
 with Wiki.Parsers;
 ...
    Doc      : Wiki.Documents.Document;
    Engine   : Wiki.Parsers.Parser;

The Ada Wiki Engine has a filter mechanism that is used while parsing the input and before building the target wiki document instance. Filters are chained together and a filter can do some work on the content it sees such as blocking some content (filtering), collecting some data and doing some transformation on the content. When you want to use a filter, you have to declare an instance of the corresponding filter type.


 with Wiki.Filters.Html;
 with Wiki.Filters.Autolink;
 with Wiki.Filters.TOC;
 ...
    Filter   : aliased Wiki.Filters.Html.Html_Filter_Type;
    Autolink : aliased Wiki.Filters.Autolink.Autolink_Filter;
    TOC      : aliased Wiki.Filters.TOC.TOC_Filter;

We use the Autolink filter that detects links in the text and transforms them into real links. The TOC filter is used to collect header sections in the Wiki text and build a table of contents. The Html filter is used to filter HTML content that could be contained in a Wiki text. By default it ignores several HTML tags such as html, head, body, title, meta (these tags are silently discarded). Furthermore, it has the ability to hide several elements such as style and script (the tag and its content are discarded).

You will then configure the Wiki engine to build the filter chain and then define the Wiki syntax that the parser must use:


 Engine.Add_Filter (TOC'Unchecked_Access);
 Engine.Add_Filter (Autolink'Unchecked_Access);
 Engine.Add_Filter (Filter'Unchecked_Access);
 Engine.Set_Syntax (Syntax);

The Wiki engine gets its input from an Input_Stream interface that only defines a Read procedure. The Ada Wiki Engine provides several implementations of that interface; one of them is based on the Ada Text_IO package. This is what we are going to use:


 with Wiki.Streams.Text_IO;
 ...
    Input    : aliased Wiki.Streams.Text_IO.File_Input_Stream;

You will then open the input file. If the file contains UTF-8 characters, you may open it as follows:


 Input.Open (File_Path, "WCEM=8");

where File_Path is a string that represents the file's path.

Once the Wiki engine is set up and the input file opened, you can parse the Wiki text and build the Wiki document:


 Engine.Parse (Input'Unchecked_Access, Doc);

Rendering a Wiki Document

After parsing a Wiki text you get a Wiki.Documents.Document instance that you can use as many times as you want. To render the Wiki document, you will first choose a renderer according to the target format that you need. The Ada Wiki Engine provides three renderers:

  • A Text renderer that produces text outputs,
  • An HTML renderer that generates an HTML presentation for the document,
  • A Wiki renderer that generates various Wiki syntaxes.

The renderer needs an output stream instance. We are using the Text_IO implementation:


 with Wiki.Streams.Html.Text_IO;
 with Wiki.Render.Html;
 ...
    Output   : aliased Wiki.Streams.Html.Text_IO.Html_File_Output_Stream;
    Renderer : aliased Wiki.Render.Html.Html_Renderer;

You will then configure the renderer to tell it which output stream to use. You may enable or disable the rendering of the table of contents, and you then use the Render procedure to render the document.


 Renderer.Set_Output_Stream (Output'Unchecked_Access);
 Renderer.Set_Render_TOC (True);
 Renderer.Render (Doc);

By default the output stream is configured to write to the standard output. This means that when Render is called, the output is written to the standard output. You can choose another output stream or open the output stream to a file according to your needs.
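
For example, a sketch of writing the rendered document to a file could look like this (assuming the Html_File_Output_Stream provides Open and Close operations similar to the File_Input_Stream shown earlier):


 --  A sketch, assuming Open/Close operations similar to the
 --  File_Input_Stream shown earlier.
 Output.Open ("result.html", "WCEM=8");
 Renderer.Set_Output_Stream (Output'Unchecked_Access);
 Renderer.Set_Render_TOC (True);
 Renderer.Render (Doc);
 Output.Close;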

Conclusion

The Ada Wiki Engine can be used to parse HTML content, sanitize the result through the HTML filter and convert it to text or to some Wiki syntax (have a look at the import.adb example). The engine can be extended through filters or plugins, thus providing a flexible architecture. The library does not impose any storage mechanism. The Ada Wiki Engine is the core engine used by the AWA Blogs and AWA Wiki web applications. You may have a look at some online Wiki pages in the Atlas Wiki demonstrator.

Using the Ada Embedded Network STM32 Ethernet Driver

29 September 2016 at 19:19

In any network stack, buffer management is key to obtaining good performance. Let's see how it is modeled.

Net.Buffers

The Net.Buffers package provides support for network buffer management. A network buffer can hold a single packet frame, so it is limited to 1500 bytes of payload with 14 or 16 bytes for the Ethernet header. The network buffers are allocated by the Ethernet driver during initialization to set up the Ethernet receive queue. The allocation of network buffers for transmission is the responsibility of the application.

Before receiving a packet, the application also has to allocate a network buffer. Upon successful reception of a packet by the Receive procedure, the allocated network buffer will be given to the Ethernet receive queue and the application will get back the received buffer. There is no memory copy.

The package defines two important types: Buffer_Type and Buffer_List. These two types are limited types to forbid copies and force a strict design to applications. The Buffer_Type describes the packet frame and it provides various operations to access the buffer. The Buffer_List defines a list of buffers.

The network buffers are kept within a singly linked list managed by a protected object. Because interrupt handlers can release a buffer, that protected object has the priority System.Max_Interrupt_Priority. The protected operations are very basic and have O(1) complexity, so their execution is bounded in time whatever the arguments.

Before anything, the network buffers have to be allocated. The application can do this by reserving some memory region (using STM32.SDRAM.Reserve) and adding the region with the Add_Region procedure. The region must be a multiple of the NET_ALLOC_SIZE constant. To allocate 32 buffers, you can do the following:


  NET_BUFFER_SIZE  : constant Interfaces.Unsigned_32 := Net.Buffers.NET_ALLOC_SIZE * 32;
  ...
  Net.Buffers.Add_Region (STM32.SDRAM.Reserve (Amount => NET_BUFFER_SIZE), NET_BUFFER_SIZE);

An application will allocate a buffer by using the Allocate operation and this is as easy as:


  Packet : Net.Buffers.Buffer_Type;
  ...
  Net.Buffers.Allocate (Packet);

What happens if there is no available buffer? No exception is raised because the network stack is intended to be used in embedded systems where exceptions are not available. You have to check whether the allocation succeeded by using the Is_Null function:

  if Packet.Is_Null then
    null; --  Oops
  end if;

Net.Interfaces

The Net.Interfaces package represents the low level network driver that is capable of sending and receiving packets. The package defines the Ifnet_Type abstract type which defines three important operations (a rough sketch follows the list):

  • Initialize to configure and set up the network interface,
  • Send to send a packet on the network,
  • Receive to wait for a packet and get it from the network.
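
The declaration might look roughly like this (illustrative profiles only; the exact ones in Net.Interfaces may differ, the buffer hand-over follows the ownership rules described below):


--  Sketch of the driver abstraction (illustrative profiles only).
type Ifnet_Type is abstract tagged limited private;

procedure Initialize (Ifnet : in out Ifnet_Type) is abstract;

procedure Send (Ifnet  : in out Ifnet_Type;
                Packet : in out Net.Buffers.Buffer_Type) is abstract;

procedure Receive (Ifnet  : in out Ifnet_Type;
                   Packet : in out Net.Buffers.Buffer_Type) is abstract;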

STM32 Ethernet Driver

The STM32 Ethernet driver implements the three important operations required by the Ifnet_Type abstraction. The Initialize procedure performs the STM32 Ethernet initialization, configures the receive and transmit rings, and sets up interrupt handling. This operation must be called prior to any other.

Sending a packet

The STM32 Ethernet driver has a transmit queue to manage the Ethernet hardware transmit ring and send packets over the network. The transmit queue is a protected object so that concurrent accesses between the application task and the Ethernet interrupt are safe. To transmit a packet, the driver adds the packet to the next available transmit descriptor. The packet buffer ownership is transferred to the transmit ring so that there is no memory copy. Once the packet is queued, the application has lost the buffer ownership. Since the buffer is then owned by the DMA, it is released by the transmit interrupt as soon as the packet is sent (3).

ada-driver-send.png

When the transmit queue is full, the application is blocked until a transmit descriptor becomes available.

Receiving a packet

The STM32 Ethernet driver has a receive queue which is a second protected object, separate from the transmit queue. The receive queue is used by the Ethernet hardware to control the Ethernet receive ring and by the application to pick up received packets. Each receive descriptor is assigned a packet buffer that is owned by default by the DMA. When a packet is available and the application calls the Wait_Packet operation, the packet buffer ownership is transferred to the application to avoid any memory copy. To avoid having a ring descriptor losing its buffer, the application gives a new buffer that is used for the ring descriptor. This is why the application first has to allocate a buffer (1), call the Receive operation (2) to get back the packet in a new buffer, and finally release the buffer when it is done with it (3).

ada-driver-receive.png

Receive loop example

Below is an example of a task that loops to receive Ethernet packets and process them. This is the main receiver task used by the EtherScope monitoring tool.

The Ifnet driver initialization is done in the main EtherScope task. We must not use the driver before it is fully initialized. This is why the task starts by looping until the Ifnet driver is ready.


   task body Controller is
      use type Ada.Real_Time.Time;
   
      Packet  : Net.Buffers.Buffer_Type;
   begin
      while not Ifnet.Is_Ready loop
         delay until Ada.Real_Time.Clock + Ada.Real_Time.Seconds (1);
      end loop;
      Net.Buffers.Allocate (Packet);
      loop
         Ifnet.Receive (Packet);
         EtherScope.Analyzer.Base.Analyze (Packet);
      end loop;
   end Controller;

Then, we allocate a packet buffer and enter the main loop to continuously receive a packet and do some processing. The careful reader will note that there is no buffer release. We don't need one because the Receive driver operation will take our buffer for its ring and give us back a buffer that holds the received packet. We give that buffer back at the next iteration. In this application, the number of buffers needed by the buffer pool is the size of the Ethernet Rx ring plus one.

The complete source is available in etherscope-receiver.adb.

Using this design and implementation, the EtherScope application has shown that it can sustain more than 95 Mb/s of traffic for analysis. Quite nice for a 216 MHz ARM Cortex-M7!

Ethernet Traffic Monitor on a STM32F746

30 September 2016 at 19:19

The application is completely written in Ada 2012 with:

  • The GNAT ARM embedded runtime is the Ada 2012 Ravenscar runtime that provides support for interrupts, tasks, protected objects and other Ada features.
  • The Ada Embedded Network Stack is the small network library that provides network buffer management and an Ethernet driver for the STM32F746 board.
  • The EtherScope application which performs the analysis and displays the information.

Traffic Analyzer

The traffic analyzer inspects the received packet and tries to find interesting information about it. The analyzer is able to recognize several protocols. New protocols may easily be added in the future. The first version supports:

  • Analysis of Ethernet frame to identify the devices that are part of the network with their associated IP address and network utilization.
  • Analysis of IPv4 packet to identify the main IPv4 protocols including ICMP, IGMP, UDP and TCP.
  • Analysis of IGMP with discovery of subscribed multicast groups and monitoring of the associated UDP traffic.
  • Analysis of TCP with the identification of some well known protocols such as http, https, ssh and others.

Each analyzer collects the information and is able to report the number of bytes, the number of packets and the network bandwidth utilization. Some information is also collected in different graph tables so that we can provide a visual graph of the network bandwidth usage.

Network setup to use EtherScope

To use EtherScope, you will connect the STM32F746 board to an Ethernet switch that you insert or have on your network. By default, the switch will isolate the different ports (as opposed to a hub) and unicast traffic is directed only to the concerned port. In other words, EtherScope will only see broadcast and multicast traffic. In order to see the interesting traffic (TCP for example), you will need to configure the switch to do port mirroring. By doing so, you tell the switch to mirror all the traffic of a selected port to the mirror port. You will connect EtherScope to that mirror port and it will see all the mirrored traffic.

net-monitoring.png

EtherScope in action

The following 4-minute video shows EtherScope in action.

EtherScope Internal Design

The EtherScope has several functional layers:

  • The display layer manages the user interaction through the touch panel. It displays the information that was analyzed and manages the refresh of the display with its graphs.
  • The packet analyzer inspects the traffic.
  • The Ethernet network driver configures the Ethernet receive ring, handles interrupts and manages the reception of packets (the transmission part is not used for this project).
  • The Ada Drivers Library provides a number of utility packages from their samples to manage the display and draw text as well as some geometric forms.
  • The GNAT ARM ravenscar runtime provides low level support for the STM32 board configuration, interrupt and task management. It also brings a number of important drivers to control the touch panel, the button, SPI, I2C and other hardware components.

etheroscope-design.png

The EtherScope.Receiver package contains the receiver task that loops to receive packets from the Ethernet driver and analyze them through the analyzer. Because the result of the analysis is shared between two tasks, it is protected by the DB protected object.

The EtherScope.Display package provides several operations to display the analysis in various forms depending on the user selection. Its operations are called repeatedly by the EtherScope main loop. The display operations fetch the analysis from the DB protected object and format the result through the UI.Graphs or text presentations.

Conclusion

You can get the EtherScope sources at https://github.com/stcarrez/etherscope. Feel free to fork EtherScope, hack it and add new protocol analyzers.

The following analyzers could be implemented in the future:

  • A DNS analyzer that shows which DNS requests are made,
  • A DHCP analyzer to track and show IP allocation,
  • A FTP analyzer to reconcile the ftp-data stream to the ftp flow,
  • An IPv6 analyzer.

Simple UDP Echo Server on STM32F746

4 December 2016 at 23:01

Overview

The Echo server listens on UDP port 7 of the Ethernet network and sends the received packets back to the sender: this is the RFC 862 Echo protocol. Our application follows that RFC but also maintains a list of the last 10 messages that have been received. The list is displayed on the STM32 display so that we get visual feedback of the received messages.

The Echo server uses the DHCP client to get an IPv4 address and the default gateway. We will see how that DHCP client is integrated in the application.

The application has two tasks. The main task loops to manage the refresh of the STM32 display and to perform some network housekeeping such as the DHCP client management and ARP table management. The second task is responsible for waiting for Ethernet packets and analyzing them to handle ARP, ICMP and UDP packets.

Through this article, you will see:

  1. How the STM32 board and network stack are initialized,
  2. How the board gets an IPv4 address using DHCP,
  3. How to implement the UDP echo server,
  4. How to build and test the echo server.

Initialization

STM32 Board Initialization

First of all, the STM32 board must be initialized. There is no random generator available in the Ada Ravenscar profile and we need one for the XID generation of the DHCP protocol. The STM32 provides a hardware random generator that we are going to use. The Initialize_RNG procedure must be called once during startup and before any network operation.

We will use the display to list the messages that we have received. The Display instance must be initialized and the layer configured.


with HAL.Bitmap;
with STM32.RNG.Interrupts;
with STM32.Board;
...
   STM32.RNG.Interrupts.Initialize_RNG;
   STM32.Board.Display.Initialize;
   STM32.Board.Display.Initialize_Layer (1, HAL.Bitmap.ARGB_1555);

Network stack initialization

The network stack will need some memory to receive and send network packets. As described in Using the Ada Embedded Network STM32 Ethernet Driver, we allocate the memory by using the SDRAM.Reserve function and the Add_Region procedure to configure the network buffers that will be available.

An instance of the STM32 Ethernet driver must be declared in a package. The instance must be aliased because the network stack will need to get an access to it.


with Interfaces;
with Net.Buffers;
with Net.Interfaces.STM32;
with STM32.SDRAM;
...
   NET_BUFFER_SIZE : constant Interfaces.Unsigned_32 := Net.Buffers.NET_ALLOC_SIZE * 256;
   Ifnet : aliased Net.Interfaces.STM32.STM32_Ifnet;

The Ethernet driver is initialized by calling the Initialize procedure. By doing so, the Ethernet receive and transmit rings are configured and we are ready to receive and transmit packets. On its side the Ethernet driver will also reserve some memory by using the Reserve and Add_Region operations. The buffers allocated will be used for the Ethernet receive ring.


   Net.Buffers.Add_Region (STM32.SDRAM.Reserve (Amount => NET_BUFFER_SIZE), NET_BUFFER_SIZE);
   Ifnet.Initialize;

The Ethernet driver configures the MII transceiver and enables interrupts for the receive and transmit rings.

Getting the IPv4 address with DHCP

At this stage, the network stack is almost ready but it does not have any IPv4 address. We are going to use the DHCP protocol to automatically get an IPv4 address, the default gateway and other network configuration such as the DNS server. The DHCP client uses a UDP socket on port 68 to send and receive DHCP messages. Such a DHCP client is provided by the Net.DHCP package and we need to declare an instance of it. The DHCP client is based on the UDP socket support that we are going to use for the echo server. The DHCP client instance must be declared aliased because the UDP socket layer needs an access to it to propagate the DHCP packets that are received.


with Net.DHCP;
...
   Dhcp : aliased Net.DHCP.Client;

The DHCP client instance must be initialized and the Ethernet driver interface must be passed as a parameter to correctly configure and bind the UDP socket. After the Initialize procedure is called, the DHCP state machine is ready to enter into action. We don't yet have an IPv4 address when the procedure returns.


   Dhcp.Initialize (Ifnet'Access);

The DHCP client uses an asynchronous implementation to maintain the client state according to RFC 2131. For this it has two important operations that are called by tasks in different contexts. First, the Process procedure is responsible for sending requests to the DHCP server and for managing the timeouts used for the retransmissions, renewal and lease expiration. The Process procedure sends the DHCPDISCOVER and DHCPREQUEST messages. On the other hand, the Receive procedure is called by the network stack to handle the DHCP packets sent by the DHCP server. The Receive procedure gets the DHCPOFFER and DHCPACK messages.

Getting an IPv4 address with the DHCP protocol can take some time and must be repeated continuously due to the DHCP lease expiration. This is why the DHCP client must not be stopped and should continue forever.

Refer to the DHCP documentation to learn more about this process.

UDP Echo Server

Logger protected type

The echo server will record the messages that are received. A message is inserted in the list by the receive task and read by the main task. We use an Ada protected type to protect the list from concurrent accesses.

Each message is represented by the Message record which has an identifier that is unique and incremented each time a message is received. To avoid dynamic memory allocation, the list of messages is fixed in size and is represented by the Message_List array. The list itself is managed by the Logger protected type.


type Message is record
   Id      : Natural := 0;
   Content : String (1 .. 80) := (others => ' ');
end record;
type Message_List is array (1 .. 10) of Message;

protected type Logger is
   procedure Echo (Content : in Message);
   function Get return Message_List;
private
   Id   : Natural := 0;
   List : Message_List;
end Logger;

The Logger protected type provides the Echo procedure to insert a message to the list and the Get function to retrieve the list of messages.

Server Declaration

The UDP Echo Server uses the UDP socket support provided by the Net.Sockets.UDP package. The UDP package defines the Socket abstract type which represents the UDP endpoint. The Socket type is abstract because it defines the Receive procedure that must be implemented. The Receive procedure will be called by the network stack when a UDP packet for the socket is received.

The declaration of our echo server is the following:


with Net.Buffers;
with Net.Sockets;
...
   type Echo_Server is new Net.Sockets.UDP.Socket with record
      Count    : Natural := 0;
      Messages : Logger;
   end record;

It holds a message counter as well as the messages in the Logger protected type.

The echo server must implement the Receive procedure:


overriding
procedure Receive (Endpoint : in out Echo_Server;
                   From     : in Net.Sockets.Sockaddr_In;
                   Packet   : in out Net.Buffers.Buffer_Type);

The network stack will call the Receive procedure each time a UDP packet for the socket is received. The From parameter will contain the IPv4 address and UDP port of the client that sent the UDP packet. The Packet parameter contains the received UDP packet.

Server Implementation

Implementing the server is very easy because we only have to implement the Receive procedure (we will leave the Logger protected type implementation as an exercise to the reader; a possible sketch is shown below).
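
A possible sketch of the Logger protected body, assuming the most recent message is kept first and the oldest one is dropped:


protected body Logger is

   --  Insert the message at the head of the list, dropping the
   --  oldest entry, and assign it a unique identifier.
   procedure Echo (Content : in Message) is
   begin
      Id := Id + 1;
      List (List'First + 1 .. List'Last)
         := List (List'First .. List'Last - 1);
      List (List'First) := Content;
      List (List'First).Id := Id;
   end Echo;

   --  Return a snapshot of the message list.
   function Get return Message_List is
   begin
      return List;
   end Get;

end Logger;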

First we use the Get_Data_Size function to get the size of our packet. The function is able to return different sizes to take into account one or several protocol headers. We want to know the size of our UDP packet, excluding the UDP header, so we tell Get_Data_Size that we want the UDP_PACKET size. This size represents the size of the echo message sent by the client.


   Msg    : Message;
   Size   : constant Net.Uint16 := Packet.Get_Data_Size (Net.Buffers.UDP_PACKET);
   Len    : constant Natural
        := (if Size > Msg.Content'Length then Msg.Content'Length else Natural (Size));

Having the size, we truncate it so that we get a string that fits in our message. We then use the Get_String procedure to retrieve the echo message into a string. This procedure gets from the packet a number of characters that corresponds to the length of the string passed as parameter.


   Packet.Get_String (Msg.Content (1 .. Len));

The Buffer_Type provides other Get operations to extract data from the packet. It maintains a position in the buffer that tells the Get operations where to read in the packet, and each Get updates the position according to what was actually read. There are also several Put operations intended to write and build the packet before sending it. We are not going to use them because the echo server has to return the original packet as is. Instead, we have to indicate the size of the packet that we are going to send. This is done by the Set_Data_Size procedure:


   Packet.Set_Data_Size (Size);

Here we give the original size so that we return the full packet.

Now we can use the Send procedure to send the packet back to the client. We use the client IPv4 address and UDP port represented by From as the destination address. The Send procedure returns a status that tells whether the packet was successfully sent or queued.


Status : Net.Error_Code;
...
   Endpoint.Send (To => From, Packet => Packet, Status => Status);
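
Putting these fragments together, the complete Receive procedure looks roughly as follows (the Count update and the Messages.Echo call are assumptions based on the Echo_Server record shown earlier):


overriding
procedure Receive (Endpoint : in out Echo_Server;
                   From     : in Net.Sockets.Sockaddr_In;
                   Packet   : in out Net.Buffers.Buffer_Type) is
   Msg    : Message;
   Size   : constant Net.Uint16 := Packet.Get_Data_Size (Net.Buffers.UDP_PACKET);
   Len    : constant Natural
        := (if Size > Msg.Content'Length then Msg.Content'Length else Natural (Size));
   Status : Net.Error_Code;
begin
   --  Extract the echo message and record it (assumed bookkeeping).
   Packet.Get_String (Msg.Content (1 .. Len));
   Endpoint.Count := Endpoint.Count + 1;
   Endpoint.Messages.Echo (Msg);

   --  Send the original packet back to the sender.
   Packet.Set_Data_Size (Size);
   Endpoint.Send (To => From, Packet => Packet, Status => Status);
end Receive;
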
Server Initialization

Now that the Echo_Server type is implemented, we have to create a global instance of it and bind it to UDP port 7 which corresponds to the UDP echo protocol. The port number must be given in network byte order (as in the Unix socket API), which is why it is converted using the To_Network function. We don't know our IPv4 address, and by using 0 we tell the UDP stack to use the IPv4 address that is configured on the Ethernet interface.


Server : aliased Echo_Server;
...
   Server.Bind (Ifnet'Access, (Port => Net.Headers.To_Network (7),
                               Addr => (others => 0)));

Main loop and receive task

As explained in the overview, we need several tasks to handle the display, network housekeeping and reception of Ethernet packets. To keep it simple, the display, ARP table management and DHCP client management will be handled by the main task. The reception of Ethernet packets will be handled by a second task. It is possible to use a specific task for the ARP management and another one for the DHCP but there is no real benefit in doing so for our simple echo server.

The main loop repeatedly calls the ARP Timeout procedure and the DHCP Process procedure. The Process procedure returns a delay that we are supposed to wait for, but we are not going to use it for this example. The main loop simply looks as follows:


Dhcp_Timeout : Ada.Real_Time.Time_Span;
...
   loop
      Net.Protos.Arp.Timeout (Ifnet);
      Dhcp.Process (Dhcp_Timeout);
      ...
      delay until Ada.Real_Time.Clock + Ada.Real_Time.Milliseconds (500);
   end loop;

The receive task was described in the previous article Using the Ada Embedded Network STM32 Ethernet Driver. The task is declared at package level as follows:


   task Controller with
     Storage_Size => (16 * 1024),
     Priority => System.Default_Priority;

And the implementation loops to receive packets from the Ethernet driver and calls either the ARP Receive procedure, the ICMP Receive procedure or the UDP Input procedure. The complete implementation can be found in the receive.adb file.

Building and testing the server

Building the UDP echo server and running it on the STM32 board is a three-step process:

  1. First, you will use the arm-eabi-gnatmake command with the echo GNAT project. After successful build, you will get the echo ELF binary image in obj/stm32f746disco/echo.
  2. Then, the ELF image must be converted to binary by extracting the ELF sections that must be put on the flash. This is done by running the arm-eabi-objcopy command.
  3. Finally, the binary image produced by arm-eabi-objcopy must be put on the flash using the st-util utility. You may have to press the reset button on the board so that st-util is able to take control of the board; then release the reset button to let st-util flash the image.

Atlas 1.0.0 the Ada Web Application demonstrator available as Docker image

18 March 2017 at 17:27

The application features:

  • A small blogging system,
  • A question and answer area,
  • A complete wiki system,
  • A document and image storage space,
  • Authentication with Google+ or Facebook.

atlas-mashup.png

Atlas is now available as a Docker image so that you can easily try it.

What is Docker?

Docker is a container platform that allows running applications on the host but within an isolated environment. The container has its own libraries, its own network, its own root file system but it shares the same running Linux kernel as the host. Docker is based on Linux containers which provide kernel namespaces and cgroups. Docker provides a lot of abstractions that simplify the creation, startup and management of containers.

To learn more about Docker, you may have a look at the Get started with Docker documentation.

Using the Atlas Docker image

The Atlas Docker image is available at the Docker Hub cloud-based registry service. This registry allows you to get and synchronize your local Docker images easily by pulling them from the cloud.

Assuming that you have installed Docker, you can pull the Atlas Docker image by using the following command:

  sudo docker pull ciceron/atlas

Beware that the Docker image is a 64-bit image, so it runs only on Linux x86_64 hosts. Once you have obtained the image, you can create the container and start it as follows:

  sudo docker run --name atlas -p 8080:8080 ciceron/atlas

and then point your browser to http://localhost:8080/atlas/index.html. The -p 8080:8080 option tells Docker to expose the TCP/IP port 8080 from the container to the host so that you can access the web application.

The application will first display an installation page that allows you to choose the database, configure the mail server and the Google and Facebook connections (most of the default values should be correct).

To stop and cleanup the docker container, you can use the following commands:

  sudo docker stop atlas
  sudo docker rm atlas

Learning more about Ada Web Application

You may read the following tutorials to learn more about the technical details of setting up and building an Ada Web Application:

Rest API Benchmark comparison between Ada and Java

21 March 2017 at 22:55

The goal is to benchmark the following servers and get an idea of how they compare with each other:

The first three are implemented in Ada and the last one in Java.

REST Server Implementation

The implementation is different for each server but they all implement the same REST GET operation accessible from the /api base URL. They return the same JSON content:

{"greeting":"Hello World!"}

Below is an extract of the server implementation for each server.

AWS Rest API Server

function Get_Api (Request : in AWS.Status.Data) return AWS.Response.Data is
begin
   return AWS.Response.Build ("application/json", "{""greeting"":""Hello World!""}");
end Get_Api;

ASF Rest API Server

procedure Get (Req    : in out ASF.Rest.Request'Class;
               Reply  : in out ASF.Rest.Response'Class;
               Stream : in out ASF.Rest.Output_Stream'Class) is
begin
   Stream.Start_Document;
   Stream.Write_Entity ("greeting", "Hello World!");
   Stream.End_Document;
end Get;

EWS Rest API Server

function Get (Request : EWS.HTTP.Request_P) return EWS.Dynamic.Dynamic_Response'Class is
   Result : EWS.Dynamic.Dynamic_Response (Request);
begin
   EWS.Dynamic.Set_Content_Type (Result, To => EWS.Types.JSON);
   EWS.Dynamic.Set_Content (Result, "{""greeting"":""Hello World!""}");
   return Result;
end Get;

Java Rest API Server

@Produces(APPLICATION_JSON_UTF8_VALUE)
@Path("/api")
@Component
public class ApiResource {
  public static final String RESPONSE = "{\"greeting\":\"Hello World!\"}";
  
  @GET
  public Response test() {
      return ok(RESPONSE).build();
  }
}

Benchmark Strategy and Results

The Ada and Java servers are started on the same host (one at a time): a Linux Ubuntu 14.04 64-bit machine powered by an Intel i7-3770S CPU at 3.10 GHz with 8 cores. The benchmark is made by using Siege executed on a second computer running Linux Ubuntu 15.04 64-bit, powered by an Intel i7-4720HQ CPU at 2.60 GHz with 8 cores. Client and server hosts are connected through a Gigabit Ethernet link.

Siege makes intensive use of network connections, which results in exhaustion of TCP/IP ports when connecting to the server. This is due to the TCP TIME_WAIT state that prevents a TCP/IP port from being re-used for future connections. To avoid such exhaustion, the network stack is tuned on both the server and the client hosts with the following sysctl commands:

sudo sysctl -w net.ipv4.tcp_tw_recycle=1
sudo sysctl -w net.ipv4.tcp_tw_reuse=1

The benchmark tests are executed by running the run-load-test.sh script and then making GNUplot graphs using the plot-perf.gpi script. The benchmark gives the number of REST requests made per second for different levels of concurrency.

  • The Embedded Web Server targets embedded platforms and uses only one task to serve requests. Despite this simple configuration, it gets honorable results as it reaches 8000 requests per second.
  • Ada Server Faces provides an Ada implementation of Java Server Faces. It uses the Ada Web Server. The benchmark shows a small overhead (around 4%).
  • The Ada Web Server is the fastest server in this configuration. As for Ada Server Faces, it is configured with only 8 tasks that serve requests. Increasing the number of tasks does not bring better performance.
  • The Java Grizzly server is the fastest Java server reported by Arcadius's benchmark. It uses 62 threads. It appears to serve 7% fewer requests than the Ada Web Server.

ada-rest-api-benchmark.png

On the memory side, the process Resident Set Size (RSS) is measured once the benchmark test ends and graphed below. The Java Grizzly server uses around 580 MB, followed by Ada Server Faces which uses 5.6 MB, Ada Web Server with 3.6 MB and EWS with only 1 MB.

ada-rest-api-memory.png

Conclusion and References

The Ada Web Server has performance comparable to the Java Grizzly server (it is even a little bit faster). But as far as memory is concerned, Ada has a serious advantage since it cuts the memory size by a factor of 100. Ada has other advantages that make it an alternative choice for web development (safety, security, realtime capabilities, ...).

Sources of the benchmarks are available in the following two GitHub repositories:

Using the Gnome and KDE Secret Service API in Ada

25 June 2017 at 17:00

libsecret is the C library that gives access to the Secret Service API. The Ada Libsecret library is an Ada binding for this C library. The Ada binding does not expose all of the functionality implemented by the C library, but it implements the most useful operations, allowing applications to store, retrieve and delete secret data.

Understanding the Secret Service API

At first glance, the Secret Service API is not easy to use. Each secret is stored together with lookup attributes and a label. Lookup attributes are formed of key/value pairs. The label is the user friendly name that the desktop key manager will use to display some information to the end user.

ada-libsecret-dbus.png

The Secret Service API is implemented by a keyring manager such as gnome-keyring-daemon or kwalletd. This is a daemon that is started when a user opens a desktop session. It manages the application secrets and protects their access. The secret database can be locked, in which case access to the secrets is forbidden. Unlocking is possible but requires authentication by the user (in most cases a dialog popup window opens and asks to unlock the keyring).

When a client application wishes to retrieve one of its secrets, it builds the lookup attributes that correspond to the secret to retrieve. The lookup attributes are not encrypted and they are not part of the secret. The client application uses the D-Bus IPC mechanism to ask the keyring manager for the secret. The keyring manager takes care of unlocking the database by asking the user to confirm the access. The keyring manager then looks in its database for the secret associated with the lookup attributes.

Note that the label cannot be used as a key to retrieve the secret since the same label can be associated with different lookup attributes.

Using the Ada Secret Service API

Setting up the project

After building and installing the Ada Libsecret library you will add the following line to your GNAT project file:

with "secret";

This definition will give you access to the Secret package and will handle the build and link support to use the libsecret C library.

Setting the lookup attributes

Attributes are defined by the Secret.Attributes package which provides the Map type that represents the lookup attributes. First, you will add the following with clause:

with Secret.Attributes;

to make available the operations and types provided by the package. Then, you will declare the attributes instance by using:

   List : Secret.Attributes.Map;

At this stage, the lookup attributes are empty and you can check that by using the Is_Null function that will return True in that case. You must now add at least one key/value pair in the attributes by using the Insert procedure:

   List.Insert ("secret-tool", "key-password");
   List.Insert ("user", "joe");

Applications are free to choose the attributes they use. The attributes have to be unique so that the application can identify and retrieve the secret. For example, the svn command uses two attributes to store the password used to authenticate to svn remote repositories: domain and user. The domain represents the server URL and the user represents the user name to use for the connection. By using these two attributes, it is possible to store several passwords for different svn accounts.

Storing a secret

To store a secret, we will use the operations and types from the Secret.Services and Secret.Values packages. The following definitions:

with Secret.Services;
with Secret.Values;

will bring such definitions to the program. The secret service is represented by the Service_Type type and we will declare an instance of it as follows:

   Service : Secret.Services.Service_Type;

This service instance is a proxy to the Secret Service API and it communicates with the gnome-keyring-daemon by using the D-Bus protocol.

The secret value itself is represented by the Secret_Type and we can define and create such a secret by using the Create function as follows:

   Value : Secret.Values.Secret_Type := Secret.Values.Create ("my-secret");

Storing the secret is done by the Store operation which associates the secret value with the lookup attributes and a label. As explained before, the lookup attributes represent the unique key to identify the secret. The label is used to give a user friendly name to the association. This label is used by the desktop password and key manager to give information to the user.

   Service.Store (List, "Secret tool password", Value);

Retrieving a secret

Retrieving a secret follows the same steps but involves the Lookup function, which returns the secret value from the lookup attributes. Care must be taken to provide the same lookup attributes that were used during the store phase.

   Value : Secret.Values.Secret_Type := Service.Lookup (List);

The secret value should be checked by using the Is_Null function to verify that the value was found. The secret value itself is accessed by using the Get_Value function.

   if not Value.Is_Null then
      Ada.Text_IO.Put_Line (Value.Get_Value);
   end if;
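
Putting all the pieces together, a minimal complete program that stores a secret and reads it back could look like this (Store_And_Lookup is a hypothetical name chosen for this example):

with Ada.Text_IO;
with Secret.Attributes;
with Secret.Services;
with Secret.Values;

procedure Store_And_Lookup is
   Service : Secret.Services.Service_Type;
   List    : Secret.Attributes.Map;
   Value   : Secret.Values.Secret_Type := Secret.Values.Create ("my-secret");
begin
   --  The lookup attributes form the unique key of the secret.
   List.Insert ("secret-tool", "key-password");
   List.Insert ("user", "joe");

   --  Store the secret with a user friendly label.
   Service.Store (List, "Secret tool password", Value);

   --  Retrieve it back using the same lookup attributes.
   declare
      Found : constant Secret.Values.Secret_Type := Service.Lookup (List);
   begin
      if not Found.Is_Null then
         Ada.Text_IO.Put_Line (Found.Get_Value);
      end if;
   end;
end Store_And_Lookup;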

Conclusion

By using the Ada Secret Service API, Ada applications can now securely store private information and protect resources for their users. The API is fairly simple and can be used to store OAuth access tokens, database passwords, and more...

Read the Ada Libsecret Documentation to learn more about the API.

Generating a REST Ada client with OpenAPI and Swagger Codegen

8 October 2017 at 18:32

swagger-ada-generator.png

Writing an OpenAPI document

The OpenAPI document is either a JSON or a YAML file that describes the REST API operations. The document can be used both for the documentation of the API and for code generation in several programming languages. We will see briefly through the Petstore example how the OpenAPI document is organized. The full OpenAPI document is available in petstore.yaml.

General description

A first part of the OpenAPI document provides a general description of the API. This includes the general description, the terms of service, the license and some contact information.

swagger: '2.0'
info:
  description: 'This is a sample server Petstore server.  You can find out more about Swagger at [http://swagger.io](http://swagger.io) or on [irc.freenode.net, #swagger](http://swagger.io/irc/).  For this sample, you can use the api key `special-key` to test the authorization filters.'
  version: 1.0.0
  title: Swagger Petstore
  termsOfService: 'http://swagger.io/terms/'
  contact:
    email: [email protected]
  license:
    name: Apache 2.0
    url: 'http://www.apache.org/licenses/LICENSE-2.0.html'
host: petstore.swagger.io
basePath: /v2

Type description

The OpenAPI document can also describe types which are used by the REST operations. These types provide a description of how the data is organized and passed through the API operations.

It is possible to describe almost all possible types, from simple properties and groups of properties up to complex types including arrays. For example, a Pet type is made of several properties, each of them having a name, a type and other information that describes how the type is serialized.

definitions:
  Pet:
    title: a Pet
    description: A pet for sale in the pet store
    type: object
    required:
      - name
      - photoUrls
    properties:
      id:
        type: integer
        format: int64
      category:
        $ref: '#/definitions/Category'
      name:
        type: string
        example: doggie
      photoUrls:
        type: array
        xml:
          name: photoUrl
          wrapped: true
        items:
          type: string
      tags:
        type: array
        xml:
          name: tag
          wrapped: true
        items:
          $ref: '#/definitions/Tag'
      status:
        type: string
        description: pet status in the store
        enum:
          - available
          - pending
          - sold
    xml:
      name: Pet

In this example, the Pet type contains 6 properties (id, category, name, photoUrls, tags, status) and refers to two other types Category and Tag.

Operation description

Operations are introduced by the paths object in the OpenAPI document. This section describes the possible URL paths and their associated operations. Some operations receive their parameters within the path and this is represented by the {name} notation.

The operation description indicates the HTTP method that is used: get, post, put or delete.

The following definition describes the getPetById operation.

paths:
  '/pet/{petId}':
    get:
      tags:
        - pet
      summary: Find pet by ID
      description: Returns a single pet
      operationId: getPetById
      produces:
        - application/xml
        - application/json
      parameters:
        - name: petId
          in: path
          description: ID of pet to return
          required: true
          type: integer
          format: int64
      responses:
        '200':
          description: successful operation
          schema:
            $ref: '#/definitions/Pet'
        '400':
          description: Invalid ID supplied
        '404':
          description: Pet not found
      security:
        - api_key: []

The summary and description are used for documentation purposes. The operationId is used by code generators to provide an operation name that a target programming language can use. The produces section indicates the media types that are supported by the operation and which are generated for the response. The parameters section represents all the operation parameters. Some parameters can be extracted from the path (which is the case for the petId parameter) and others can be passed as query parameters.

The responses section describes the possible responses for the operation as well as the format used by the response. In this example, the operation returns an object described by the Pet type.

Using Swagger Codegen

The documentation and the Ada client are generated from the OpenAPI document by using the Swagger Codegen generator. The generator is a Java program packaged within a jar file. It must be run with a Java 7 or Java 8 runtime.

Generating the documentation

The HTML documentation is generated from the OpenAPI document by using the following command:

 java -jar swagger-codegen-cli.jar generate -l html -i petstore.yaml -o doc

Generating the Ada client

To generate the Ada client, you will use the -l ada option to use the Ada code generator. The OpenAPI document is passed with the -i option.

 java -jar swagger-codegen-cli.jar generate -l ada -i petstore.yaml -o client \
       -DprojectName=Petstore --model-package Samples.Petstore

The Ada generator uses two options to control the generation. The -DprojectName=Petstore option controls the name of the generated GNAT project and the --model-package option controls the name of the Ada package for the generated code.

The Ada generator will create the following Ada packages:

  • Samples.Petstore.Models is the package that contains all the types described in the OpenAPI document. Each OpenAPI type is represented by an Ada record and is completed by an instantiation of the Ada.Containers.Vectors package for the representation of arrays of the given type. The Models package also provides Serialize and Deserialize procedures for the serialization and deserialization of the data over JSON or XML streams.

  • Samples.Petstore.Clients is the package that declares the Client_Type tagged record which provides all the operations for the OpenAPI document.

For the Pet type described previously, the Ada generator produces the following code extract:

package Samples.Petstore.Models is
   ...
   type Pet_Type is
     record
       Id : Swagger.Long;
       Category : Samples.Petstore.Models.Category_Type;
       Name : Swagger.UString;
       Photo_Urls : Swagger.UString_Vectors.Vector;
       Tags : Samples.Petstore.Models.Tag_Type_Vectors.Vector;
       Status : Swagger.UString;
     end record;
     ...
end Samples.Petstore.Models;

and for the operation it generates the following code:

package Samples.Petstore.Clients is
   ...
   type Client_Type is new Swagger.Clients.Client_Type with null record;
   procedure Get_Pet_By_Id
      (Client : in out Client_Type;
       Pet_Id : in Swagger.Long;
       Result : out Samples.Petstore.Models.Pet_Type);
   ...
end Samples.Petstore.Clients;

Using the REST Ada client

Initialization

The HTTP/REST support is provided by Ada Util and encapsulated by Swagger Ada. The Ada Util library also takes care of the JSON and XML serialization and deserialization. If you want to use Curl, you should initialize with the following:

with Util.Http.Clients.Curl;
...
   Util.Http.Clients.Curl.Register;

But if you want to use AWS, you will initialize with:

with Util.Http.Clients.Web;
...
   Util.Http.Clients.Web.Register;

After the initialization is done, you will declare a client instance to access the API operations:

with Samples.Petstore.Clients;
...
   C : Samples.Petstore.Clients.Client_Type;

And you should initialize the server base URL you want to connect to. To use the live Swagger Petstore service you can set the server base URL as follows:

  C.Set_Server ("http://petstore.swagger.io/v2");

At this stage, you can use the generated operation by calling operations on the client.

Calling a REST operation

Let's retrieve some pet information by calling the Get_Pet_By_Id operation described previously. This operation needs an integer as input parameter and returns a Pet_Type object that contains all the pet information. You will first declare the pet instance as follows:

with Samples.Petstore.Models;
...
  Pet  : Samples.Petstore.Models.Pet_Type;

And then call the Get_Pet_By_Id operation:

  C.Get_Pet_By_Id (768, Pet);

At this stage, you can access information from the Pet instance:

with Ada.Text_IO;
...
  Ada.Text_IO.Put_Line ("Id      : " & Swagger.Long'Image (Pet.Id));
  Ada.Text_IO.Put_Line ("Name    : " & Swagger.To_String (Pet.Name));
  Ada.Text_IO.Put_Line ("Status  : " & Swagger.To_String (Pet.Status));

The Swagger Ada Petstore sample illustrates other uses of the generated operations: it can list the inventory, list the pets with a given status, add a pet and so on.
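
For instance, listing the pets with a given status could look like the following sketch. Note that the Find_Pets_By_Status name, its parameter profile and the To_UString conversion are assumptions inferred from the generator's naming rules (operationId findPetsByStatus); they are not taken from the generated sources.

with Samples.Petstore.Models;
with Swagger;
with Ada.Text_IO;
...
  Status : Swagger.UString_Vectors.Vector;
  Pets   : Samples.Petstore.Models.Pet_Type_Vectors.Vector;
...
  --  Hypothetical: operation name and profile inferred from the
  --  findPetsByStatus operationId, not taken from the generated sources.
  Status.Append (Swagger.To_UString ("available"));
  C.Find_Pets_By_Status (Status, Pets);
  Ada.Text_IO.Put_Line ("Found" & Natural'Image (Natural (Pets.Length)) & " pets");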

Conclusion and references

The OpenAPI Specification provides a standard way to describe REST operations. Swagger Codegen is the generator to use to simplify the implementation of REST clients in many programming languages and to generate the documentation of the API. The Ada code generator only supports the client side for now; the server code generation is in progress.

The sources of the Petstore samples are available in the Swagger Ada repository: https://github.com/stcarrez/swagger-ada

The APIs.guru directory lists more than 550 API descriptions from various providers such as Amazon, Google, Microsoft and many other online services. They are now available to the Ada community!

Writing an Ada programmer's guide with Dynamo, Pandoc and Read the Docs

18 February 2018 at 09:17

user-guide-generation.png

Writing user's guide in Ada specification

Since I often forget to update some external documentation, I've found it convenient to have it close to the implementation within the code. I'm not speaking about a reference documentation that explains every type, function or procedure provided by an Ada package. I'm talking about a programmer's guide.

The solution I've used is to write a small programmer's guide within some Ada package specification. The programmer's guide is written within comments before the package declaration. It is then extracted and merged with other package documentation to create the final programmer's guide. One benefit of having such a small programmer's guide in the package specification is that it also brings some documentation to developers: the user's guide is close to the specification.

The documentation is written using Markdown syntax and put before the package declaration. The extraction tool recognizes a number of formatting patterns and commands that help in merging the different pieces in one or several files.

Section headers

First, the small programmer's guide must start with a section header introduced by at least one = (equal) sign. The programmer's guide documentation ends with the start of the Ada package declaration. Unlike AdaBrowse and AdaDoc, the package specification is not parsed and not used.

--  = Streams =
--  The `Util.Streams` package provides several types and operations to allow the
--  composition of input and output streams.  Input streams can be chained together so that
--  ...
...
package Util.Streams is ...

When an Ada package specification includes such a comment, a documentation file is created. The generated file name is derived from the package name found after the package keyword. Each . (dot) is replaced by an _ (underscore) and the .md extension is added. In this example, the generated file is Util_Streams.md.

Merging with @include <file>

The @include command indicates that the small programmer's guide from the given file must be included. For example, the Streams support of Ada Utility Library is provided by several packages, each being a child of the Util.Streams package. The different pieces of the programmer's guide are merged together by using the following comments:

--  @include util-streams-buffered.ads
--  @include util-streams-texts.ads
--  @include util-streams-files.ads
--  @include util-streams-pipes.ads
--  @include util-streams-sockets.ads
--  @include util-streams-raw.ads
--  @include util-streams-buffered-encoders.ads

Autolink

To avoid having links in the Ada comments, an auto-link feature is used so that some words or short sentences can be turned into links automatically. The auto-link feature works by using a simple text file that indicates the words or sequences of words that should be changed to a link. The text file contains one link definition per line, composed of a set of words that must match and the associated link.

Java Bean                  https://en.wikipedia.org/wiki/JavaBean
Java Log4j                 https://logging.apache.org/log4j/2.x/
Log4cxx                    https://logging.apache.org/log4cxx/latest_stable/index.html
RFC7231                    https://tools.ietf.org/html/rfc7231

The auto-link feature is very basic. To match a link, a sequence of several words must be present on the same comment line. For example, the following documentation extract:

--  = Logging =
--  The `Util.Log` package and children provide a simple logging framework inspired
--  from the Java Log4j library.  It is intended to provide a subset of logging features

will generate the following Markdown extract with a link for the "Java Log4j" word sequence:

# Logging
The `Util.Log` package and children provide a simple logging framework inspired
from the [Java Log4j](https://logging.apache.org/log4j/2.x/) library...

Code extract

Having code examples is important for a programmer's guide and I've made the choice to have them as part of the comment. The extraction tool recognizes them by assuming that they are introduced by an empty line and indented by at least 4 spaces. The code extractor will use the Markdown fenced code block (```) to enclose them.

--  is free but using the full package name is helpful to control precisely the logs.
--
--    with Util.Log.Loggers;
--    package body X.Y is
--      Log : constant Util.Log.Loggers := Util.Log.Loggers.Create ("X.Y");
--    end X.Y;
--
--  == Logger Messages ==

Extracting documentation from Ada specification

Once the documentation is written, the Dynamo command is used to extract, merge and generate the documentation. The build-doc sub-command scans the project files, reads the Ada specifications and some project XML files, and generates the documentation in Markdown format. The -pandoc option tells the documentation generator to write the documentation for a book-oriented organization formatted with Pandoc. The files are generated in the docs directory.

dynamo build-doc -pandoc docs

Putting it all together

Pandoc is a versatile document converter: it can assemble all the generated documentation with some other files and produce a final PDF document. Several files are not generated by Dynamo; they are written either as LaTeX (pagebreak.tex) or Markdown, for example the cover page, the introduction and installation chapters.

By using a custom LaTeX template (eisvogel.tex) and several configuration options, a nice PDF is generated.

pandoc -f markdown -o util-book.pdf --listings --number-sections \
  --toc --template=./docs/eisvogel.tex docs/title.md docs/pagebreak.tex \
  docs/intro.md docs/pagebreak.tex docs/Installation.md docs/pagebreak.tex \
  docs/Util_Log.md docs/pagebreak.tex docs/Util_Properties.md docs/pagebreak.tex \
  docs/Util_Dates.md docs/pagebreak.tex docs/Util_Beans.md docs/pagebreak.tex \
  docs/Util_Http.md docs/pagebreak.tex docs/Util_Streams.md docs/pagebreak.tex \
  docs/Util_Encoders.md docs/pagebreak.tex docs/Util_Events_Timers.md \
  docs/pagebreak.tex docs/Util_Measures.md

Here is the final PDF file: Ada Utility Library Programmer's Guide

Publishing the programmer's guide

Read the Docs offers free documentation hosting with a complete build and publication environment. It can connect to a GitHub repository and be notified each time changes are pushed, so that the documentation is rebuilt automatically.

The documentation is produced by MkDocs and the mkdocs.yml configuration file describes how the documentation is built, organized and presented:

site_name: Ada Utility Library
docs_dir: doc
pages:
  - Introduction: index.md
  - Installation: Installation.md
  - Log: Util_Log.md
  - Properties: Util_Properties.md
  - Dates: Util_Dates.md
  - Ada Beans: Util_Beans.md
  - HTTP: Util_Http.md
  - Streams: Util_Streams.md
  - Encoders: Util_Encoders.md
  - Measures: Util_Measures.md
  - Timers: Util_Events_Timers.md
theme: readthedocs

Here is the final programmer's guide: Ada Utility Library Users' Guide.

Conclusion

I've found it quite useful to write the programmer's guide within the Ada specification. Doing so also helps during the design of a new package because it forces you to think a little bit about how the package is going to be used. There are some drawbacks to this model:

  • Each time the documentation must be fixed, the Ada specification file is modified,
  • The layout of a programmer's guide does not always follow a package organization,
  • Merging the documentation from different parts could bring some complexity when you have to find out where some documentation actually comes from.

New releases of Ada Web Application et al.

15 July 2018 at 21:17

Ada Utility Library, Version 1.9.0

  • Improvement and fixes of the JSON, XML, CSV serialization
  • Improvement of properties to also read and describe INI files
  • Add encoders to support SHA256 and HMAC-SHA256
  • Added a command package for implementation of command line tools
  • Added event timer list management
  • Fix on the HTTP curl support
  • Implementation of x-www-form-urlencoded serialization
  • Added localized date parsing

Download: http://download.vacs.fr/ada-util/ada-util-1.9.0.tar.gz

GitHub: https://github.com/stcarrez/ada-util

Ada EL, Version 1.6.1

  • Fix minor compilation warnings and build with Ada 2012

Download: http://download.vacs.fr/ada-el/ada-el-1.6.1.tar.gz

GitHub: https://github.com/stcarrez/ada-el

Ada Security, Version 1.2.0

  • OAuth 2.0 server implementation (RFC 6749)
  • Improvement of the role based security policy

Download: http://download.vacs.fr/ada-security/ada-security-1.2.0.tar.gz

GitHub: https://github.com/stcarrez/ada-security

Ada Database Objects, Version 1.2.0

  • Improvement of SQLite connection management
  • Fix logs to avoid having password in clear text in logs
  • Fix lazy object loading
  • Fix link issue on Fedora

Download: http://download.vacs.fr/ada-ado/ada-ado-1.2.0.tar.gz

GitHub: https://github.com/stcarrez/ada-ado

Ada Server Faces, Version 1.2.0

  • New EL function util:translate for translation with a resource bundle
  • New REST servlet with support for server API implementation
  • Provide pre-defined beans in ASF contexts: requestScope
  • Add support for servlet requests to retrieve the body content as a stream
  • Improvement of navigation rules to allow setting the return status
  • Moved the servlet support in a separate project: ada-servlet
  • Integrate jQuery datetime picker

Download: http://download.vacs.fr/ada-asf/ada-asf-1.2.0.tar.gz

GitHub: https://github.com/stcarrez/ada-asf

Ada Servlet, Version 1.2.0

  • New REST servlet with support for server API implementation
  • Add support for servlet requests to retrieve the body content as a stream
  • Moved the Ada Servlet implementation outside of Ada Server Faces in a separate project

Download: http://download.vacs.fr/ada-servlet/ada-servlet-1.2.0.tar.gz

GitHub: https://github.com/stcarrez/ada-servlet

Ada Wiki Engine, Version 1.1.0

  • New condition plugins for the conditional inclusion of wiki content
  • Added support for NOTOC by the TOC filter

Download: http://download.vacs.fr/ada-wiki/ada-wiki-1.1.0.tar.gz

GitHub: https://github.com/stcarrez/ada-wiki

Swagger Ada, Version 0.1.0

  • Initial implementation of Swagger OpenAPI to easily implement REST clients and servers

Download: http://download.vacs.fr/swagger-ada/swagger-ada-0.1.0.tar.gz

GitHub: https://github.com/stcarrez/swagger-ada

Dynamo, Version 0.9.0

  • New type ASF.Parts.Part in the Dynamo UML model
  • Add support to generate ASF Upload method in UML Ada beans
  • Add support to generate AWA event actions
  • Generate JSON/XML serialization code for UML classes
  • Update the 'create-database' command to support SQLite
  • Fix model generation for multiple primary keys per table
  • Add support for <exclude> patterns in the dist command
  • Add support for YAML database model files

Download: http://download.vacs.fr/dynamo/dynamo-0.9.0.tar.gz

GitHub: https://github.com/stcarrez/dynamo

AWA, Version 1.1.0

  • New trumbowyg plugin for WYSIWYG Javascript editor
  • New setup plugin for AWA application setup
  • Moved the samples to a separate project
  • New wiki plugin to write online wiki-based documentation
  • New flotcharts plugin to integrate jQuery Flot and display various graphs
  • Improvement of configure, build and installation with gprinstall when available
  • New counter plugin to track wiki page and blog post reads
  • Moved the wiki engine to Ada Wiki library
  • Support to display images in blog post
  • New image selector for wiki and blog post editors
  • Add a programmer's guide

Download: http://download.vacs.fr/ada-awa/ada-awa-1.1.0.tar.gz

GitHub: https://github.com/stcarrez/ada-awa

All these Ada projects can be downloaded individually but they are also packaged together to help in their download and build process. You can also download everything at http://download.vacs.fr/ada-awa/awa-all-1.1.0.tar.gz

After downloading the awa-all-1.1.0.tar.gz package, have a look at the Ada Web Application Programmer's Guide to learn how to build, install and start using all this.

If you don't have time to build all this, a docker container is available: https://hub.docker.com/r/ciceron/ada-awa/

Ada, Java and Python database access

17 November 2018 at 14:02

The database also has a serious impact on such a benchmark and I've measured the following three famous databases:

  • SQLite
  • MariaDB
  • PostgreSQL

The purpose of the benchmark is to get a simple comparison between these different databases and different programming languages. For this, a very simple database table is created with only two integer columns, one of them being the primary key with auto increment. For example, the SQLite table is created with the following SQL:

CREATE table test_simple (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  value INTEGER
)

The database table is filled with a simple INSERT statement which is also benchmarked. The goal is not to demonstrate the fastest insert method, nor the fastest query for a given database or language.
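
The INSERT statement presumably has the following shape; only the value column is supplied since the id column is auto-incremented (this exact statement is an assumption, not taken from the benchmark sources):

INSERT INTO test_simple (value) VALUES (1)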

Benchmark

The SQL benchmarks are simple and they are implemented in the same way for each language so that we can get a rough comparison between languages for a given database. The SELECT query retrieves all the database table rows but it includes a LIMIT to restrict the number of rows returned. The query is executed with different values for the limit so that a simple graph can be drawn. For each database, the SQL query looks like:

SELECT * FROM test_simple LIMIT 10

The SQL statements are executed 10000 times for SELECT queries, 1000 times for INSERT and 100 times for DROP/CREATE statements.

Each SQL benchmark program generates an XML file that contains the results as well as resource statistics taken from the /proc/self/stat file. An Ada tool is provided to gather the results, prepare the data for plotting and produce an Excel file with the results.
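
As an aside, reading those statistics is straightforward. Here is a minimal Ada sketch (illustration only, not the benchmark's actual code) that prints the resident set size, which proc(5) documents as the 24th space-separated field of /proc/self/stat; the Print_Memory name is hypothetical and the parser assumes the command name (field 2) contains no spaces:

with Ada.Text_IO;
with Ada.Strings.Fixed;

procedure Print_Memory is
   use Ada.Text_IO;
   File : File_Type;
begin
   Open (File, In_File, "/proc/self/stat");
   declare
      Line  : constant String := Get_Line (File);
      Pos   : Positive := Line'First;
      Field : Positive := 1;
   begin
      --  proc(5): rss is the 24th space-separated field (value in pages).
      --  Simplification: assumes the command name (field 2) has no spaces.
      while Field < 24 loop
         Pos := Ada.Strings.Fixed.Index (Line, " ", Pos) + 1;
         Field := Field + 1;
      end loop;
      Put_Line ("RSS (pages): "
                & Line (Pos .. Ada.Strings.Fixed.Index (Line, " ", Pos) - 1));
   end;
   Close (File);
end Print_Memory;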

Python code

def execute(self):
  self.sql = "SELECT * FROM test_simple LIMIT " + str(self.expect_count)
  repeat = self.repeat()
  db = self.connection()
  stmt = db.cursor()

  for i in range(0, repeat):
    stmt.execute(self.sql)
    row_count = 0
    for row in stmt:
      row_count = row_count + 1

    if row_count != self.expect_count:
      raise Exception('Invalid result count:' + str(row_count))

  # Close the cursor once, after all repetitions are done.
  stmt.close()

Java code

public void execute() throws SQLException {
  PreparedStatement stmt
 = mConnection.prepareStatement("SELECT * FROM test_simple LIMIT " + mExpectCount);

  for (int i = 0; i < mRepeat; i++) {
    if (stmt.execute()) {
      ResultSet rs = stmt.getResultSet();
      int count = 0;
      while (rs.next()) {
        count++;
      }
      rs.close();
      if (count != mExpectCount) {
        throw new SQLException("Invalid result count: " + count);
      }
    } else {
      throw new SQLException("No result");
    }
  }
  stmt.close();
}

Ada code

procedure Select_Table_N (Context : in out Context_Type) is
   DB    : constant ADO.Sessions.Master_Session := Context.Get_Session;
   Count : Natural;
   Stmt  : ADO.Statements.Query_Statement
        := DB.Create_Statement ("SELECT * FROM test_simple LIMIT " & Positive'Image (LIMIT));
begin
   for I in 1 .. Context.Repeat loop
      Stmt.Execute;
      Count := 0;
      while Stmt.Has_Elements loop
         Count := Count + 1;
         Stmt.Next;
      end loop;
      if Count /= LIMIT then
         raise Benchmark_Error with "Invalid result count:" & Natural'Image (Count);
      end if;
   end loop;
end Select_Table_N;

The benchmarks were executed on an Intel i7-3770S CPU @ 3.10GHz with 8 cores running Ubuntu 16.04 64-bit. The following database versions were used:

  • MariaDB 10.0.36
  • PostgreSQL 9.5.14

Resource usage comparison

The first point to note is that both Python and Ada require only one thread to run the SQL benchmark. For its part, the Java VM and its database drivers need 20 threads to run.

The second point is not surprising: Java needs 1000% more memory than Ada and Python uses 59% more memory than Ada. What is measured is the VM RSS size, which means this is really the memory that is physically mapped at a given time.

The SQLite database requires fewer resources than the others. The results below don't take into account the resources used by the MariaDB and PostgreSQL servers. At the time of measurement, the MariaDB server was using 125MB and the PostgreSQL server 31MB.

sql-memory.png

Speed comparison

Looking at the CPU time used to run the benchmark, Ada appears as a clear winner. The Java PostgreSQL driver appears to be very slow at connecting to and disconnecting from the database, and this is the main reason why it is slower than the others.

sql-time.png

It is interesting to note however that both Java and Python provide very good performance results with the SQLite database when the number of rows returned by the query is less than 100. With more than 500 rows, Ada becomes faster than the others.

sql-sqlite.png

With a PostgreSQL database, Ada is always faster even with small result sets.

sql-postgresql.png

sql-mysql.png

Conclusion and references

SQLite, as an embedded database, is used on more than 1 billion devices since it is included in all smartphones (Android, iOS). It provides very good performance for small databases.

With a client-server model, MariaDB and PostgreSQL suffer a little when compared to SQLite.

For bigger databases, Ada provides the best performance and furthermore it appears to be more predictable than other languages (i.e., the curves are linear).

The Excel result file is available in: sql-benchmark-results.xls

Sources of the benchmarks are available on GitHub.

AKT a tool to store and protect your sensitive information

26 December 2019 at 17:47

AKT stores information in secure wallets and protects the stored information by encrypting the content with different keys. AKT can be used to safely store passwords, credentials, bank accounts, documents and even directories.

Wallets are protected by a master key using AES-256 and the wallet master key is protected by a user password or a user GPG encrypted key. The wallet defines up to 7 slots, each identifying a password key that is able to unlock the master key. To open a wallet, it is necessary to unlock one of these 7 slots by providing the correct password. Wallet key slots are protected by the user's password with the PBKDF2-HMAC-256 algorithm, a random salt and a random counter, and they are encrypted using AES-256.

Values stored in the wallet are protected by their own encryption keys using AES-256. A wallet can contain another wallet which is then protected by its own encryption keys and passwords (with 7 independent slots). Because the child wallet has its own master key, it is necessary to know the primary password and the child password: the parent wallet is unlocked first and then the child wallet.

The data is organized in 4K blocks whose primary content is encrypted either by the wallet master key or by the entry keys. The data block is signed by using HMAC-256. A data block can contain several values but each of them is protected by its own encryption key. Each value is also signed using HMAC-256.

The keystore uses several encryption keys at different levels to protect the content. A document stored in the keystore is split in data fragments and each data fragment is encrypted by using its own key. The data fragments are stored in specific data blocks so that they are physically separated from the encryption keys.

The data fragment encryption keys are stored in the directory blocks and they are encrypted by using a specific directory key.

akt-keys.png

For example, a 10K document will be split in 3 data fragments (two full 4K fragments and one 2K fragment), each of them encrypted with its own AES-256 key. A 5K document will be encrypted with two AES-256 keys, one for each of its two data fragments. All these keys are protected by the wallet data key. The directory part of the wallet, which describes the entries in the wallet, is also encrypted by another wallet key: the directory key.

The tool can separate the data blocks which contain data fragments from the other blocks. This keeps the wallet keys separate from the data. It is then possible to export the data blocks, which are encrypted in AES-256-CBC, to the Cloud without exposing the keys used for encryption.

If you want to know more about the implementation, have a look at the Ada Keystore Implementation chapter.

Using AKT

akt is the command line tool that you can use to protect and store your documents. It contains several commands:

  • create: create the keystore
  • edit: edit the value with an external editor
  • extract: extract a file or a stream of data from the keystore
  • get: get a value from the keystore
  • help: print some help
  • list: list values of the keystore
  • remove: remove values from the keystore
  • set: insert or update a value in the keystore
  • store: store a file or a stream of data in the keystore

To create the secure file, use the following command and enter your secure password (it is recommended to use a long and complex password):

  akt create secure.akt

You may also protect the keystore by using your GPG key. In that case, you can use the --gpg option and specify one or several GPG key ids. Using GPG is probably the best method to protect your akt files.

  akt create secure.akt --gpg 0xFC15CA870BE470F9

At this step, the secure file is created and it can only be opened by providing the password you entered. To add something, use:

  akt set secure.akt bank.password 012345

To store a file, use the following command:

  akt store secure.akt contract.doc

If you want to retrieve a value, you can use one of:

  akt get secure.akt bank.password
  akt extract secure.akt contract.doc

You can also use the akt command together with the tar command to create secure backups: create the compressed tar file and pipe the result to the akt command to store the content in the wallet.

  tar czf - dir-to-backup | akt store secure.akt -- backup.tar.gz

To extract the backup you can use the extract command and feed the result to the tar command as follows:

  akt extract secure.akt -- backup.tar.gz | tar xzf -

Using Ada Keystore

The Ada Keystore is the Ada 2012 library behind AKT. It should be quite easy to integrate the library in an existing Ada application to protect, for example, some sensitive configuration file. The Keystore is the main package that provides operations to store information in secure wallets and protect the stored information by encrypting the content. To use it, add the following with clause at the beginning of your GNAT project:

   with "keystoreada";
Creation

To create a keystore you will first declare a Wallet_File instance. You will also need a password that will be used to protect the wallet master key.

with Keystore.Files;
...
  WS   : Keystore.Files.Wallet_File;
  Pass : Keystore.Secret := Keystore.Create ("There was no choice but to be pioneers");

You can then create the keystore file by using the Create operation:

  WS.Create ("secure.akt", Pass);

Storing

Values stored in the wallet are protected by their own encryption keys using AES-256. The encryption key is generated when the value is added to the wallet by using the Add operation.

  WS.Add ("Grace Hopper", "If it's a good idea, go ahead and do it.");

The Get function retrieves the value. The value is decrypted only when the Get operation is called.

  Citation : constant String := WS.Get ("Grace Hopper");

The Delete procedure can be used to remove the value. When the value is removed, the encryption key and the data are erased.

  WS.Delete ("Grace Hopper");
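
To reopen an existing keystore later, the Wallet_File type also provides an Open operation. A minimal sketch, assuming Open mirrors the Create profile shown above; check the Keystore.Files specification for the exact signature:

  --  Assumed profile: mirrors Create (path, then password).
  WS.Open ("secure.akt", Pass);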

Getting AKT

You can get AKT by using the Ubuntu 18.04 binary packages; to install them, run:

wget -O - http://apt.vacs.fr/apt.vacs.fr.gpg.key | sudo apt-key add -
sudo add-apt-repository "deb http://apt.vacs.fr/ubuntu-bionic bionic main"
sudo apt-get install akt

For other platforms, you have to build it from the sources. Install the GNAT Ada compiler, either the FSF version or the GNAT GPL version, and then run the following commands:

git clone --recursive https://github.com/stcarrez/ada-keystore.git
cd ada-keystore
./configure --disable-nls
make build install

You can browse the documentation online: Ada Keystore Guide.

New version of Ada Web Application

1 May 2020 at 20:49

The framework provides several ready-to-use and extendable modules that are common to many web applications. This includes login and authentication, user management, permissions, and the management of comments, tags, votes, documents and images. It also provides complete blog, question-and-answer and wiki modules.

AWA simplifies web application development by taking care of user management and authentication and by providing the foundations on top of which you can construct your own application. AWA provides a powerful permission management that gives applications the flexibility to grant access to and protect their users' resources.

A typical architecture of an AWA application is represented by the picture below:

awa_architecture_overview.png

This version of AWA integrates smoothly with Ada Keystore in order to protect the server sensitive configuration.

Ada Web Application, Version 2.0

  • Refactoring of build process and installation
  • New audit manager for database auditing
  • Support for Postgresql
  • Improvements of images and storage plugins
  • Update Trumbowyg editor to version 2.18.0
  • Update Flot library to version 4.2.0
  • Support for commands to configure, start, stop the server
  • New mail UI component <mail:attachment> to send attachments

Dynamo, Version 1.0.0

  • Improvement and fixes in the YAML database model files
  • Add support for Nullable_String type
  • Generate Postgresql SQL files from the model files
  • Add support for database record auditing
  • Add support for floating point
  • Add support for CSS and Javascript merge in the dist command

Ada Database Objects, Version 2.1.0

  • Added Is_Modified predicate on database objects
  • Fix SQLite Load_Schema to avoid loading SQLite specific tables
  • Support for Postgresql database
  • Improvement for errors reported by database drivers
  • New audit framework to track database record changes
  • Added support for floating point numbers
  • Serialize queries in JSON/XML streams

Ada Keystore, Version 1.2.0

  • Added support for Fuse with a new mount command in akt (beta!)
  • Fix the implementation to iterate with Keystore.Properties

Ada Server Faces, Version 1.4.0

  • Performance improvement for the Facelet cache
  • Integrate jQuery 3.4.1, jQuery UI 1.12.1, jQuery Chosen 1.8.7
  • New <f:validateRegex> to validate an input field with a regular expression

Ada Utility Library, Version 2.2.0

  • New Wait_Empty operation on fifo.
  • Add Get_Count and Wait operation on executors

Ada EL Library, Version 1.8.0

  • New Expand procedure to expand the properties in place

Ada Wiki Library, Version 1.2.1

  • Minor configuration and code coverage support
  • Corrections in the Markdown syntax parser

Ada Security Library, Version 1.3.0

  • Add support to extend the authenticate manager and allow custom authentication through the Set_Default_Factory operation.

Ada Servlet, Version 1.4.0

  • Added support to configure the web container

Ada Stemmer Library

16 May 2020 at 07:55

Stemming is not new: it was first introduced in 1968 by Julie Beth Lovins, a computational linguist who created the first algorithm, known today as the Lovins stemming algorithm. Her algorithm has significantly influenced others such as the Porter stemming algorithm which is now a common stemming algorithm for English words. These algorithms are specific to the English language and will not work for French, Greek or Russian.

To support several natural languages, it is necessary to have several algorithms. The Snowball stemming algorithms project provides such support through a specific string processing language, a compiler and a set of algorithms for various natural languages. The Snowball compiler has been adapted to generate Ada code (See Snowball Ada on GitHub).

The Ada Stemmer Library integrates stemming algorithms for: English, Danish, Dutch, French, German, Greek, Italian, Serbian, Spanish, Swedish, Russian. The Snowball compiler provides several other algorithms but they are not integrated yet: their integration is left as an exercise to the reader.

Stemmer Overview

Snowball is a small string processing language designed for creating stemming algorithms for use in information retrieval. A Snowball script describes a set of rules which are applied and checked on an input word, or some portion of it, in order to eliminate or replace some terms. The stemmer will usually transform a plural into a singular form, reduce the multiple forms of a verb, find the noun from an adverb and so on. Romance languages, Germanic languages and Scandinavian languages share some common rules but each language needs its own Snowball algorithm. A detailed list of stemming algorithms for various natural languages is available at: https://snowballstem.org/algorithms/

The Snowball compiler reads the Snowball script and generates the stemmer implementation for a given target programming language such as Ada, C, C#, Java, JavaScript, Go, Python, Rust. The Ada Stemmer Library contains the generated algorithms for several natural languages. The generated stemmers are not able to recognize the natural language and it is necessary to tell the stemmer library which natural language you wish to use.

The Ada Stemmer Library supports only UTF-8 strings which simplifies both the implementation and the API. The library only uses the Ada String type to handle strings.

Setup

To use the library, you should run the following commands:

  git clone https://github.com/stcarrez/ada-stemmer.git
  cd ada-stemmer
  make build install

This will fetch, compile and install the library. You can then add the following line in your GNAT project file:

  with "stemmer";

Stemming examples

Each stemmer algorithm works on a single word at a time: the Ada Stemmer Library does not split words. You give it one word at a time to stem and it returns either the word itself or its stem. The Stemmer.Factory package is the multi-language entry point; the stemmer instance is created for each call. The following simple code:

  with Stemmer.Factory; use Stemmer.Factory;
  with Ada.Text_IO; use Ada.Text_IO;
    ...
    Put_Line (Stem (L_FRENCH, "chienne"));

will print the string:

 chien

When multiple words must be stemmed, it may be better to declare an instance of the stemmer and use the same instance to stem several words. The Stem_Word procedure can be called with each word and returns a Boolean that indicates whether the word was stemmed or not. The result is obtained by calling the Get_Result function. For example,

  with Stemmer.English;
  with Ada.Text_IO; use Ada.Text_IO;
  ..
    Ctx : Stemmer.English.Context_Type;
    Stemmed : Boolean;
    ..
    Ctx.Stem_Word ("zealously", Stemmed);
    if Stemmed then
       Put_Line (Ctx.Get_Result);
    end if;

Integrating a new Stemming algorithm

Integration of a new stemming algorithm is quite easy but requires installing the Snowball Ada compiler.

  git clone --branch ada-support https://github.com/stcarrez/snowball
  cd snowball
  make

The Snowball compiler needs the path of the stemming algorithm, the target programming language, the name of the Ada child package that will contain the generated algorithm and the target path. For example, to generate the Lithuanian stemmer, the following command can be used:

  ./snowball algorithms/lithuanian.sbl -ada -P Lithuanian -o stemmer-lithuanian

You will then get two files: stemmer-lithuanian.ads and stemmer-lithuanian.adb. After integrating the generated files in your project, you can access the generated stemmer with:

  with Stemmer.Lithuanian;
  ..
    Ctx : Stemmer.Lithuanian.Context_Type;

Conclusion

Thanks to the Snowball compiler and its algorithms, it is possible to do some natural language analysis. With version 1.0 of the Ada Stemmer Library available on GitHub, it is now possible to start doing some natural language analysis in Ada!

Easy reading and writing files with Ada Utility Library

9 August 2020 at 20:49

The Ada Utility Library provides several simple operations that simplify the reading and writing of files through a single procedure call. These operations are not new since I've implemented most of them 10 years ago!!!

Reading a file line by line with Ada.Text_IO is quite easy but tedious: you have to open the file, iterate over the content one line at a time and then close the file. To simplify this process, the Util.Files package of Ada Utility Library provides a simple Read_File procedure that takes a procedure as parameter; that procedure is called for each line that is read.

with Util.Files;

  procedure Read_Line (Line : in String) is ...;

  Util.Files.Read_File (Path => "file.txt",
                        Process => Read_Line'Access);

Another Read_File procedure reads the file and collects its content in a vector of strings, so that once the file is read the vector contains each line (see the sketch further below). Yet another Read_File procedure reads the file content into an Unbounded_String. You will use that latter form as follows:

with Util.Files;
with Ada.Strings.Unbounded;

  Content : Ada.Strings.Unbounded.Unbounded_String;

  Util.Files.Read_File (Path => "file.txt",
                        Into => Content);
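
And here is a minimal sketch of the vector form mentioned above; the exact container type is an assumption (check the Util.Files specification), the sketch supposes the variant fills a Util.Strings.Vectors.Vector with one element per line:

with Util.Files;
with Util.Strings.Vectors;

  Lines : Util.Strings.Vectors.Vector;

  Util.Files.Read_File (Path => "file.txt",
                        Into => Lines);
  --  Each element of Lines now holds one line of file.txt.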

Very often it is also necessary to write some content in a file. Again, this is easy to do but a simple Write_File procedure that takes the file path and the content to write is easier to use in several cases:

with Util.Files;

  Util.Files.Write_File (Path => "file.txt",
                         Content => "something");

The Ada Utility Library contains other useful operations and features that have helped me in implementing various projects such as Ada Web Application and Ada Keystore. Have a look at the Ada Utility Library Programmer's Guide!

โŒ
โŒ