News from the Ada programming language world

wolfSSL Ada/SPARK language bindings

17 March 2023 at 05:35

I was looking for DTLS libraries and found this recent post on the wolfSSL website.

Exciting news in wolfSSL language bindings: we are currently exploring the possibility of adding bindings for the Ada and Spark languages!

Ada is a programming language known for its explicitness, strong typing, and abundance of compile-time checks. It is widely used in safety-critical and high-integrity software. Spark, on the other hand, is a smaller subset of Ada that offers the invaluable ability to formally prove the correctness of your software.

We believe that wolfSSL bindings would be immensely valuable to the Ada and Spark communities. These bindings would provide a production-ready, robust, and well-tested TLS stack that supports the latest protocols (TLS 1.3/DTLS 1.3). Additionally, they would open the door to obtaining FIPS 140-3 and DO-178C certifications for Ada and Spark applications that use TLS for their encrypted communications, or that want to use our wolfCrypt implementation for their cryptographic operations, such as encrypting data at rest.

As wolfSSL already supports post-quantum TLS 1.3 and DTLS 1.3, these bindings would also naturally allow you to make your Ada and SPARK applications quantum-safe.

Are you interested in an Ada/Spark wrapper? If so, please do not hesitate to contact us at [email protected] with any questions, comments, or suggestions.

Source: wolfSSL ADA/Spark language bindings – wolfSSL

1 post - 1 participant

Read full topic

Advent of Computing Podcast

6 February 2023 at 05:25

This podcast did an episode about the origins of Ada.

I don’t like the way he pronounces Ada and I could be pedantic about some of the broad generalizations he makes, but overall I think this is a nice summary. Sounds like it’s going to be the beginning of a series, with a future episode about tasking. I’m looking forward to it!

8 posts - 6 participants


Architecture dependent representation clause

2 February 2023 at 21:43

I’m writing a wrapper around some Linux ioctl(2) calls and am using a representation clause to replace the macros in ioctl.h « asm-generic « uapi « include - kernel/git/stable/linux.git - Linux kernel stable tree.

However, this header is architecture dependent and has different field sizes and constants for some older architectures: parisc, mips, powerpc, alpha, sparc.

   --  /usr/include/asm-generic/ioctl.h
   type IOC_DIR_Field is (None, Write, Read)
      with Size => 2;
   for IOC_DIR_Field use (0, 1, 2);
   --  parisc: Read and Write are swapped
   --  mips, powerpc, alpha, sparc: IOC_DIR_Field'Size = 3, Write is 4

   type IOC is record
      NR    : UInt8;
      TYP   : UInt8;
      SIZE  : UInt14;         --  13 bits on mips, powerpc, alpha, sparc
      DIR   : IOC_DIR_Field;  --  3 bits on mips, powerpc, alpha, sparc
   end record
      with Size => 32;
   for IOC use record
      NR    at 0 range 0 .. 7;
      TYP   at 1 range 0 .. 7;
      SIZE  at 2 range 0 .. 13;
      DIR   at 2 range 14 .. 15;
   end record;

Should I create a separate package for each architecture to define these fields, or is there a better way to make a representation clause architecture dependent?
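One way to structure the per-architecture approach (a sketch of my own, not from the thread; the package name, constants, and directory layout are all hypothetical) is to keep a single package name and provide one spec per architecture in separate source directories, letting the build system pick the right one, e.g. via a GNAT project scenario variable:

```ada
--  src/generic/ioctl_arch.ads  (hypothetical package; one copy per
--  architecture directory, e.g. src/generic, src/mips, src/parisc)
package IOCTL_Arch is
   --  Bit widths of the SIZE and DIR fields for this architecture
   Size_Bits : constant := 14;  --  13 on mips, powerpc, alpha, sparc
   Dir_Bits  : constant := 2;   --  3 on mips, powerpc, alpha, sparc

   --  Direction encoding; parisc swaps Write and Read
   Dir_None  : constant := 0;
   Dir_Write : constant := 1;
   Dir_Read  : constant := 2;
end IOCTL_Arch;

--  In the project file, something along these lines selects the
--  directory at build time:
--    Arch := external ("ARCH", "generic");
--    for Source_Dirs use ("src", "src/" & Arch);
```

Because the constants are named numbers they are static, so they can be used directly in the record representation clause (e.g. `SIZE at 2 range 0 .. IOCTL_Arch.Size_Bits - 1`), and only the one architecture-specific spec ever changes.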

6 posts - 3 participants



9 December 2022 at 23:51

I’ve been playing with optimizations for an instance of Ada.Containers.Hashed_Sets for the Advent of Code Day 9 puzzle.

RM A.18.1 The Package Containers says:

type Hash_Type is mod implementation-defined;

With the Implementation advice:

Hash_Type'Modulus should be at least 2**32.

The annotated RM expands upon this:

This is not a requirement so that these types can be declared properly on machines with native sizes that are not 32 bits. For instance, a 24-bit target could use 2**24 for Hash_Type'Modulus.

In GNAT’s a-contai.ads, the definition is simply:

type Hash_Type is mod 2**32;

Would it make sense to use System.Max_Binary_Modulus instead, so that 64-bit systems can benefit from a larger hash space?

type Hash_Type is mod System.Max_Binary_Modulus;

If we want to retain the minimum 32-bit size, a conditional could be added:

type Hash_Type is mod (if System.Max_Binary_Modulus >= 2**32 then System.Max_Binary_Modulus else 2**32);

1 post - 1 participant


2022 Day 9: Rope Bridge

9 December 2022 at 06:32

I think the problem statement was intentionally difficult to parse this time. Those conditionals are a lot simpler than it makes them sound.

I kept the visited positions in a Hashed_Set and coming up with a good Hash function for a pair of Integers was new for me. I ended up making a lot of assumptions (Integer'Size = 32 and Hash_Type'Size = 64) and just shifted one of the integers left 32 bits. A cursory Google search turned up the Cantor Pairing function, but I wasn’t able to make this work with negative integers in a short amount of time. Clearly I need to do more reading.
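The shift-and-combine approach can be sketched like this (my own reconstruction under the stated assumptions, not the poster’s exact code; the `Position` type and package name are made up):

```ada
with Ada.Containers; use Ada.Containers;
with Interfaces;     use Interfaces;

package Position_Hashing is
   type Position is record
      X, Y : Integer;  --  assumes Integer'Size = 32
   end record;

   function Hash (P : Position) return Hash_Type;
end Position_Hashing;

package body Position_Hashing is
   function Hash (P : Position) return Hash_Type is
      --  Unsigned_32'Mod wraps negative coordinates into
      --  0 .. 2**32 - 1, so negatives need no special handling.
      Hi : constant Unsigned_64 := Unsigned_64 (Unsigned_32'Mod (P.X));
      Lo : constant Unsigned_64 := Unsigned_64 (Unsigned_32'Mod (P.Y));
   begin
      --  Pack X into the high word and Y into the low word.
      --  Hash_Type'Mod folds the result into whatever modulus the
      --  implementation uses; with GNAT's 32-bit Hash_Type the high
      --  word is truncated, so xor-ing Hi back in would be a cheap
      --  alternative there.
      return Hash_Type'Mod (Shift_Left (Hi, 32) or Lo);
   end Hash;
end Position_Hashing;
```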

Part 2 caught me off guard; I hadn’t anticipated needing to deal with more elements.

Note: I’m going to stop using the spoiler blur on the daily posts. If you’ve gotten this far, I assume you know that these posts contain spoilers.

13 posts - 8 participants


2022 Day 1: Calorie Counting

1 December 2022 at 06:18

advent/day1_2.adb at 176068a2109faca088baa96b7cb77a078264693c · JeremyGrosser/advent · GitHub
Kind of a rough start for me this year: I pulled in my Stream I/O library and test framework from last year, but I think that just made things more complicated than they needed to be.

23 posts - 11 participants


Debian Ada package removals

27 November 2022 at 18:13


AdaControl, GNAT Studio, GtkAda, and many other Ada packages are slated for removal from Debian with the transition to GCC 12.

If you feel strongly about keeping these things available in an OS package manager rather than Alire, now’s the time to speak up.

3 posts - 3 participants


About the Advent of Code category

15 November 2022 at 03:09

Advent of Code is a series of daily programming puzzles from December 1st to 25th, becoming progressively more difficult each day.

I’d like to encourage conversation about the problems and solutions in Ada and SPARK on this forum.

I’ve created a new Advent of Code category. If you don’t see a topic for today’s puzzle, feel free to create one. Discussion of general techniques and tools not tied to a specific puzzle is also welcome.

I’ve installed the “Spoiler Alert” plugin; please use it when posting completed solutions. See below for an example of how to use this feature, along with syntax highlighting.

[Screenshot: example solution post using the spoiler blur and syntax highlighting]

1 post - 1 participant


Unusual hardware register behavior

9 November 2022 at 17:47

On the RP2040, each GPIO pin can trigger four different interrupts: high level, low level, rising edge, and falling edge. Each pin has a 4-bit field for selecting which of these interrupts is enabled. I modeled it as an array of records.

with System;

procedure Main is
    type INT_Field is record
        Low_Level, High_Level, Falling_Edge, Rising_Edge : Boolean := False;
    end record
        with Size => 4;
    for INT_Field use record
        Low_Level    at 0 range 0 .. 0;
        High_Level   at 0 range 1 .. 1;
        Falling_Edge at 0 range 2 .. 2;
        Rising_Edge  at 0 range 3 .. 3;
    end record;

    type GPIO_Pin is range 0 .. 30;
    type INT_Register is array (GPIO_Pin) of INT_Field
        with Component_Size => 4;

    type GPIO_Peripheral is record
        INTE : INT_Register; --  Write to enable interrupts
        INTS : INT_Register; --  Masked status of enabled interrupts
        INTR : INT_Register; --  Unmasked interrupt status, write to clear
    end record;

    GPIO : GPIO_Peripheral
        with Import, Address => System'To_Address (16#4001_40F0#);
begin
    GPIO.INTE (7).Rising_Edge := True;
end Main;

I’m simplifying things a bit here for clarity. You can find the actual implementation on GitHub

On the surface, this looks like it should work. Compiling with GNAT, this results in a strb (Store Byte) instruction after shifting some bits around.

Now this is where the bug comes in… The RP2040 datasheet explains:

2.1.4. Narrow IO Register Writes

Memory-mapped IO registers on RP2040 ignore the width of bus read/write accesses. They treat all writes as though they were 32 bits in size. This means software can not use byte or halfword writes to modify part of an IO register: any write to an address where the 30 address MSBs match the register address will affect the contents of the entire register.

By β€œaffect the contents of the entire register,” it means that an 8-bit write to a 32-bit register will change all of the bytes of the register to the same value.

If our register value is 16#0000_0000# and the strb instruction is setting bit 8 so that we have 16#0000_0100#, the resulting value is 16#0101_0101# instead.

My solution is to add another array to group fields into 32-bit array elements, so that we can use the Volatile_Full_Access aspect to guarantee that the whole word is written using the str instruction.

    type INT_Register is array (0 .. 7) of INT_Field
        with Volatile_Full_Access, Component_Size => 4, Size => 32;

    type INT_Group is array (0 .. 3) of INT_Register
        with Component_Size => 32;

This solution works, but has left me wondering how we can prevent this from happening again in the future. This is not the first bug I’ve encountered caused by the narrow write behavior of the I/O memory.

I have a few questions for discussion:

  • Is there a way to tell the compiler that a memory range only supports writes of a specific size?
  • If not, can we at least generate a warning?
  • SPARK has the Async_Writers aspect, which indicates that some external process can modify the register value. If we were to verify this code, it seems like every register would need this aspect due to this narrow write behavior.
  • Philosophically, is it possible to make any guarantees about the software without access to the hardware HDL sources? It seems like there’s a bit too much human error possible in the process of writing and interpreting documentation about hardware behavior.

6 posts - 5 participants
