
FPGA Data Transfer demo #5

Reading data from FPGA

The FPGADevice class uses the Opal Kelly FrontPanel Python API to:

– load the bitfile and configure the board:

okCFrontPanel.ConfigureFPGA(bit_file_path)

– control data generation:

okCFrontPanel.SetWireInValue(endpoint_addr, value)
okCFrontPanel.ActivateTriggerIn(endpoint_addr, bit)

– receive the data generated by the module:

okCFrontPanel.ReadFromBlockPipeOut(endpoint_addr, block_size, data)

The current transfer (data) length setting is 1024 bytes. This value should be adjusted to the needs of the end application; for example, it can be increased to sustain throughput at a higher data generation rate. The corresponding parameter on the FPGA side must then be changed as well.
Received data is then passed on for further processing.
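Putting the calls above together, the acquisition path can be sketched roughly as follows. This is a minimal sketch: the endpoint addresses, the trigger bit, and the method names are illustrative assumptions, not the exact demo code; only the FrontPanel API calls themselves come from the list above.

```python
TRANSFER_LENGTH_BYTES = 1024  # must match the block size configured on the FPGA side

class FPGADevice:
    """Thin wrapper around the Opal Kelly FrontPanel API (illustrative sketch)."""

    def __init__(self, bit_file_path):
        import ok  # FrontPanel Python bindings; imported lazily so the sketch parses without them
        self.xem = ok.okCFrontPanel()
        self.xem.OpenBySerial("")
        # Load the bitfile and configure the board
        self.xem.ConfigureFPGA(bit_file_path)

    def start_generation(self, rate_setting):
        # Hypothetical endpoint addresses -- use the ones from your FPGA design
        self.xem.SetWireInValue(0x00, rate_setting)
        self.xem.UpdateWireIns()
        self.xem.ActivateTriggerIn(0x40, 0)  # e.g. a "start" trigger on bit 0

    def read_block(self):
        # Receive one block of generated data from the block pipe out endpoint
        data = bytearray(TRANSFER_LENGTH_BYTES)
        self.xem.ReadFromBlockPipeOut(0xA0, TRANSFER_LENGTH_BYTES, data)
        return data
```

The hardware-facing calls obviously require a connected board; the sketch only shows in what order they are issued.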

Data unpacker

Received buffers are fed to the “Data unpacker”’s queue. The “Data unpacker” operates in a separate thread and processes buffers by unpacking them from the packet structure (described in post #3).
Unpacked data is then sent to the Data Manager’s queue.
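The threading pattern can be sketched as below. Note that the packet layout used here (consecutive little-endian uint16 samples) is a placeholder stand-in, not the actual structure from post #3:

```python
import queue
import struct
import threading

class DataUnpacker:
    """Drains raw buffers from its input queue, unpacks them, and forwards the result."""

    def __init__(self, output_queue):
        self.input_queue = queue.Queue()
        self.output_queue = output_queue
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        while True:
            buffer = self.input_queue.get()
            if buffer is None:  # sentinel: stop the thread
                break
            self.output_queue.put(self._unpack(buffer))

    @staticmethod
    def _unpack(buffer):
        # Placeholder layout: consecutive little-endian uint16 samples
        count = len(buffer) // 2
        return struct.unpack("<%dH" % count, buffer)
```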

Data manager

The main (and only) job of this module is to feed the buffers from its queue into the queues of all further processors connected to it. Currently, these are the plot data source and the HDF5 writer.
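The fan-out itself is only a few lines; a minimal sketch (the class and method names here are assumptions, not the demo's actual identifiers):

```python
import queue

class DataManager:
    """Feeds each incoming buffer to the queues of all connected processors."""

    def __init__(self):
        self.queue = queue.Queue()
        self.consumer_queues = []  # e.g. plot data source, HDF5 writer

    def connect(self, consumer_queue):
        self.consumer_queues.append(consumer_queue)

    def process_one(self):
        # Take one buffer from the input queue and copy it to every consumer
        buffer = self.queue.get()
        for q in self.consumer_queues:
            q.put(buffer)
```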

Plotting

The plot data source stores data and returns it on request. Currently, retrieving the data for a specified channel and time span is implemented. It is attached as a data handle to the plotting widget of the GUI application. The pyqtgraph Python library is used to display the data.
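The "channel and time span" request interface can be sketched as follows. This is a simplified stand-in using plain Python lists (the real module feeds a pyqtgraph widget; all names here are illustrative):

```python
class PlotDataSource:
    """Accumulates samples per channel and serves slices for plotting."""

    def __init__(self, number_of_channels, sampling_interval):
        self.sampling_interval = sampling_interval
        self.channels = [[] for _ in range(number_of_channels)]

    def append(self, samples_per_channel):
        # One new sample per channel, e.g. one unpacked packet's worth of data
        for channel, sample in zip(self.channels, samples_per_channel):
            channel.append(sample)

    def get(self, channel, time_span_seconds):
        # Return the most recent `time_span_seconds` of data for one channel
        n = int(time_span_seconds / self.sampling_interval)
        return self.channels[channel][-n:]
```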

Writing to HDF

Why HDF5?
HDF5 is a standardized format widely used in both academia and industry.
It has no limit on data size or number of dimensions, which makes it well suited for big data such as multiple or high-frequency time series.
It organises data into datasets (multidimensional arrays of a specified datatype), which can be gathered into groups.
Dataset chunking is also supported, which enables better performance and optional compression.
Additionally, every group and dataset in an HDF5 file can carry user-defined attributes.

In our case, the dataset dimensions are NUMBER_OF_SOURCES (constant for a single dataset) by NUMBER_OF_TIME_STAMPS (extended every time new data arrives). Two attributes, “start_time” and “sampling_interval”, are there to enable reconstruction of the time vector.
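Such an expandable, chunked dataset can be created with h5py roughly like this. The group and dataset names, the chunk size, and the NUMBER_OF_SOURCES value are illustrative assumptions; only the two attributes come from the description above:

```python
import h5py
import numpy as np

NUMBER_OF_SOURCES = 4  # constant for a single dataset (illustrative value)

def create_dataset(file_path, start_time, sampling_interval):
    f = h5py.File(file_path, 'w')
    group = f.create_group('measurement')
    # Unlimited time axis (maxshape None) with chunking enabled
    dset = group.create_dataset(
        'samples',
        shape=(0, NUMBER_OF_SOURCES),
        maxshape=(None, NUMBER_OF_SOURCES),
        chunks=(1024, NUMBER_OF_SOURCES),
        dtype='uint16',
    )
    # Attributes needed to reconstruct the time vector
    dset.attrs['start_time'] = start_time
    dset.attrs['sampling_interval'] = sampling_interval
    return f, dset

def append_samples(dset, block):
    # Extend the time axis and write the new block at the end
    old = dset.shape[0]
    dset.resize(old + block.shape[0], axis=0)
    dset[old:] = block
```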

Reading from HDF

You can open the resulting files with the popular HDFView (version >= 3.0) or ViTables, but we found HDFView’s plotting abilities rather limited, and ViTables provides none. Knowing that, and having the online data visualisation already implemented, we wrote our own module for viewing data stored in HDF5 files.
For the purpose of this demo, the whole .h5 file is read into a PlotData object, so the user is able to explore the full timespan of the selected channel. This poses no performance threat as long as we are dealing with a relatively small amount of data.

[code language=”python”]
import h5py

class HDF5FileReader:
    def __init__(self, file_path):
        self.file = h5py.File(file_path, 'r')

        # Open the first group and the first dataset in the file
        group_name = list(self.file.keys())[0]
        self.group = self.file[group_name]
        data_set_name = list(self.group.keys())[0]
        self.data_set = self.group[data_set_name]

        self.number_of_sources = self.data_set.shape[1]
        self.time_span = self.data_set.shape[0]

        # Attributes needed to reconstruct the time vector
        self.start_time = self.data_set.attrs['start_time']
        self.sampling_interval = self.data_set.attrs['sampling_interval']
        self.sampling_rate = int(1 / self.sampling_interval)
        # (…)

[/code]

The above code snippet shows how to read data from an HDF5 file using the h5py Python module.
