Doxygen Book
_GstTensorFilterFrameworkInfo Struct Reference

Tensor_Filter Subplugin framework related information.

#include <nnstreamer_plugin_api_filter.h>


Public Attributes

const char * name
 
int allow_in_place
 
int allocate_in_invoke
 
int run_without_model
 
int verify_model_path
 
const accl_hw * hw_list
 
int num_hw
 
accl_hw accl_auto
 
accl_hw accl_default
 
const GstTensorFilterFrameworkStatistics * statistics
 

Detailed Description

Tensor_Filter Subplugin framework related information.

All the information except the supported accelerator is provided statically. Accelerators can be provided based on static or dynamic check dependent on framework support.

Definition at line 158 of file nnstreamer_plugin_api_filter.h.

Member Data Documentation

◆ accl_auto

accl_hw _GstTensorFilterFrameworkInfo::accl_auto

Accelerator to be used in auto mode (acceleration is requested but no specific accelerator is specified for the filter). The default value of -1 implies using the first entry of hw_list.

Definition at line 167 of file nnstreamer_plugin_api_filter.h.

◆ accl_default

accl_hw _GstTensorFilterFrameworkInfo::accl_default

Accelerator to be used by default (when valid user input is not provided). The default value of -1 implies using the first entry of hw_list.

Definition at line 168 of file nnstreamer_plugin_api_filter.h.

◆ allocate_in_invoke

int _GstTensorFilterFrameworkInfo::allocate_in_invoke

TRUE (nonzero) if invoke_NN allocates the output buffer itself and returns its address via the output pointer. Do not change this value after cap negotiation is complete (i.e., after the stream has started).

Definition at line 162 of file nnstreamer_plugin_api_filter.h.

◆ allow_in_place

int _GstTensorFilterFrameworkInfo::allow_in_place

TRUE (nonzero) if in-place transfer from input to output is allowed. Not yet supported in main.

Definition at line 161 of file nnstreamer_plugin_api_filter.h.

◆ hw_list

const accl_hw* _GstTensorFilterFrameworkInfo::hw_list

List of hardware accelerators supported by the framework. A positive response from this check does not guarantee that a model will run successfully with a given accelerator. The subplugin is responsible for allocating and deallocating this list.

Definition at line 165 of file nnstreamer_plugin_api_filter.h.

◆ name

const char* _GstTensorFilterFrameworkInfo::name

Name of the neural network framework, searchable via the FRAMEWORK property. The subplugin is responsible for allocating and deallocating this string.

Definition at line 160 of file nnstreamer_plugin_api_filter.h.

◆ num_hw

int _GstTensorFilterFrameworkInfo::num_hw

Number of hardware accelerators in hw_list supported by the framework.

Definition at line 166 of file nnstreamer_plugin_api_filter.h.

◆ run_without_model

int _GstTensorFilterFrameworkInfo::run_without_model

TRUE (nonzero) when the neural network framework does not need a model file; tensor-filter will then call invoke_NN without a model.

Definition at line 163 of file nnstreamer_plugin_api_filter.h.

◆ statistics

const GstTensorFilterFrameworkStatistics* _GstTensorFilterFrameworkInfo::statistics

Usage statistics collected by the framework. These are shared across all opened instances of this framework.

Definition at line 169 of file nnstreamer_plugin_api_filter.h.

◆ verify_model_path

int _GstTensorFilterFrameworkInfo::verify_model_path

TRUE (nonzero) when the NNS framework, not the subplugin, should verify the paths of model files.

Definition at line 164 of file nnstreamer_plugin_api_filter.h.


The documentation for this struct was generated from the following file:

nnstreamer_plugin_api_filter.h