KPU

KPU is a general-purpose neural network processor. It performs convolutional neural network computations at low power consumption, for example to obtain the size, coordinates, and class of detected objects, or to detect and classify faces and objects.

  • KPU has the following features:
    • Supports fixed-point models trained by mainstream frameworks, with some restrictions
    • No direct limit on the number of network layers; the parameters of each convolutional layer can be configured separately, including the number of input and output channels and the input and output width and height
    • Supports two convolution kernel sizes, 1x1 and 3x3
    • Supports any form of activation function
    • The maximum supported neural network parameter size for real-time operation is 5.5 MiB to 5.9 MiB
    • The maximum supported network parameter size for non-real-time operation is (flash capacity - firmware size)

1. Module Method

1.1. Loading the model

Load a model from flash or file system

import KPU as kpu
task = kpu.load(offset or file_path)

Parameters

  • offset: the offset of the model in flash; for example, 0xd00000 means the model is flashed starting at 13 MiB
  • file_path: the path of the model file in the file system, such as "/sd/xxx.kmodel"
Return
  • kpu_net: kpu network object
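
For example, a minimal sketch of both ways of loading; the flash offset and the SD-card file name below are assumptions, substitute the offset or path your model actually uses:

import KPU as kpu

# load a model that was flashed to offset 0x300000 (assumed offset)
task = kpu.load(0x300000)

# or load a kmodel file from the SD card (hypothetical file name)
# task = kpu.load("/sd/mymodel.kmodel")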

1.2. Initializing the yolo2 network

Pass the initialization parameters to the yolo2 network model

import KPU as kpu
task = kpu.load(offset or file_path)
anchor = (1.889, 2.5245, 2.9465, 3.94056, 3.99987, 5.3658, 5.155437, 6.92275, 6.718375, 9.01025)
kpu.init_yolo2(task, 0.5, 0.3, 5, anchor)

Parameters

  • kpu_net: kpu network object

  • threshold: probability threshold

  • nms_value: box_iou threshold

  • anchor_num: number of anchors

  • anchor: anchor parameters, which must be consistent with those used to train the model; they are given as width/height pairs, so anchor_num anchors correspond to 2 × anchor_num values

1.3. Deinitialization

Deinitialize the model and free the memory it occupies

import KPU as kpu
task = kpu.load(offset or file_path)
kpu.deinit(task)

Parameters

  • kpu_net: kpu_net object returned by kpu.load

1.4. Running the yolo2 network

Run the yolo2 network model on an image and return the detection results

import KPU as kpu
import image
task = kpu.load(offset or file_path)
anchor = (1.889, 2.5245, 2.9465, 3.94056, 3.99987, 5.3658, 5.155437, 6.92275, 6.718375, 9.01025)
kpu.init_yolo2(task, 0.5, 0.3, 5, anchor)
img = image.Image()
kpu.run_yolo2(task, img) # Note: this snippet alone is not a working example; refer to the routine in section 2

Parameters

  • kpu_net: kpu_net object returned by kpu.load
  • image_t: image captured from sensor
Return
  • list: list of kpu_yolo2_find
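
A minimal sketch of consuming the returned list is shown below; rect() is also used in the routine in section 2, while classid() and value() are assumed accessors for the class index and confidence:

import sensor
import KPU as kpu

# task is assumed to come from kpu.load() followed by kpu.init_yolo2()
img = sensor.snapshot()                      # capture a frame from the camera
objects = kpu.run_yolo2(task, img)
if objects:
    for obj in objects:
        img.draw_rectangle(obj.rect())       # bounding box of the detection
        print(obj.classid(), obj.value())    # assumed accessors: class index, confidence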

1.5. Network forward operation (forward)

Run the loaded network model forward up to the specified layer and output the feature map of that layer (layer 3 in the example below)

import KPU as kpu
task = kpu.load(offset or file_path)
……
fmap=kpu.forward(task,img,3)

Parameters

  • kpu_net: kpu_net object
  • image_t: image captured from sensor
  • int: the index of the layer up to which the network is computed
Return
  • fmap: Feature map object, containing the feature map of all channels of the current layer

1.6. fmap feature map

Extract the data of the specified channel of the feature map into an image object

img=kpu.fmap(fmap,1)

Parameters

  • fmap: feature map object
  • int: the channel number of the feature map to extract
Return
  • img_t: grayscale image generated from the specified channel of the feature map

1.7. fmap_free Release Feature Map

Release feature map object

kpu.fmap_free(fmap)

Parameters

  • fmap: feature map object
Return
  • none
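
Taken together, a minimal sketch of the feature-map flow (forward, fmap, fmap_free); the model offset is an assumption, and the sensor and LCD are assumed to be initialized as in the routine in section 2:

import sensor
import lcd
import KPU as kpu

task = kpu.load(0x300000)          # assumed flash offset
img = sensor.snapshot()            # capture a frame
fmap = kpu.forward(task, img, 3)   # run the network up to layer 3
ch0 = kpu.fmap(fmap, 0)            # grayscale image of channel 0
lcd.display(ch0)                   # show the channel on the LCD
kpu.fmap_free(fmap)                # release the feature map when done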

1.8. netinfo

Get the network structure information of the model

info=kpu.netinfo(task)
layer0=info[0]

Parameters

  • kpu_net: kpu_net object
Return
  • netinfo list: a list with the information of every layer, including:
    • index: the index of the current layer in the network
    • wi: input width
    • hi: input height
    • wo: output width
    • ho: output height
    • chi: number of input channels
    • cho: number of output channels
    • dw: whether the layer is a depth-wise layer
    • kernel_type: convolution kernel type, 0 for 1x1, 1 for 3x3
    • pool_type: pooling type, 0 for no pooling, 1 for 2x2 max pooling, 2: ...
    • para_size: size in bytes of the convolution parameters of the current layer
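
For example, a minimal sketch that prints the output size of every layer; only the wo()/ho() accessors that also appear in the routine in section 2 are used here, and the model offset is an assumption:

import KPU as kpu

task = kpu.load(0x300000)          # assumed flash offset
info = kpu.netinfo(task)
for i, layer in enumerate(info):
    print("layer", i, ":", layer.wo(), "x", layer.ho())   # output width x height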
    

2. Routine

Running the face detection demo

Model download address: http://dl.sipeed.com/MAIX/MaixPy/model/face_model_at_0x300000.kfpkg

import sensor
import image
import lcd
import KPU as kpu

lcd.init()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.run(1)
task = kpu.load(0x300000) # use kfpkg to pack the kmodel with the maixpy firmware and download it to flash
anchor = (1.889, 2.5245, 2.9465, 3.94056, 3.99987, 5.3658, 5.155437, 6.92275, 6.718375, 9.01025)
a = kpu.init_yolo2(task, 0.5, 0.3, 5, anchor)
while(True):
    img = sensor.snapshot()
    code = kpu.run_yolo2(task, img)
    if code:
        for i in code:
            print(i)
            a = img.draw_rectangle(i.rect())
    a = lcd.display(img)
a = kpu.deinit(task)

Running the feature map routine

Model download address: http://dl.sipeed.com/MAIX/MaixPy/model/face_model_at_0x300000.kfpkg

The model is an 8-bit fixed-point model, about 380KB in size, and the layer information is:

  • Layers 1-2: 160x120
  • Layers 3-6: 80x60
  • Layers 7-10: 40x30
  • Layers 11-16: 20x15

import sensor
import image
import lcd
import KPU as kpu
index=3  
lcd.init()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.run(1)
task=kpu.load(0x300000)
img=image.Image()
info=kpu.netinfo(task)
layer=info[index]
w=layer.wo()
h=layer.ho()
num=int(320*240/w/h)
list=[None]*num
x_step=int(320/w)
y_step=int(240/h)
img_lcd=image.Image()
while True:
    img=sensor.snapshot()
    fmap=kpu.forward(task,img,index)
    for i in range(0,num):
        list[i]=kpu.fmap(fmap,i)
    for i in range(0,num):
        list[i].stretch(64,255)
    for i in range(0,num):
        a=img_lcd.draw_image(list[i],((i%x_step)*w,(int(i/x_step))*h))
    lcd.display(img_lcd)
    kpu.fmap_free(fmap)
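
In this routine, each channel of the selected layer's feature map is converted to a grayscale image, stretched to the 64-255 range, and tiled across the 320x240 LCD, one cell per channel; change index to inspect a different layer.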
