
Ten minutes to understand the whole process of CANN application development

2022-01-26 23:29:52 Huawei Cloud Developer Community

Abstract: As the engine of the Ascend AI processor, CANN supports a variety of mainstream AI frameworks, including MindSpore, TensorFlow, PyTorch, and Caffe, and provides more than 1,200 basic operators.

On July 8, 2021, the fourth World Artificial Intelligence Conference was held in Shanghai. Hu Houkun, Huawei's rotating chairman, mentioned in his opening speech that Huawei has launched the heterogeneous computing architecture CANN (Compute Architecture for Neural Networks). As the engine of the Ascend AI processor, CANN supports a variety of mainstream AI frameworks, including MindSpore, TensorFlow, PyTorch, and Caffe, and provides more than 1,200 basic operators. At the same time, CANN offers the open and easy-to-use ACL (Ascend Computing Language) programming interface (API), and provides graph-level and operator-level compilation optimization, automatic tuning, and other functions for network models. CANN supports a variety of AI frameworks above it and serves the AI processor and programming below it; it is a key platform for improving the computing efficiency of the Ascend AI processor.

How do you understand CANN, how do you use CANN, and in particular how do you quickly deploy a neural network based on CANN's open and easy-to-use ACL programming interface?

I believe most developers are familiar with training a model, but deploying the model to the edge, turning it into an application, and generating value (or helping you make money?) may be much less clear. But once it comes to helping you make money, I'm sure you'll be interested, right?

What? Making money! Now I know you're not sleepy anymore!

We found that the Ascend community already provides many online experience cases. You can not only try the effect online, but also run online experiments, and the source code is open, so you can learn from the resources provided by the open-source community and do secondary development based on these cases. Experience, learn, develop: three real steps!

So let's pick an interesting case to walk you through the process of developing an Ascend AI application based on CANN. What I want to introduce today is an eye-catching cartoon image generation application. Whether it is a pretty young lady or a cute little pet, it can generate a cartoon version in seconds. As shown below, select a preset picture or upload your own photo, then click "Generate immediately" to try it.

First, let's see how to find the application:

Go to the Ascend community: www.hiascend.com/

Developer -> Application cases. OK, you can already see the dog; the case is titled "Cartoon image generation". Click in and you will see it.

The experience speaks for itself: select the image to be processed on the left, or upload your own picture, then click Generate. The picture is uploaded to Ascend AI computing resources on Huawei Cloud, and after inference the result is sent back to the front end for display. Even thousands of miles away, it still responds within seconds. A big thumbs-up!

Now let's get down to business and walk through the CANN application development process. The complete development flow is as follows:

0) Project preparation

Hardware: prepare the server and inference card, install the operating system, and configure the network.

Trained model: a Caffe, MindSpore, or TensorFlow model.

1) Model conversion

Offline model: Ascend 310 operator list, ATC conversion tool.

2) Operator development and debugging

Custom operator development: TBE DSL, TBE TIK, and other operator development tools.

3) Application development and debugging

ACL interfaces: resource initialization, data transfer, data preprocessing, model inference, data post-processing, etc.

4) Performance tuning

Performance optimization: bottleneck analysis, memory optimization, model optimization, etc.

Looks complicated, right? But we will tackle the main task first (completing the application development) and leave the other optimization work (accuracy, performance, etc.) to digest slowly later. There will be plenty of time for that, won't there?

Without further ado, suppose we already have a model that meets our needs and is already supported by the Ascend AI processor (so no operator development is needed). All we have to do now is understand the model, analyze its preprocessing (preparing data for the model) and post-processing (displaying results), obtain the offline model through model conversion, and then develop the code (which is really just calling the various CANN ACL APIs to load the model and run inference). Python is chosen here, so we will use CANN's pyACL.

To sum up, in this case we only need to complete the following steps:

1. Understand the model 2. Convert the model 3. Develop code based on the CANN ACL interfaces

1. Understand the model

The CartoonGAN algorithm ("CartoonGAN: Generative Adversarial Networks for Photo Cartoonization", proposed by Yang Chen, Yong-Jin Liu et al.) is based on a GAN network. The generator and discriminator are trained iteratively under the adversarial loss constraint provided by the discriminator, and the network finally converts natural pictures into cartoon-style pictures. The effect is amazing.

The following figure shows the overall framework of cartoon image generation:

It is based on Generative Adversarial Networks (GAN): the architecture includes a generator (Generator) and a discriminator (Discriminator). The two networks are trained iteratively under the adversarial loss constraint provided by the discriminator, so that the network can finally convert natural pictures into cartoon-style pictures while keeping the content information of the original picture.

Figure 1: Overall architecture of cartoon image generation

Generator structure

The generator adopts an auto-encoder with a fully convolutional structure, consisting of an encoder (Figure 3) and a decoder (Figure 4).

Figure 3: Encoder structure

Figure 4: Decoder structure

The encoder takes a real picture as input. Its architecture starts with a convolution layer with a 7×7 kernel and stride 1, followed by two down-sampling blocks with stride 2 for feature extraction; the information useful for style image generation is extracted in this stage. It is followed by four residual blocks (Residual Block) used to reconstruct content features.

The decoder architecture starts with residual blocks, followed by two up-sampling blocks (stride 2) to reconstruct the different style features. Finally, a convolution layer with a 7×7 kernel and stride 1 produces the generated cartoon picture as the final output.
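To make the structure above concrete, here is a minimal tf.keras sketch of such an encoder-decoder generator. It only mirrors the layer layout described in the text (a 7×7, stride-1 convolution, two stride-2 down-sampling blocks, residual blocks, two stride-2 up-sampling blocks, and a final 7×7, stride-1 convolution); the channel counts, activations, and the exact number of residual blocks are illustrative assumptions, not the configuration of the original CartoonGAN model.

import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, channels=256):
    # Two 3x3 convolutions with a skip connection
    y = layers.Conv2D(channels, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(channels, 3, padding="same")(y)
    return layers.Add()([x, y])

def build_generator(num_res_blocks=4):
    inp = layers.Input(shape=(256, 256, 3))
    # Encoder: 7x7 stride-1 convolution, then two stride-2 down-sampling blocks
    x = layers.Conv2D(64, 7, strides=1, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2D(256, 3, strides=2, padding="same", activation="relu")(x)
    # Residual blocks reconstruct the content features
    for _ in range(num_res_blocks):
        x = residual_block(x, 256)
    # Decoder: two stride-2 up-sampling blocks, then a final 7x7 stride-1 convolution
    x = layers.Conv2DTranspose(128, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2D(3, 7, strides=1, padding="same", activation="tanh")(x)
    return tf.keras.Model(inp, out, name="cartoon_generator")

generator = build_generator()
generator.summary()   # 256x256x3 image in -> 256x256x3 image out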

At this point, our goal is to turn the trained model into an AI application, so all we need to understand here is the generator. As the structure above shows, both the input and the output of the model are images. What about the preprocessing and post-processing?

Preprocessing and post-processing of the original model

The test script can be found on GitHub: github.com/taki0112/Ca…

Combining our earlier understanding with a study of this test script, we can work out how the images are preprocessed and post-processed.

The Ascend CANN software stack provides two sets of tools for data preprocessing: one is called AIPP (AI Preprocessing), and the other is called DVPP (Digital Vision Preprocessing).

DVPP is the digital vision preprocessing module in the CANN software stack. On the Ascend 310 AI processor it supports cropping and scaling, JPEG encoding and decoding, video encoding and decoding, PNG decoding, and other functions. For details, please refer to the Ascend community documentation:

support.huaweicloud.com/devg-cannAp…

AIPP completes image preprocessing on the AI Core, including color space conversion (converting the image format), image normalization (subtracting the mean / multiplying by a factor), and cropping (specifying the starting point of the crop and extracting an image of the size required by the neural network). Static AIPP can be configured by setting parameters during ATC model conversion, which is quite convenient. For details, please refer to the Ascend community documentation: support.huaweicloud.com/tg-cannAppl…

1) Preprocessing

Here, let's assume that you already understand DVPP and AIPP. Based on our understanding of the original model, the division of labor between DVPP and AIPP during image preprocessing is as follows (the principle of the division: DVPP + AIPP together must be equivalent to the preprocessing of the original model):

DVPP:

1. Decoding: the JPEG picture is decoded into YUV420SP, with the output width aligned to 128 and the height aligned to 16. For example, if the original input image size is (500, 375), the decoded image size becomes (512, 384).

2. Image scaling: use the VPC interface of DVPP to scale the image to the size required by the model (256, 256). The required width alignment is 16 and the height alignment is 2; the valid data region (500, 375) inside the (512, 384) image is scaled to (256, 256).

Note: aspect-ratio-preserving scaling could also be used here, but it would require cropping with AIPP, so we will leave that for later.

AIPP:

1. Color space conversion: YUV -> RGB

2. Image normalization, with AIPP configured as mean_chn=[127.5, 127.5, 127.5], min_chn=[0.5, 0.5, 0.5], var_reci_chn=[0.007874, 0.007874, 0.007874]
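To sanity-check that this DVPP + AIPP combination matches the preprocessing of the original model, the short Python sketch below reproduces the arithmetic involved: the width/height alignment applied during JPEG decoding, and AIPP's per-channel normalization, whose documented form is roughly (pixel - mean_chn - min_chn) * var_reci_chn. It only illustrates the numbers quoted above; it does not call any DVPP or AIPP API.

import numpy as np

def align_up(value, alignment):
    # Round value up to the nearest multiple of alignment
    return (value + alignment - 1) // alignment * alignment

# DVPP JPEG decoding: width aligned to 128, height aligned to 16
w, h = 500, 375
print(align_up(w, 128), align_up(h, 16))   # 512 384

# DVPP VPC then scales the valid (500, 375) region to the model input size (256, 256).

# AIPP normalization (applied per channel after the YUV -> RGB conversion)
mean_chn, min_chn, var_reci_chn = 127.5, 0.5, 0.007874
pixels = np.array([0.0, 128.0, 255.0])
print((pixels - mean_chn - min_chn) * var_reci_chn)   # about [-1.008, 0.0, 1.0]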

2) Post-processing

Following the image post-processing flow of the original model, the image pixels are converted back to [0, 255] and the image is saved. This is implemented directly with the scipy library.
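Since the normalization above maps pixel values roughly onto [-1, 1], the post-processing is just the inverse mapping back to [0, 255]. A minimal sketch is shown below; it assumes the model output is a 1 x 256 x 256 x 3 float array in [-1, 1] and uses Pillow for saving (the original test script relies on scipy's image helpers, which are deprecated in recent scipy versions).

import numpy as np
from PIL import Image

def save_cartoon(output, path="cartoon.jpg"):
    # Drop the batch dimension, map [-1, 1] back to [0, 255], and save the image
    img = np.asarray(output).reshape(256, 256, 3)
    img = (img + 1.0) * 127.5
    img = np.clip(img, 0, 255).astype(np.uint8)
    Image.fromarray(img).save(path)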

2. Model conversion

The original network model is a TensorFlow model, while the Ascend CANN software stack requires an .om offline model. Therefore, the ATC model conversion tool is needed to convert the .pb file into a DaVinci model file supported by the Ascend 310 processor.

ATC model conversion command:

atc --output_type=FP32 --input_shape="train_real_A:1,256,256,3" --input_format=NHWC --output="./cartoonization" --soc_version=Ascend310 --insert_op_conf=./insert_op.cfg --framework=3 --model="./cartoonization.pb" --precision_mode=allow_fp32_to_fp16

Description of important parameters:

--model: path and file name of the original model file.

--output: path and file name of the converted offline model.

--precision_mode=allow_fp32_to_fp16: sets the precision mode of the network model, giving priority to keeping the original precision. If an operator in the network model supports float32, the original float32 precision is kept; if an operator does not support float32, its precision is reduced to float16.

--insert_op_conf: path and file name of the configuration file for inserted operators, for example the AIPP preprocessing operator.

AIPP configuration file: the AIPP file is configured to match the preprocessing of the original model, i.e. the color space conversion and normalization settings described above.
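The screenshot of the configuration is not reproduced here, so the sketch below shows what an insert_op.cfg with the settings described above could look like. The field names (aipp_op, aipp_mode, input_format, csc_switch, mean_chn_*, min_chn_*, var_reci_chn_*) follow the AIPP configuration format; the values simply repeat the normalization parameters listed earlier, and the YUV-to-RGB conversion matrix coefficients are omitted, so the final file should be checked against the AIPP documentation.

aipp_op {
    aipp_mode : static
    input_format : YUV420SP_U8
    src_image_size_w : 256
    src_image_size_h : 256
    csc_switch : true           # enable YUV -> RGB color space conversion
    rbuv_swap_switch : false
    # conversion matrix and bias fields (matrix_r*c*, output_bias_*) omitted here
    mean_chn_0 : 127.5
    mean_chn_1 : 127.5
    mean_chn_2 : 127.5
    min_chn_0 : 0.5
    min_chn_1 : 0.5
    min_chn_2 : 0.5
    var_reci_chn_0 : 0.007874
    var_reci_chn_1 : 0.007874
    var_reci_chn_2 : 0.007874
}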

More parameter descriptions can be found at: support.huaweicloud.com/atctool-can…

3. Application development

The application adopts the modular design shown in the figure below; the modules work together to complete the inference of a picture.

The main functions of each module are as follows (a minimal pyACL sketch of this flow is given after the list):

1. Request run-time resources;

2. Load the model file and request memory for the model's inputs and outputs;

3. Data acquisition: obtain the original image for inference;

4. Data preprocessing: preprocess the model's input image;

5. Model inference: feed the preprocessed data into the model for inference;

6. Inference result parsing: post-process the inference results.
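The real implementation lives in the open-source repository linked below, but to give a feel for what this module list looks like in pyACL, here is a heavily trimmed sketch of the resource request, model loading, inference, and cleanup steps. It assumes a CANN environment where the acl Python package is available, skips DVPP/AIPP preprocessing, dataset destruction, and all error handling, and the buffer sizes and constants are illustrative (a real application reads the input/output sizes from the model description); the interface names follow recent pyACL releases, so check them against your installed CANN version.

import acl
import numpy as np

DEVICE_ID = 0
ACL_MEM_MALLOC_HUGE_FIRST = 0    # memory allocation policy
ACL_MEMCPY_HOST_TO_DEVICE = 1
ACL_MEMCPY_DEVICE_TO_HOST = 2

# 1. Request run-time resources
acl.init()
acl.rt.set_device(DEVICE_ID)
context, _ = acl.rt.create_context(DEVICE_ID)

# 2. Load the offline model and request device memory for its input and output
#    (here both are assumed to be 1 x 256 x 256 x 3 float32 buffers)
model_id, _ = acl.mdl.load_from_file("./cartoonization.om")
buf_size = 1 * 256 * 256 * 3 * 4
dev_input, _ = acl.rt.malloc(buf_size, ACL_MEM_MALLOC_HUGE_FIRST)
dev_output, _ = acl.rt.malloc(buf_size, ACL_MEM_MALLOC_HUGE_FIRST)

# 3. / 4. Data acquisition and preprocessing are skipped in this sketch;
#         a prepared host buffer is simply copied to the device.
host_input = np.zeros((1, 256, 256, 3), dtype=np.float32)   # placeholder image
acl.rt.memcpy(dev_input, buf_size,
              acl.util.bytes_to_ptr(host_input.tobytes()), buf_size,
              ACL_MEMCPY_HOST_TO_DEVICE)

# Wrap the device buffers into the datasets expected by the model
in_dataset = acl.mdl.create_dataset()
acl.mdl.add_dataset_buffer(in_dataset, acl.create_data_buffer(dev_input, buf_size))
out_dataset = acl.mdl.create_dataset()
acl.mdl.add_dataset_buffer(out_dataset, acl.create_data_buffer(dev_output, buf_size))

# 5. Model inference
acl.mdl.execute(model_id, in_dataset, out_dataset)

# 6. Parse the inference result: copy it back to the host for post-processing
host_ptr, _ = acl.rt.malloc_host(buf_size)
acl.rt.memcpy(host_ptr, buf_size, dev_output, buf_size, ACL_MEMCPY_DEVICE_TO_HOST)
result = np.frombuffer(acl.util.ptr_to_bytes(host_ptr, buf_size),
                       dtype=np.float32).reshape(1, 256, 256, 3)

# Release resources
acl.rt.free_host(host_ptr)
acl.rt.free(dev_input)
acl.rt.free(dev_output)
acl.mdl.unload(model_id)
acl.rt.destroy_context(context)
acl.rt.reset_device(DEVICE_ID)
acl.finalize()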

The full source code is not shown here. The Ascend community provides a better learning path: you can learn the development process directly through the online experiment. Just click "Online experiment" at the top right and try it for free.

After trying it out, you can also go to the open-source repository to obtain the source code. If you have any questions, feel free to open an issue and communicate directly with the developers.

gitee.com/ascend/samp…

Related links

Model paper:

openaccess.thecvf.com/content_cvp…

Deployment of the original model:

gitee.com/syz82521194…

The relevant source code can be obtained from the open-source repository:

gitee.com/ascend/samp…

Click Follow to be the first to learn about Huawei Cloud's latest technologies~

Copyright notice
Author: Huawei Cloud Developer Community. Please include the original link when reprinting, thank you.
https://en.cdmana.com/2022/01/202201262329501522.html
