Ying-Kang

Member Since 4 years ago

3 followers · 2 following · 21 stars · 4 repos

13 contributions in the last year

Pinned
⚡ Materials for Group 49's project at the 2019 DeeCamp summer camp (Guangzhou): "A Stroke of Genius — automatic poster generation"
⚡ Tianchi competition data preprocessing
Activity
Jan 14 · 1 week ago

Ying-Kang issue ayatough/vscode-image-tile-viewer

Ying-Kang

Bug occurs when clicking an image shown in the preview image tile

I was struggling to find an image-tile preview plugin until I found this one.

Great work achieving a preview like this!

However, when I tried to open a single image from the preview window, a bug occurred.

I found that the path in the preview window uses "\" (the separator commonly used on Windows),

while absolute paths on Linux use "/",

which makes it impossible to open a single image from "image tile viewer" with the default viewer.

# source img path:
/usr/images/a.jpg

# preview img's path in image tile viewer:
\usr\image\a.jpg

# the backslash path cannot be resolved as an absolute path on Linux
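A minimal sketch of the normalization this report asks for (in Python for illustration only; the extension itself is a VS Code plugin, so the real fix would live in its TypeScript source): rewriting a Windows-style backslash path into the POSIX form Linux expects.

```python
import posixpath

def to_posix(path: str) -> str:
    """Rewrite Windows-style "\\" separators as "/" so the result
    is a valid absolute path on Linux."""
    return posixpath.normpath(path.replace("\\", "/"))

print(to_posix("\\usr\\images\\a.jpg"))  # -> /usr/images/a.jpg
```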

I'd appreciate it if you could fix this bug.

By the way, it would be great to be able to trigger "image tile preview" from the right-click menu as well.

I'd be glad to recommend this plugin to my colleagues, and I sincerely look forward to your response.

Thanks

Jan 12 · 2 weeks ago

Nov 29 · 1 month ago

Ying-Kang issue comment alibaba/MNN

Ying-Kang

Don't support type [ArgMax], 552

Platform (for cross-compilation, also list the target platform):

iOS

GitHub version:

latest

Build method:

the officially recommended way

Build log:

no build errors

Problem description

The visualized inference results are correct, but this error appears while the model is loading. What could be the cause?

Ying-Kang

I hope the team can implement ArgMax on the GPU. At the moment only the CPU backend supports ArgMax, yet it is a fairly common op that shows up in many models; I hope GPU support arrives soon.
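Until a GPU ArgMax kernel lands, one common workaround (sketched below in Python with NumPy; the logits layout is an assumption for illustration) is to let the network output raw per-class logits and compute the argmax on the host:

```python
import numpy as np

# Hypothetical raw output of a segmentation network: per-class logits
# with layout (num_classes, height, width).
logits = np.random.rand(34, 64, 64).astype(np.float32)

# Host-side ArgMax over the class axis produces the (height, width)
# label map that a GPU ArgMax op would otherwise compute on-device.
label_map = logits.argmax(axis=0)
print(label_map.shape)  # -> (64, 64)
```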

Ying-Kang issue alibaba/MNN

Ying-Kang

Incorrect inference results on iOS after converting an ONNX model to MNN

Platform (for cross-compilation, also list the target platform):

  • linux

GitHub version:

  • master
  • 71cd04e91c9281412ff8f7c2b29cd9019195131e

Build method:

  • built on linux, no errors

Build log:

  • built on linux, no errors

Detailed description

After converting the ONNX model (which converts and validates successfully) to an MNN model and bundling it with the compiled inference library in an iOS app, inference does not produce correct results.

The model does image segmentation. Every dimension of the node outputs is correct, but the model's results do not match the ONNX output. Checks performed so far:

  • The ONNX model's segmentation results are fine; the visualization matches expectations.

  • The MNN conversion reported no errors. Following the official docs I ran fastTestOnnx.py and got TEST_SUCCESS. Screenshot of the successful conversion: image. Screenshot of the passing validation: image.

  • Input and output dimensions were checked with teammates on the iOS side; both are obtained automatically inside MNN, and the only relevant places have been checked. The minimal iOS calling code is as follows:

//
//  DuFootMeasureModel.m
//  DuMNNDemo
//
//  Created by admin on 11/24/21.
//

#import <opencv2/opencv.hpp>
#import "DuFootMeasureModel.h"
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import <MNN/HalideRuntime.h>
#import <MNN/MNNDefine.h>
#import <MNN/ErrorCode.hpp>
#import <MNN/ImageProcess.hpp>
#import <MNN/Interpreter.hpp>
#import <MNN/Tensor.hpp>
#import "UIImage+Utility.h"
#import "UIImage+Resize.h"

@interface DuFootMeasureModel ()
{
    std::shared_ptr<MNN::Interpreter>net_;
    MNN::Session *session_;
}

@property (nonatomic, strong) NSArray *outputKeys;
@property (nonatomic, strong) UIImage *resultImage ;
@end

@implementation DuFootMeasureModel

/// Create the model
-(instancetype)initWithModelPath:(NSString *)modelPath {
    self = [super init];
    if (self) {
        if (![[NSFileManager defaultManager] fileExistsAtPath:modelPath]) {
            NSLog(@"---- model file does not exist!!");
            return nil;
        }
        self.outputKeys = @[@"BG", @"A1", @"A2", @"A3", @"A4", @"B1", @"B2", @"C1", @"C2", @"D1", @"D2", @"E1", @"E2", @"E3", @"E4", @"F1", @"F2", @"G1", @"G2", @"H1", @"H2", @"J1", @"J2", @"J3", @"J4", @"K1", @"K2", @"P", @"O", @"I", @"Z", @"N", @"0", @"foot"];
        [self loadSessionWithModelPath:modelPath];
    }
    return self;
}

/// Create the interpreter and session
- (void)loadSessionWithModelPath:(NSString *)modelPath {
    net_ = std::shared_ptr<MNN::Interpreter>(MNN::Interpreter::createFromFile(modelPath.UTF8String));
    
    //  2. create the session
    MNN::ScheduleConfig config;
    config.type = MNN_FORWARD_METAL;      // GPU
    config.backupType = MNN_FORWARD_CPU;  // fall back to CPU if the GPU backend cannot run
    config.numThread = 4;

    MNN::BackendConfig backendConfig;
    backendConfig.memory = MNN::BackendConfig::Memory_High;  // memory
    backendConfig.power = MNN::BackendConfig::Power_High;  // power
    backendConfig.precision = MNN::BackendConfig::PrecisionMode::Precision_High;  // precision
    config.backendConfig = &backendConfig;
    
    session_ = net_->createSession(config);
}

/// Run inference on an image
- (NSDictionary *)inferImageWithPath:(NSString *)imagePath {
    if (![[NSFileManager defaultManager] fileExistsAtPath:imagePath]) {
        NSLog(@"ERROR: image does not exist: %@", imagePath);
        return nil;
    }
    UIImage *image = [UIImage imageWithContentsOfFile:imagePath];
    if (!image) {
        NSLog(@"ERROR: file is not an image! %@", imagePath);
        return nil;
    }
    
    //  resize image
    MNN::Tensor *inputTensor = net_->getSessionInput(session_, nullptr);
    auto dims = inputTensor->shape();
    MNN::Tensor::DimensionType dimsType = inputTensor->getDimensionType();
//    NSLog(@"--- dims type: %d", dimsType); // 1: CAFFE
    int width = 0, height = 0;
    switch (dimsType) {
        case MNN::Tensor::DimensionType::TENSORFLOW:{
            height = dims[1];
            width = dims[2];
        }
            break;
        case MNN::Tensor::DimensionType::CAFFE:{
            height = dims[2];
            width = dims[3];
        }
            break;
        default:{
            height = dims[2];
            width = dims[3];
        }
            break;
    }
//    NSLog(@"--- width: %d, height: %d", width, height); // 512, 512
    if (width != image.size.width || height != image.size.height) {
        image = [image resizeWithSize:CGSizeMake(width, height)];
    }
    //  fill the input data
    unsigned char *rgba = (unsigned char *)calloc(width * height * 4, sizeof(unsigned char));
    {
        CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
        CGContextRef contextRef = CGBitmapContextCreate(rgba,
                                                        width,
                                                        height,
                                                        8,
                                                        width * 4,
                                                        colorSpace,
                                                        kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault);
        
        CGContextDrawImage(contextRef, CGRectMake(0, 0, width, height), image.CGImage);
        CGContextRelease(contextRef);
    }
    const float means[3]   = {103.94f, 116.78f, 123.68f};
    const float normals[3] = {0.017f, 0.017f, 0.017f};
    auto pretreat = std::shared_ptr<MNN::CV::ImageProcess>( MNN::CV::ImageProcess::create(MNN::CV::RGBA, MNN::CV::RGB, means, 3, normals, 3));
    
//    MNN::CV::Matrix matrix;
//    matrix.postScale((width - 1) / 223.0, (height - 1) / 223.0);
//    pretreat->setMatrix(matrix);
//    inputTensor->print();
    pretreat->convert(rgba, width, height, 0, inputTensor);
    
    free(rgba);
    
    CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
    //  output
    MNN::Tensor *output = net_->getSessionOutput(session_, nullptr);
    MNN::Tensor outCache(output);
    
    MNN::Tensor inputCache(inputTensor);
    inputTensor->copyToHostTensor(&inputCache);
    
    //  run
    inputTensor->copyFromHostTensor(&inputCache);
    net_->runSession(session_);
    output->copyToHostTensor(&outCache);
    
    NSLog(@"--- inference time: %f ms", (CFAbsoluteTimeGetCurrent() - start) * 1000.0);
    
    int *outputData = (int *)outCache.buffer().host;
    int outWidth = outCache.width();
    int outHeight = outCache.height();
    
    //  debug code
    int *tmpData = new int[outWidth * outHeight];
    for (int i = 0; i < outWidth * outHeight; i ++) {  // iterate over the OUTPUT size, not the input size
//        NSLog(@"---%d", outputData[i]);
        tmpData[i] = outputData[i] == 0 ? 0 : 255;
    }
    UIImage *retImage = utility::UIImageWithDataRGBA(tmpData, outWidth, outHeight);
    self.resultImage = retImage;
    
    delete[] tmpData;  // allocated with new[], so release with delete[] rather than free()
    
    
    return [self calculateCenterPointWithData:outputData width:outWidth height:outHeight];
}


/// Return the result image
- (UIImage *)getResultImage {
    return self.resultImage;
}

@end

The code above is the relevant calling portion; the commented-out debug logging is left in as well.

In the ONNX environment the model takes an RGB image of shape (1, 3, 512, 512), dtype uint8, and outputs a (512, 512) image, also uint8. Please take a look: the ONNX and MNN outputs differ a lot in accuracy, with many false-positive segmentations, so this cannot go into production, and so far I cannot locate the problem. My model is here: model.mnn.zip. I would really appreciate help with this last step before deployment.
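The thread's eventual fix was substituting different mean/normal constants. A minimal Python sketch of the `(pixel - mean) * normal` normalization that `MNN::CV::ImageProcess` applies; the constants are copied from the snippet above, and whether they match the model's training-time preprocessing is exactly what has to be verified:

```python
import numpy as np

# Constants from the Objective-C snippet above; whether they match the
# model's training-time preprocessing is what needs checking.
MEANS   = np.array([103.94, 116.78, 123.68], dtype=np.float32)
NORMALS = np.array([0.017, 0.017, 0.017], dtype=np.float32)

def preprocess(rgb_u8):
    """MNN ImageProcess-style normalization: (x - mean) * normal,
    applied per channel on an HxWx3 uint8 image."""
    return (rgb_u8.astype(np.float32) - MEANS) * NORMALS

x = np.full((512, 512, 3), 128, dtype=np.uint8)
y = preprocess(x)
print(y[0, 0])  # -> approximately [0.409 0.191 0.073]
```

If the device-side pipeline uses different constants (or a different channel order) than the ONNX-side pipeline, the outputs diverge exactly as described in this issue.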

Ying-Kang issue comment alibaba/MNN

Ying-Kang

Incorrect inference results on iOS after converting an ONNX model to MNN

Ying-Kang

Great, thanks, learned something. After substituting the corresponding values, the outputs align.

Ying-Kang issue comment alibaba/MNN

Ying-Kang

Incorrect inference results on iOS after converting an ONNX model to MNN

Ying-Kang

Adding the ONNX model: bisenet_fcn_90.onnx.zip

Adding the ONNX and MNN outputs for the same image: image

My guess is that something goes wrong during model conversion, yet the script reports test success, which is quite puzzling.
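When the converter reports TEST_SUCCESS but device outputs still diverge, one quick way to quantify the gap (a self-contained Python sketch; the shapes and the synthetic false-positive region are made up for illustration) is to measure pixel-level disagreement between the two label maps:

```python
import numpy as np

def label_map_disagreement(ref, test):
    """Fraction of pixels where two segmentation label maps differ."""
    assert ref.shape == test.shape
    return float(np.mean(ref != test))

ref = np.zeros((512, 512), dtype=np.uint8)   # e.g. the ONNX output
test = ref.copy()                            # e.g. the MNN output
test[:64, :64] = 1                           # hypothetical false-positive region
print(f"disagreement: {label_map_disagreement(ref, test):.4%}")  # -> 1.5625%
```

A near-zero rate points at post-processing; a large rate like the screenshots here points at conversion or preprocessing.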

Nov 28 · 1 month ago

Ying-Kang issue alibaba/MNN

Ying-Kang

Incorrect inference results on iOS after converting an ONNX model to MNN


Ying-Kang issue Tencent/TNN

Ying-Kang

Incorrect inference results on iOS after converting an ONNX model to TNN

1. Environment

  • Build OS and Version: Ubuntu 18.04
  • RunTime OS Version: iOS
  • RunTime DEVICE: ARM

2. GitHub version: master 6e93212b8c8f3438620a02372ac47fcc720fadae

3. Compile method: the official image

4. Build log: the official image

5. Describe the bug

  • Deploying the compiled inference library and the converted model in an iOS app does not produce correct results. The model does image segmentation; the output node dimensions are all correct, but there is no segmentation output at all, so the model conversion or the input preprocessing is probably at fault. The original ONNX model has been verified to be fine: converted and deployed to Android through the MNN framework it produces correct results, but on TNN it does not, using the same ONNX model. Please help analyze where the problem is.

  • A first self-check shows that TNN and ONNX are aligned; run log: image

The main iOS-side code is below; it runs without the other helper files (except the data):

#import <opencv2/opencv.hpp>
#import "DuTNNImageViewController.h"
#import <tnn/tnn.h>
#import <tnn/utils/mat_utils.h>
#import <Metal/Metal.h>
#import "UIImage+Utility.h"
#import "UIImage+Resize.h"
#include <algorithm>

#define ModelName @"fast_scnn_8937"

@interface DuTNNImageViewController ()

@property (weak, nonatomic) IBOutlet UIImageView *imageView;
@end

@implementation DuTNNImageViewController
{
    tnn::DeviceType device_type_;
    std::shared_ptr<tnn::TNN> net_;
    std::shared_ptr<tnn::Instance>net_instance_;
}

- (void)viewDidLoad {
    [super viewDidLoad];
    self.view.backgroundColor = [UIColor cyanColor];
    
//    self.imageView.image = [UIImage imageNamed:@"0.jpeg"];
    
    [self loadModel];
}

-(void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    
    NSString *imagePath = [[NSBundle mainBundle] pathForResource:@"0.jpeg" ofType:nil];
    [self runImageWithPath:imagePath];

//    dispatch_async(dispatch_get_global_queue(0, 0), ^{
//    });
}

- (void)loadModel {
    //  initialize the model
    NSString *libPath = [[[NSBundle mainBundle] pathForResource:@"tnn" ofType:@"bundle"] stringByAppendingPathComponent:@"tnn.metallib"];
    NSString *modelPath = [[NSBundle mainBundle] pathForResource:ModelName ofType:@"tnnmodel"];
    NSString *protoPath = [[NSBundle mainBundle] pathForResource:ModelName  ofType:@"tnnproto"];
    
    std::string protoContent = [NSString stringWithContentsOfFile:protoPath encoding:NSUTF8StringEncoding error:nil].UTF8String;
    
    NSData *modelData = [NSData dataWithContentsOfFile:modelPath];
    std::string modelContent = std::string((const char *)modelData.bytes, modelData.length);
    
    tnn::ModelConfig config;
    config.model_type = tnn::MODEL_TYPE_TNN;
    config.params = {protoContent, modelContent};
    auto net = std::make_shared<TNN_NS::TNN>();
    tnn::Status status = net->Init(config);
    if (status == tnn::TNN_OK) {
        NSLog(@"  model initialized successfully ~");
    } else {
        NSLog(@"  model initialization failed ~");
    }
    net_ = net;
    
    // CPU
    device_type_ = tnn::DEVICE_ARM;
    // GPU
//    device_type_ = tnn::DEVICE_METAL;
        
    tnn::NetworkConfig netConfig;
    netConfig.library_path = {libPath.UTF8String};
    netConfig.device_type = device_type_;
    netConfig.precision = tnn::PRECISION_AUTO;
    
    std::shared_ptr<TNN_NS::Instance> instance;
    instance = net_->CreateInst(netConfig, status);
    if (status != TNN_NS::TNN_OK || !instance) {
        NSLog(@"network init error ~");
    }else {
        NSLog(@"network initialized successfully ~");
    }
    net_instance_ = instance;
}

- (void)runImageWithPath:(NSString *)imagePath {
    UIImage *image = [UIImage imageWithContentsOfFile:imagePath];
    auto image_data = utility::UIImageGetData(image);
    
    int imageH = (int)CGImageGetHeight(image.CGImage);
    int imageW  = (int)CGImageGetWidth(image.CGImage);
    if (imageW != 512 || imageH != 512) {
        [image resizeWithSize:CGSizeMake(512, 512)];
        imageW = imageH = 512;
    }
    
    tnn::DimsVector image_dims = {1, 4, 512, 512};
    
    std::shared_ptr<TNN_NS::Mat> image_mat = nullptr;
    if (device_type_ == tnn::DEVICE_ARM) {//    CPU
        image_mat = std::make_shared<TNN_NS::Mat>(tnn::DEVICE_ARM, TNN_NS::N8UC4, image_dims, image_data.get());
    }else { // GPU
        image_mat = std::make_shared<TNN_NS::Mat>(tnn::DEVICE_METAL, TNN_NS::N8UC4, image_dims);
        id<MTLTexture> texture_rgba = (__bridge id<MTLTexture>)image_mat->GetData();
        if (!texture_rgba) {
            NSLog(@"Error texture input rgba is nil");
            return;
        }
        
        [texture_rgba replaceRegion:MTLRegionMake2D(0, 0, image_dims[3], image_dims[2]) mipmapLevel:0 withBytes:image_data.get() bytesPerRow:image_dims[3] * 4];
    }
    
    // input shape
    tnn::DimsVector shape = {};
    tnn::BlobMap blobMap = {};
    if (net_instance_) {
        net_instance_ ->GetAllInputBlobs(blobMap);
    }
    if (blobMap.size() > 0) {
        if (blobMap.begin()->second) {
            shape = blobMap.begin()->second->GetBlobDesc().dims;
        }
    }
    
    auto input_mat = std::make_shared<TNN_NS::Mat>(image_mat->GetDeviceType(), TNN_NS::N8UC4, shape);
    // preprocess
    tnn::Status status = tnn::TNN_OK;
    
    void *command_queue = nullptr;
    // the original code never assigned this call's return value,
    // so the error check below could never fire
    status = net_instance_->GetCommandQueue(&command_queue);
    if (status != tnn::TNN_OK) {
        NSLog(@"Error: GetCommandQueue failed ~");
    }
    
    tnn::ResizeParam resize_param;
    resize_param.type = TNN_NS::INTERP_TYPE_LINEAR;
    
    auto dst_dims = input_mat->GetDims();
    auto src_dims = image_mat->GetDims();
    resize_param.scale_w = dst_dims[3] / static_cast<float>(src_dims[3]);
    resize_param.scale_h = dst_dims[2] / static_cast<float>(src_dims[2]);
    
    status = tnn::MatUtils::Resize(*(image_mat.get()), *(input_mat.get()), resize_param, command_queue);
    if (status != TNN_NS::TNN_OK) {
        NSLog(@"Error: Resize failed ~");
    }
    
    // inputs
    tnn::BlobMap input_blob_map;
    net_instance_->GetAllInputBlobs(input_blob_map);
    for (const auto& item : input_blob_map) {
        auto name = item.first;
        
        tnn::MatConvertParam input_convert_param;
        // pass `name` so this still works for models with several inputs
        status = net_instance_->SetInputMat(input_mat, input_convert_param, name);
        if (status != TNN_NS::TNN_OK) {
            NSLog(@"Error: SetInputMat failed ~");
        }
    }
    
    //  forward
    status = net_instance_->ForwardAsync(nullptr);
    if (status != TNN_NS::TNN_OK) {
        NSLog(@"Error: Forward 出错: %s ~", status.description().c_str());
        return;
    }
    
    //  outputs (one Mat per output blob, created inside the loop; the
    //  original declared an extra output_mat here that was never used)
    tnn::BlobMap output_blob_map;
    net_instance_->GetAllOutputBlobs(output_blob_map);
    for (const auto& item : output_blob_map) {
        auto name = item.first;
        tnn::MatConvertParam output_convert_param;
        std::shared_ptr<TNN_NS::Mat> output_mat = nullptr;
        // pass `name` (the original passed "") so the right blob is fetched
        // when the model has more than one output
        status = net_instance_->GetOutputMat(output_mat, output_convert_param, name, tnn::DEVICE_ARM, tnn::N8UC4);
//        status = net_instance_->GetOutputMat(output_mat);
        NSLog(@"GetOutputMat name:  %s", name.c_str());
        if (status != TNN_NS::TNN_OK) {
            NSLog(@"Error: %s - GetOutputMat failed: %s ~", name.c_str(), status.description().c_str());
        } else {
            UIImage *output_image = utility::UIImageWithDataRGBA(output_mat->GetData(), output_mat->GetHeight(), output_mat->GetWidth());
            self.imageView.image = output_image;
            
            
//            // dump the output mat
            NSLog(@"--- dumping output_mat ~");
            
            NSLog(@"Batch:%d - Channel:%d - Width:%d - Height:%d - DeviceType:%d - MatType:%d ~", output_mat->GetBatch(), output_mat->GetChannel(), output_mat->GetWidth(), output_mat->GetHeight(), output_mat->GetDeviceType(), output_mat->GetMatType());
            
            int width = output_mat->GetWidth();
            int height = output_mat->GetHeight();
            
            // GetOutputMat above requested N8UC4, so the buffer holds uint8
            // RGBA bytes, not floats; the original also leaked a
            // `new float[...]` allocation by immediately overwriting the pointer
            const uint8_t *array = (const uint8_t *)(output_mat->GetData());
            for (int i = 0; i < height; i++) {
                std::cout << std::endl;
                for (int j = 0; j < width; j++) {
                    std::cout << (int)array[(j + i * width) * 4] << " ";
                }
            }
//            for (int i = 0; i < height; i ++) {
//                std::cout << "\n" << std::endl ;
//                for (int j = 0; j < width; j ++) {
//                    std::cout << (int)array[j*3 + i*width*3] << (int)array[j*3 + i*width*3 + 1] << (int)array[j*3 + i*width*3+2] << " " ;
//                }
//            }
        }
    }
}

@end

I am attaching my model (model.zip); please help me check it. The model input is {1, 3, 512, 512} and the output is {1, 512, 512}, and the information in the proto is also correct, yet it simply will not run. This issue looks very similar: https://github.com/Tencent/TNN/issues/1140. I also tried changing the type to {1, 4, 512, 512} with N8UC4, but it still outputs an all-zero matrix. I hope the maintainers can look into my problem the way that issue was handled. Thanks.
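One frequent cause of wrong or all-zero results when feeding an N8UC4 buffer into a 3-channel model is the RGBA-to-NCHW preprocessing. This is a minimal NumPy sketch of what that conversion has to do; the `scale`/`bias` values are placeholders for whatever MatConvertParam the model actually needs, not values taken from this code:

```python
import numpy as np

def rgba_to_nchw(rgba, scale=1.0 / 255.0, bias=0.0):
    """Turn an HxWx4 uint8 RGBA buffer (what N8UC4 holds) into the
    1x3xHxW float tensor a {1, 3, 512, 512} ONNX model expects.
    scale/bias stand in for MatConvertParam and are assumptions."""
    assert rgba.ndim == 3 and rgba.shape[2] == 4 and rgba.dtype == np.uint8
    rgb = rgba[:, :, :3].astype(np.float32)   # drop the alpha channel
    chw = np.transpose(rgb, (2, 0, 1))        # HWC -> CHW
    return chw[None, ...] * scale + bias      # add batch dim -> NCHW

# shape check with a dummy opaque 512x512 image
dummy = np.zeros((512, 512, 4), dtype=np.uint8)
dummy[..., 3] = 255
tensor = rgba_to_nchw(dummy)
print(tensor.shape)  # (1, 3, 512, 512)
```

If the channel order or normalization differs from what the ONNX model saw during export, a segmentation head can collapse to a single class, which looks exactly like an all-zero mask.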

Nov 26, 2 months ago
issue

Ying-Kang issue comment alibaba/MNN

Ying-Kang
Ying-Kang

Don't support type [ArgMax], 552

Ying-Kang
Ying-Kang

Is there a way to make the GPU support this op? If I need to implement it as a custom op, how would I do that? Is there a tutorial? This op should be fairly simple to implement.
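Until the GPU backend gains an ArgMax kernel, a common workaround (sketched here in NumPy; this is not MNN API) is to cut the model just before the ArgMax node, run everything else on the GPU, and take the argmax on the CPU in post-processing:

```python
import numpy as np

def cpu_argmax(logits):
    """logits: (1, C, H, W) scores fetched back from the device.
    Returns the (1, H, W) class-index map the ArgMax node would emit."""
    return np.argmax(logits, axis=1)

# e.g. a 34-class segmentation head at 512x512
logits = np.random.rand(1, 34, 512, 512).astype(np.float32)
mask = cpu_argmax(logits)
print(mask.shape)  # (1, 512, 512)
```

A per-pixel argmax over a few dozen channels is cheap compared to the network itself, so moving it off the GPU rarely costs noticeable time.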

Nov 25, 2 months ago
started
started

Nov 24, 2 months ago
issue

Ying-Kang issue alibaba/MNN

Ying-Kang
Ying-Kang

Don't support type [ArgMax], 552

Nov 23, 2 months ago
issue

Ying-Kang issue comment Tencent/TNN

Ying-Kang
Ying-Kang

axis<1 and keep_dims=0 not supported!

1. Environment

  • Build OS and Version: Mac
  • RunTime OS Version: iOS
  • RunTime DEVICE: ARM/OPENCL/METAL

2. Github version: latest

3. Compile method: converted with the official Docker image; no errors at any point during conversion

4. Build log: converted with the official Docker image; no errors at any point during conversion

5. Describe the bug: Forward reports an error (screenshot: middle_img_v2_03814616-5580-4785-b44c-44edd785c73g). The model input is 1 3 512 512 and I made sure the shape passed in is correct, but the error above keeps appearing and the output is also wrong

6. Runtime log: instance.Forward Error: code: 0x3000 msg: MetalArgMaxOrMinLayerAcc: axis<1 and keep_dims=0 not supported!

7. Screenshots: middle_img_v2_b545f018-1a68-4170-a217-c203ed2722eg

instance.Forward Error: code: 0x3000 msg: MetalArgMaxOrMinLayerAcc: axis<1 and keep_dims=0 not supported!

This error keeps appearing and I have no way to resolve it for now; asking for help

Ying-Kang
Ying-Kang

Please take a look when you have time, thanks

issue

Ying-Kang issue comment Tencent/TNN

Ying-Kang
Ying-Kang

axis<1 and keep_dims=0 not supported!

Ying-Kang
Ying-Kang

Hello, is there a solution yet? My latest attempts show this is a GPU problem: it runs on CPU, but the loaded model's CPU output is all zeros (image). I checked the model with the checker that ships in the convert Docker image and the model is fine (image). Mine is a very simple model: the input is (1, 3, 512, 512) and the output is (1, 1, 512, 512). The output mat should be a matrix of class indices numbered from 0, but right now it looks like it is all zeros. What is going on?
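Before concluding the output is broken, it is worth checking whether the mask is merely displayed as black: class indices 0, 1, 2, … are nearly indistinguishable when rendered directly as pixel intensities. A quick histogram over the returned buffer (assumed here to be a (1, 1, 512, 512) integer array) tells the two cases apart:

```python
import numpy as np

def class_histogram(mask):
    """Count pixels per class; a genuinely broken run yields one class only."""
    classes, counts = np.unique(mask, return_counts=True)
    return dict(zip(classes.tolist(), counts.tolist()))

# simulated mask: mostly background, one 100x100 region of class 3
mask = np.zeros((1, 1, 512, 512), dtype=np.int64)
mask[..., 100:200, 100:200] = 3
print(class_histogram(mask))  # {0: 252144, 3: 10000}
```

A healthy segmentation run shows several classes with plausible pixel counts; a single entry of {0: 262144} means the runtime really did produce an all-zero result.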

Nov 22, 2 months ago
issue

Ying-Kang issue alibaba/MNN

Ying-Kang
Ying-Kang

Error is between ArgMax_269 and Unsqueeze_270

Platform (for cross-compilation, also include the target platform):

linux

Github version:

latest

Compile method:

compiled manually; build succeeded

Error

The build succeeded and the model was successfully converted to MNN, but verifying the onnx model with fastTestOnnx.py fails with the following error:

Error is between  ArgMax_269  and  Unsqueeze_270

The full log is:

Debug Mode:  True
onnx/test.onnx
/root/miniconda3/envs/mnn/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:353: UserWarning: Deprecation warning. This ORT build has ['CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. The next release (ORT 1.10) will require explicitly setting the providers parameter (as opposed to the current behavior of providers getting set/registered by default based on the build flags) when instantiating InferenceSession.For example, onnxruntime.InferenceSession(..., providers=["CUDAExecutionProvider"], ...)
  "based on the build flags) when instantiating InferenceSession."
tensor(float)
['499']
inputs:
input
onnx/
outputs:
onnx/499.txt (1, 128, 64, 64)
onnx/
Test onnx
Start to Convert Other Model Format To MNN Model...
[11:58:27] /nvme/PROJ/ARProj/MNN/tools/converter/source/onnx/onnxConverter.cpp:30: ONNX Model ir version: 6
Start to Optimize the MNN Net...
inputTensors : [ input, ]
outputTensors: [ 499, ]
Converted Success!
input
output: 499
499: (1, 128, 64, 64, )
TEST_SUCCESS

Test Node : BatchNormalization_216 True
onnx/test.onnx
tensor(float)
['507']
inputs:
input
onnx/
outputs:
onnx/507.txt (1, 128, 64, 64)
onnx/
Test onnx
Start to Convert Other Model Format To MNN Model...
[11:58:30] /nvme/PROJ/ARProj/MNN/tools/converter/source/onnx/onnxConverter.cpp:30: ONNX Model ir version: 6
Start to Optimize the MNN Net...
inputTensors : [ input, ]
outputTensors: [ 507, ]
Converted Success!
input
output: 507
507: (1, 128, 64, 64, )
TEST_SUCCESS

Test Node : Relu_224 True
onnx/test.onnx
tensor(float)
['548']
inputs:
input
onnx/
outputs:
onnx/548.txt (1, 34, 512, 512)
onnx/
Test onnx
Start to Convert Other Model Format To MNN Model...
[11:58:51] /nvme/PROJ/ARProj/MNN/tools/converter/source/onnx/onnxConverter.cpp:30: ONNX Model ir version: 6
Start to Optimize the MNN Net...
inputTensors : [ input, ]
outputTensors: [ 548, ]
Converted Success!
input
output: 548
548: (1, 34, 512, 512, )
TEST_SUCCESS

Test Node : Sub_265 True
onnx/test.onnx
tensor(float)
['551']
inputs:
input
onnx/
outputs:
onnx/551.txt (1, 34, 512, 512)
onnx/
Test onnx
Start to Convert Other Model Format To MNN Model...
[11:59:14] /nvme/PROJ/ARProj/MNN/tools/converter/source/onnx/onnxConverter.cpp:30: ONNX Model ir version: 6
Start to Optimize the MNN Net...
inputTensors : [ input, ]
outputTensors: [ 551, ]
Converted Success!
input
output: 551
551: (1, 34, 512, 512, )
TEST_SUCCESS

Test Node : Div_268 True
onnx/test.onnx
tensor(float)
['552']
inputs:
input
onnx/
outputs:
onnx/552.txt (1, 512, 512)
onnx/
Test onnx
Start to Convert Other Model Format To MNN Model...
[11:59:20] /nvme/PROJ/ARProj/MNN/tools/converter/source/onnx/onnxConverter.cpp:30: ONNX Model ir version: 6
Start to Optimize the MNN Net...
inputTensors : [ input, ]
outputTensors: [ 552, ]
Converted Success!
input
output: 552
552: (1, 512, 512, )
TEST_SUCCESS

Test Node : ArgMax_269 True
Error is between  ArgMax_269  and  Unsqueeze_270

How can I solve this problem?
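For what it is worth, the ArgMax_269 + Unsqueeze_270 pair that fastTestOnnx.py stops at is numerically trivial; in NumPy it is just the following (axis=1 is an assumption read off the {1, 34, 512, 512} and {1, 512, 512} shapes in the log). That makes it practical to chop both nodes off the exported graph and reproduce them in post-processing if the converter cannot be fixed:

```python
import numpy as np

# stand-in for the tensor feeding ArgMax_269 (34-channel scores)
x = np.random.rand(1, 34, 512, 512).astype(np.float32)

argmax_269 = np.argmax(x, axis=1)              # {1, 34, 512, 512} -> {1, 512, 512}
unsqueeze_270 = np.expand_dims(argmax_269, 1)  # -> {1, 1, 512, 512}

print(argmax_269.shape, unsqueeze_270.shape)
```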

Nov 19, 2 months ago
issue

Ying-Kang issue comment Tencent/TNN

Ying-Kang
Ying-Kang

axis<1 and keep_dims=0 not supported!

Ying-Kang
Ying-Kang

model.zip: here is my model; please help me look into the problem

issue

Ying-Kang issue Tencent/TNN

Ying-Kang
Ying-Kang

axis<1 and keep_dims=0 not supported!