Alink now supports integration with TF scripts, including distributed training, and can serve various deep models for inference. See the documentation here: https://www.yuque.com/pinshu/alink_tutorial/gyybst
Activity
Fanoid issue alibaba/Alink
How does an Alink operator rely on Flink for job execution?
I have run into a problem.
The machine has Alink 1.4, PyFlink 1.13, and Flink 1.10 installed.
When I run Alink programs, they fail with a ClassNotFoundException. In my view this is a jar conflict; the case is as below. The job does not use the jars bundled with PyFlink; it is executed on Flink 1.10 (which was launched beforehand). So Flink 1.10 ends up on the classpath (or in memory somewhere), and Alink chooses Flink 1.10 rather than PyFlink 1.13.
I would like to know how Alink decides which Flink to use, and what the procedure is.
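This is not Alink's actual resolution logic, just a rough diagnostic sketch of the suspected conflict: if `FLINK_HOME` points at a standalone Flink (here 1.10), its jars can shadow the ones bundled with PyFlink. The helper name `guess_flink_source` is made up for illustration.

```python
import os

def guess_flink_source(env):
    """Rough diagnostic, not Alink's real logic: a standalone Flink on
    FLINK_HOME can shadow the jars bundled with pyflink."""
    flink_home = env.get("FLINK_HOME")
    if flink_home:
        return "standalone Flink at " + flink_home
    return "pyflink-bundled Flink"

print(guess_flink_source({"FLINK_HOME": "/opt/flink-1.10"}))  # standalone Flink at /opt/flink-1.10
print(guess_flink_source({}))                                 # pyflink-bundled Flink
```

Running this in the same environment that launches the job shows which installation is likely to win the classpath.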
Fanoid issue alibaba/Alink
Pipeline.load() returns a PipelineModel in PyAlink
I can't call fit() on a pipeline created by Pipeline.load("pipeline.ak") in Python, because the returned object has no fit() method. The same call works in Java.
Reason
In Java, Pipeline.load() returns a Pipeline object; in Python it returns a PipelineModel object. A misspelled class name in Pipeline.load() in pyalink/alink/pipeline/base.py appears to cause this.
```python
class Pipeline(Estimator):
    ......

    @staticmethod
    def collectLoad(operator: BatchOperator):
        _j_pipeline_cls = get_java_class("com.alibaba.alink.pipeline.Pipeline")
        j_pipeline = _j_pipeline_cls.collectLoad(operator.get_j_obj())
        stages = Pipeline._check_lazy_params(j_pipeline)
        return Pipeline(*stages, j_pipeline=j_pipeline)

    @staticmethod
    def load(file_path: Union[str, FilePath]):
        # Note the class name below: "PipelineModel" instead of "Pipeline".
        _j_pipeline_cls = get_java_class("com.alibaba.alink.pipeline.PipelineModel")
        j_pipeline = _j_pipeline_cls()
        if isinstance(file_path, (str,)):
            path = file_path
            j_pipeline = _j_pipeline_cls.load(path)
        elif isinstance(file_path, FilePath):
            operator = file_path
            j_pipeline = _j_pipeline_cls.load(operator.get_j_obj())
        else:
            raise ValueError("file_path must be str or FilePath")
        stages = Pipeline._check_lazy_params(j_pipeline)
        return Pipeline(*stages, j_pipeline=j_pipeline)
```
Solution
Use collectLoad() instead of load() in PyAlink.
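The mix-up can be shown with a toy reproduction in plain Python (these are not PyAlink's real classes, just stand-ins): a load() that constructs the model class returns an object without fit().

```python
# Toy reproduction of the class mix-up; NOT PyAlink's actual implementation.
class PipelineModel:
    pass  # a fitted model: no fit() method, only transform/predict in practice

class Pipeline:
    def fit(self, data):
        return PipelineModel()

    @staticmethod
    def load_buggy(path):
        # Bug: constructs the model class, so the result has no fit().
        return PipelineModel()

    @staticmethod
    def load_fixed(path):
        # Fix: construct Pipeline, mirroring the Java behavior.
        return Pipeline()

print(hasattr(Pipeline.load_buggy("pipeline.ak"), "fit"))  # False
print(hasattr(Pipeline.load_fixed("pipeline.ak"), "fit"))  # True
```

The one-line fix in base.py would be pointing load() at the `com.alibaba.alink.pipeline.Pipeline` Java class, as collectLoad() already does.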
Fanoid issue comment alibaba/Alink
Pipeline.load() returns a PipelineModel in PyAlink
Hi, this bug is fixed. Please install the latest version of PyAlink.
Fanoid issue alibaba/Alink
BatchOperator#firstN gets stuck when using Flink >= 1.13.1
As the title says, BatchOperator#firstN will cause the task to get stuck when using Flink >= 1.13.1.
Possible reason:
BatchOperator#firstN calls DataSet#first, which in turn calls FirstReducer#reduce. But FirstReducer#reduce does not exhaust the values iterable.
Possible workarounds:
- Use Flink <= 1.13.0;
- Do not use BatchOperator#firstN.
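The fix concept can be sketched in plain Python (this is not Flink's actual FirstReducer, just an illustration): take the first n records but keep draining the iterator, so the upstream producer is not left blocked on an unconsumed channel.

```python
def first_n(values, n):
    """Collect the first n records but still exhaust the iterator,
    mirroring what a well-behaved FirstReducer-style reduce must do."""
    out = []
    for v in values:          # keep iterating past n instead of breaking early
        if len(out) < n:
            out.append(v)
    return out

print(first_n(iter(range(10)), 3))  # [0, 1, 2]
```

Breaking out of the loop after n items would be the buggy variant: the remaining records stay buffered and, on newer Flink versions, the task hangs.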
Fanoid issue comment alibaba/Alink
BatchOperator#firstN gets stuck when using Flink >= 1.13.1
Already fixed. Please check the latest version.
Fanoid issue alibaba/Alink
Support torch model inference
Support libtorch model inference through Alink.
Fanoid issue comment alibaba/Alink
Support torch model inference
Already added. Closed.
Fanoid issue alibaba/Alink
error: invalid path 'docs/cn/批组件/特征工程/特征构造: OverWindow (OverWindowBatchOp).md'
Cloning the code with git produces an error:

```
git clone https://github.com/alibaba/Alink.git
Cloning into 'Alink'...
remote: Enumerating objects: 19652, done.
remote: Counting objects: 100% (13723/13723), done.
remote: Compressing objects: 100% (7412/7412), done.
remote: Total 19652 (delta 6743), reused 11736 (delta 5290), pack-reused 5929
Receiving objects: 100% (19652/19652), 11.48 MiB | 1.65 MiB/s, done.
Resolving deltas: 100% (9599/9599), done.
error: invalid path 'docs/cn/批组件/特征工程/特征构造: OverWindow (OverWindowBatchOp).md'
fatal: unable to checkout working tree
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status' and retry with 'git restore --source=HEAD :/'
```
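The checkout fails because ':' is not allowed in Windows file names (outside of the drive-letter position). A small, hypothetical checker illustrates which characters are reserved:

```python
# Characters reserved in Windows file names (drive-letter colons aside).
INVALID_WIN_CHARS = set('<>:"|?*')

def is_valid_windows_name(name):
    """True if no reserved Windows file-name character appears in name."""
    return not any(c in INVALID_WIN_CHARS for c in name)

bad = '特征构造: OverWindow (OverWindowBatchOp).md'
print(is_valid_windows_name(bad))          # False - the ':' makes checkout fail
print(is_valid_windows_name('README.md'))  # True
```

Renaming the file in the repository to drop the colon resolves the clone failure on Windows.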
Fanoid issue comment alibaba/Alink
error: invalid path 'docs/cn/批组件/特征工程/特征构造: OverWindow (OverWindowBatchOp).md'
Already fixed; please check out the latest code.
Fanoid issue alibaba/Alink
A malformed file name causes git clone to fail
On Windows 10, running git clone on the branch code reports an error.
The offending file is:
docs/cn/批组件/特征工程/特征构造
Fanoid issue comment alibaba/Alink
A malformed file name causes git clone to fail
Hi, this issue has been fixed.
Fanoid issue comment alibaba/Alink
The Alink WebUI shows "experiment data is being created, please wait" and then errors when dragging operators
@shaomengwang Hi: after logging in to the page on port 9090, why does dragging an operator show "experiment data is being created"? The server startup log shows no errors either. What could be the cause?
Please refer to the discussion in issue #204.
Fanoid issue comment alibaba/Alink
A question about the Alink WebUI
After starting the project from the WebUI example and laying out the flow, running it keeps failing. Alink has already been deployed via Flink. Is this related to the Flink version, or is some other configuration needed? The Flink version is 1.10.
@StaveZhao https://miaowenting.site/2022/01/30/Alink%20%E5%BF%AB%E9%80%9F%E4%BD%BF%E7%94%A8%20WebUI/
I installed it by following exactly that link, but the IPs it produces are all localhost rather than concrete IPs, which causes that error.
The front-end/back-end separation in the code still has some issues. As a temporary fix, you can change the first line of this file to your target IP and then rebuild and redeploy: https://github.com/alibaba/Alink/blob/master/webui/web/src/requests/request.ts
The proper fix is to add forwarding to the server in the nginx configuration inside the front-end Docker image.
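A hedged sketch of what that nginx forwarding might look like (the location prefix and the upstream address `alink-server:9090` are assumptions, not the project's actual values; adjust to your deployment):

```nginx
# Inside the front-end image's nginx config: forward API calls to the server
# so the browser never needs a hard-coded backend IP.
location /api/ {
    proxy_pass http://alink-server:9090/;   # assumed upstream address
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
```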
Fanoid issue comment alibaba/Alink
A question about the Alink WebUI
The server uses Flink 1.9 by default; you may need to change the dependency and rebuild.
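A minimal sketch of such a dependency change in Maven (the property name `flink.version` is an assumption; check the server's actual pom.xml for the real property and module layout):

```xml
<!-- Assumed property name; the server defaults to Flink 1.9. -->
<properties>
    <flink.version>1.10.0</flink.version>
</properties>
```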
Fanoid push alibaba/Alink
commit sha: 53bef0f1d17c023af1ce30b4116e4c30b21f6c68
push time: 2 months ago
Fanoid issue comment mars-project/mars
Hi, I am just curious about this: "What is interesting is that the Graph in Mars can support loops; we may implement some loop semantics in the future. I think it might be helpful for some iterative algorithms, especially those in machine learning."
Is this feature available in the current release? I just can't find it in the documentation.
Are there plans to implement algorithms such as convolutional neural networks and LSTMs?