Enhancement: Optimize Node

```
   Node                Node
    |                   |
  UNION        to     Child
  / | \
Empty Child Empty
```
Implement functionality where node graphs can "live" within the main node graph as a single node. This is similar to Blender's node group.
Preflight Checklist

- I agree to follow the Code of Conduct that this project adheres to.
- I have searched the issue tracker for an issue that matches the one I want to file, without success.

Use case. Why is this important?

As of now, users stick to the Manual approval mode for their productions because it is more predictable. However, manually approving updates for a massive number of nodes is painful. What users actually do is switch the approval mode from Manual to Automatic every time a new update is released, which is inconvenient to do all the time.

Proposed Solution

Provide a mechanism similar to #185. An example NodeGroup:

```yaml
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: test
spec:
  nodeType: Static
  disruptions:
    approvalMode: Automatic
    automatic:
      windows:
      - from: "8:00"
        to: "15:00"
        days:
        - Tue
        - Sat
```

Additional Information

Do not forget about the NodeIsNotUpdating alert and Bashible random checksum updates.
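The window semantics proposed above can be sketched in a few lines. This is a hypothetical helper, not Deckhouse code; the function name and window dict shape are assumptions mirroring the YAML:

```python
from datetime import datetime, time

# Hypothetical helper (not Deckhouse code): decide whether automatic
# disruption approval is currently allowed, given windows shaped like
# the NodeGroup spec above.
def update_allowed(now: datetime, windows) -> bool:
    day = now.strftime("%a")  # weekday abbreviation, e.g. "Tue"
    for w in windows:
        start_h, start_m = map(int, w["from"].split(":"))
        end_h, end_m = map(int, w["to"].split(":"))
        in_window = time(start_h, start_m) <= now.time() <= time(end_h, end_m)
        if day in w["days"] and in_window:
            return True
    return False

windows = [{"from": "8:00", "to": "15:00", "days": ["Tue", "Sat"]}]
```

With the windows above, an update at 09:00 on a Tuesday would be approved immediately, while the same update arriving on a Wednesday would wait for the next window.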
Hi again (I'm new to server editing, please be indulgent). I wanted to learn how the _pick_up() attribute could be helpful, so I tried to see what it would do on my server, combined with a ba.newnode() call, but now I got THIS: Probably because I don't know how to use these attributes/arguments and/or functions, I may need someone qualified with these commands to give me a list of accepted/valid node examples. Thanks in advance!
QosborMaanous Updated
Describe what you would like changed, and why. When I have a defense powered by two nodes and one of the nodes is destroyed, why doesn't the other node automatically connect to the defenses left without energy? Obviously it should do so without disconnecting from the blocks it is already connected to; or, if it has already reached its connection limit, it could still cover the structures under its beam that were previously powered by the destroyed node. If it has connections to directly attached structures, it could automatically disconnect and reconnect to those that do not receive energy, but only when a nearby node is destroyed. Describe the changes you want to propose. Include possible alternatives. Why not?
Hello, I have converted my PyTorch model into ONNX. I checked that the ONNX model is valid, and even ran inference successfully on onnxruntime. However, when I run TensorRT, parsing fails with the following error:

In node -1 (convertAxis): UNSUPPORTED_NODE: Assertion failed: axis >= 0 && axis < nbDims

I am using: PyTorch 1.5.1, ONNX 1.10.1, TensorRT 7.

Also, is the Gather node created by ONNX supported by TensorRT? I get the Gather node because I access a tensor as x = y[:, a, b], where x, y, a, and b are tensors: a holds indices from 0 to 63 and b holds indices from 0 up to 1023; x is of size [32, 50000], y is of size [32, 64, 1024], a is of size [50000], and b is of size [50000]. Any advice is appreciated!
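For reference, the indexing pattern described above (which ONNX exporters typically lower to Gather ops) can be reproduced with plain NumPy; shapes here are shrunk for illustration:

```python
import numpy as np

# x = y[:, a, b]: for each batch row, pick element (a[k], b[k]) for every k.
# Shapes mirror the post at a smaller scale: y is (batch, 4, 8),
# a and b are index vectors of length 5, so x is (batch, 5).
y = np.arange(2 * 4 * 8).reshape(2, 4, 8)
a = np.array([0, 1, 3, 2, 0])
b = np.array([7, 0, 5, 5, 1])

x = y[:, a, b]
assert x.shape == (2, 5)

# Equivalent explicit gather with a single non-negative axis: flatten the
# last two axes and fuse the two index vectors into one, which is closer
# to what the exported graph ends up expressing.
flat = y.reshape(2, -1)
x2 = flat[:, a * y.shape[2] + b]
assert np.array_equal(x, x2)
```

Rewriting the fancy indexing into a single flattened gather like `x2` is one way people avoid exporter-generated axes that a parser may reject.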
yasser-h-khalil Updated
Hello, is it possible to use the WASM runtime with Node.js? We are considering switching from Spine and are currently evaluating Rive. Our animations are all performed server-side; we just need to be able to get the bones' local transforms to send to our client. It looks like your JS runtime only works in the client and requires a canvas. I had an issue open a while ago, but I can't seem to find it anymore. At that time you said it was something you were working on. Just wondering if that work ever got done. Thanks!
SupremeTechnopriest Updated
I'm using an HPC cluster with Slurm. In this cluster, every node has 24 CPUs and I'm permitted to use 16 nodes simultaneously. To test my code, I wrote a .sh file:

```
#!/bin/bash
#SBATCH -n 384 -N 16
#SBATCH --ntasks-per-node 24
#SBATCH --cpus-per-task=1
#SBATCH -J test
#SBATCH -p work
#SBATCH -t 00:15:00
julia 1.2\ th2testp.jl
```

and a "1.2 th2testp.jl" file:

```
using Distributed
using JLD
using ClusterManagers
addprocs(SlurmManager(384), N=16, t="00:15:00")
N_t = @distributed (+) for i in workers()
    i
end
println(N_t)
```

Then I get an error:

```
WARNING: failed to select UTF-8 encoding, using ASCII
ERROR: LoadError: TaskFailedException

    nested task error: IOError: connect: connection refused (ECONNREFUSED)
    Stacktrace:
      [1] worker_from_id(pg::Distributed.ProcessGroup, i::Int64)
        @ Distributed /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Distributed/src/cluster.jl:1082
      [2] worker_from_id
        @ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Distributed/src/cluster.jl:1079 [inlined]
      [3] #remote_do#154
        @ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Distributed/src/remotecall.jl:486 [inlined]
      [4] remote_do
        @ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Distributed/src/remotecall.jl:486 [inlined]
      [5] kill
        @ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Distributed/src/managers.jl:675 [inlined]
      [6] create_worker(manager::SlurmManager, wconfig::WorkerConfig)
        @ Distributed /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Distributed/src/cluster.jl:593
      [7] setup_launched_worker(manager::SlurmManager, wconfig::WorkerConfig, launched_q::Vector{Int64})
        @ Distributed /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Distributed/src/cluster.jl:534
      [8] (::Distributed.var"#41#44"{SlurmManager, Vector{Int64}, WorkerConfig})()
        @ Distributed ./task.jl:411

    caused by: IOError: connect: connection refused (ECONNREFUSED)
    Stacktrace:
      [1] wait_connected(x::Sockets.TCPSocket)
        @ Sockets /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Sockets/src/Sockets.jl:532
      [2] connect
        @ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Sockets/src/Sockets.jl:567 [inlined]
      [3] connect_to_worker(host::String, port::Int64)
        @ Distributed /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Distributed/src/managers.jl:639
      [4] connect(manager::SlurmManager, pid::Int64, config::WorkerConfig)
        @ Distributed /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Distributed/src/managers.jl:566
      [5] create_worker(manager::SlurmManager, wconfig::WorkerConfig)
        @ Distributed /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Distributed/src/cluster.jl:589
      [6] setup_launched_worker(manager::SlurmManager, wconfig::WorkerConfig, launched_q::Vector{Int64})
        @ Distributed /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Distributed/src/cluster.jl:534
      [7] (::Distributed.var"#41#44"{SlurmManager, Vector{Int64}, WorkerConfig})()
        @ Distributed ./task.jl:411

...and 311 more exceptions.

Stacktrace:
  [1] sync_end(c::Channel{Any})
    @ Base ./task.jl:369
  [2] macro expansion
    @ ./task.jl:388 [inlined]
  [3] addprocs_locked(manager::SlurmManager; kwargs::Base.Iterators.Pairs{Symbol, Int64, Tuple{Symbol}, NamedTuple{(:N,), Tuple{Int64}}})
    @ Distributed /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Distributed/src/cluster.jl:480
  [4] addprocs(manager::SlurmManager; kwargs::Base.Iterators.Pairs{Symbol, Int64, Tuple{Symbol}, NamedTuple{(:N,), Tuple{Int64}}})
    @ Distributed /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Distributed/src/cluster.jl:444
  [5] top-level scope
    @ ~/Yuby/SidebandCooling/1.2 th2testp.jl:4
in expression starting at /WORK/hust_jmcai_1/Yuby/SidebandCooling/1.2 th2testp.jl:4
connecting to worker 1 out of 384
connecting to worker 2 out of 384
connecting to worker 3 out of 384
...
connecting to worker 383 out of 384
connecting to worker 384 out of 384
srun: Job step aborted: Waiting up to 2 seconds for job step to finish.
srun: error: Timed out waiting for job step to complete
```

But when I change to 10 nodes with 240 CPUs, the error disappears and I get the right answer. What causes this?
Lightup1 Updated bug SLURM
Does anyone have a node list?
bigmac5753 Updated
Besides the usual text nodes there should be support for image nodes with the following features:

- a text node can be converted by dragging an image file onto it
- a delete button will remove the image and convert the node back to text
- image nodes are constrained to the aspect ratio of the image
- drag and drop another image onto an image node to change it
- the original image is referenced with a relative path and not copied
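The aspect-ratio constraint in the list above boils down to deriving one dimension from the other whenever the node is resized. A minimal sketch (function and parameter names are hypothetical, not from the project):

```python
def constrain_to_aspect(new_width, image_width, image_height):
    """Resize an image node by width only: the height always follows from
    the original image's aspect ratio, so the node can never be stretched
    out of proportion."""
    aspect = image_width / image_height
    return new_width, round(new_width / aspect)
```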
rzllmr Updated enhancement
I created a single-zone 6-node cluster, but it created 6 combined worker/master nodes. Is it possible to use this script to create a 6-node cluster with 3 master/worker nodes and 3 worker-only nodes? What settings are required in terraform.tfvars?
willbranney73 Updated
The node-selector has two bugs:

1. If no node is selected, the node-selector doesn't work.
2. It is not possible to select a node inside the node-editor with the mouse; it only works with the keyboard.

Bildschirmaufnahme.2021-10-22.um.13.28.50.mov
JaRi97 Updated bug
Sent by Mikhail Gorban (@Mishanya77). Created by fire. Hi, I have a problem with rewards for my node. I have been running the node since 1.10.21, but I have a reward only for 01.10.21. -- Mikhail
fire-bot Updated
I set up a Kubernetes cluster and use Calico to build the pod network, but I get a timeout when accessing the node via the worker node IP: dial tcp i/o timeout. For example, kube-proxy appears to be working fine:

```
NAMESPACE     NAME                                               READY  STATUS            RESTARTS      AGE    IP  NODE                      NOMINATED NODE  READINESS GATES
kube-system   calico-kube-controllers-75f8f6cc59-tl62w           1/1    Running           0             26h        user-system-product-name  <none>          <none>
kube-system   calico-node-lkjrt                                  0/1    CrashLoopBackOff  6 (109s ago)  7m48s      mofl-c246-wu4             <none>          <none>
kube-system   calico-node-tcvbx                                  1/1    Running           0             7m48s      user-system-product-name  <none>          <none>
kube-system   coredns-78fcd69978-hbwqs                           1/1    Running           0             26h        user-system-product-name  <none>          <none>
kube-system   coredns-78fcd69978-l9zxj                           1/1    Running           0             26h        user-system-product-name  <none>          <none>
kube-system   etcd-user-system-product-name                      1/1    Running           5             26h        user-system-product-name  <none>          <none>
kube-system   kube-apiserver-user-system-product-name            1/1    Running           5             26h        user-system-product-name  <none>          <none>
kube-system   kube-controller-manager-user-system-product-name   1/1    Running           5             26h        user-system-product-name  <none>          <none>
kube-system   kube-proxy-f62rj                                   1/1    Running           0             26h        user-system-product-name  <none>          <none>
kube-system   kube-proxy-pdr8l                                   1/1    Running           0             26h        mofl-c246-wu4             <none>          <none>
kube-system   kube-scheduler-user-system-product-name            1/1    Running           5             26h        user-system-product-name  <none>          <none>
```

But if I run kubectl logs kube-proxy-pdr8l -n kube-system, it returns Error from server: Get "": dial tcp i/o timeout. I get the same output when running kubectl logs calico-node-xxxx -n kube-system. I guess the port-forwarding to the worker node causes some problems: the master node has a static IP address, but the worker node is behind a network hub that port-forwards to it (e.g. [email protected]:40000~50000 -> current worker node). Could you give me some solutions?
mf-giwoong-lee Updated
Ground node needs separate component_type and separate handling in drawing nodes.
dominc8 Updated
Requests https://serverless-stack.slack.com/archives/C01JVDJQU2C/p1634447270360600?thread_ts=1634297371.349000&cid=C01JVDJQU2C
thdxr Updated enhancement
Hi guys, first, thanks for the work; it seems to work pretty well on my hardware! I have a question: I have to use node guarding instead of heartbeat. Any idea how to do/set up this request? Thanks!
bbhk17 Updated
Lesson name: DOM: Nodes. Folder description: /learn/DOM/002_Nodes
reacto11mecha Updated learn
When restarting core-geth, the node re-downloads from block 0. How can I avoid this and start from the block recorded in the local .ethnum?
lijianl Updated
Story

As a user of CentOS CI, I want to be able to use VMs, because I don't want to waste resources. I want to be able to use bare metal nodes, because I have requirements not met by VMs.

Acceptance Criteria

- An abstract base model for nodes exists
- Specific models for bare metal and VM nodes exist

Background

In the previous implementation, the Host model described bare metal nodes (specifically SeaMicro chassis) and mixed generic and specific information. We want to support different node types, so we need base classes modelling shared properties as well as child classes modelling the differences. Hierarchically, it could be e.g.:

- unspecified node -> bare metal -> bare metal in a SeaMicro chassis
- unspecified node -> VM/cloud node -> libvirt/KVM node

(No idea if we need that level of granularity.)
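The hierarchy described above could be sketched as follows. This is plain Python rather than the project's actual models (which are presumably Django model classes); every class and field name here is a hypothetical illustration:

```python
from abc import ABC


class Node(ABC):
    """Abstract base: properties shared by every provisionable node."""

    def __init__(self, name, cpus, memory_mb):
        self.name = name
        self.cpus = cpus
        self.memory_mb = memory_mb


class BareMetalNode(Node):
    """Physical machine; carries hardware-specific fields."""

    def __init__(self, name, cpus, memory_mb, chassis=None):
        super().__init__(name, cpus, memory_mb)
        self.chassis = chassis  # e.g. a SeaMicro chassis identifier


class VirtualNode(Node):
    """VM/cloud node; carries hypervisor-specific fields."""

    def __init__(self, name, cpus, memory_mb, hypervisor="kvm"):
        super().__init__(name, cpus, memory_mb)
        self.hypervisor = hypervisor
```

Whether a third level (SeaMicro chassis, libvirt/KVM) deserves its own subclasses or just a field, as the open question at the end notes, depends on how much behavior actually differs.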
See background at netlify/team-dev#34. We should upgrade the minimal Node.js version to Node >=12.20.0 and make a major release. At the moment, framework-info is only used by:

- build-info: support for Node 10 should be dropped first in build-info: netlify/build-info#180
- netlify-cli: same thing for Netlify CLI: netlify/cli#3512
- netlify-react-ui: I believe framework-info is imported from the browser, using the browser-specific code, so Node.js support should be irrelevant. @nasivuela Could you please confirm this is correct?

Thanks!
ehmicky Updated type: chore
Create a mathematical function and method for the node class that allows node.cost to vary based on the percentage of max inventory filled. Preferably this will model a supply and demand curve. As goods get developed more thoroughly, different goods may use different curve types (based on demand elasticity).
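One possible shape for such a function, a hypothetical linear demand curve rather than the project's actual model: cost rises when inventory is scarce and falls when it is abundant, with an elasticity parameter controlling how sharply the price reacts:

```python
def node_cost(base_cost, inventory, max_inventory, elasticity=0.5):
    """Price a good at a node from its inventory fill level (hypothetical).

    fill = 0.0 (empty, scarce)   -> price above base_cost
    fill = 0.5 (half full)       -> price equals base_cost
    fill = 1.0 (full, abundant)  -> price below base_cost
    """
    fill = max(0.0, min(1.0, inventory / max_inventory))
    multiplier = 1.0 + elasticity * (1.0 - 2.0 * fill)
    # Floor the price so a full warehouse never gives goods away for free.
    return max(0.1 * base_cost, base_cost * multiplier)
```

Goods with different demand elasticity would simply pass a different `elasticity` value, or swap the linear multiplier for a steeper curve.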
KeitaE Updated
Hello, we've started creating some really complex graphs in our project using GenericGraph as our base, and it's working perfectly. However, because of the complexity of these graphs, our team is requesting comment boxes, a node search within the graph, and a press-"F"-to-focus-on-selected feature. This would be similar to how the vanilla UE4 graphs work, such as behavior trees. How would I go about implementing these features, especially the search and comments? Thanks
JonLangfordUK Updated
How do I add multiple nodes? Will this feature be available in the future?
adammau2 Updated question