zenbones

Member since 11 years ago

3 followers
1 following
2 stars
4 repos

3488 contributions in the last year

Pinned
⚡ The SmallMind tools and utilities codebase
⚡ Wicket extensions and integrations
⚡ Helm Charts
Activity
Oct 12 (4 days ago)

zenbones issue fluent/fluent-bit

zenbones
zenbones

Build failure on Ubuntu 20.04 (focal) with Fluent Bit 1.8

Our previously working Ansible install now gives this error...

FAILED! => {"changed": false, "msg": "Failed to update apt cache: E:The repository 'https://packages.fluentbit.io/ubuntu/focal focal Release' does not have a Release file."}

Maybe there is a missing or corrupt package?

Sep 24 (3 weeks ago)

Sep 17 (4 weeks ago)

zenbones issue reactjs/reactjs.org

zenbones
zenbones

Adding ref with React.cloneElement() fails on class component with functional component error

In a class component with all class component children...

constructor(props) {
  super(props);

  this.myref = React.createRef();
}

render() {
  // Clone the first child and attach the ref created in the constructor.
  return React.cloneElement(this.props.children[0], { ref: this.myref });
}

...fails with Warning: Function components cannot be given refs. Attempts to access this ref will fail. Did you mean to use React.forwardRef()?

I get that React.cloneElement() is probably returning a functional wrapper that's confusing React, but given the power of this.props.children, what is the proper way to dynamically add refs? All of the actual children are class components if that matters.

Sep 7 (1 month ago)

Sep 1 (1 month ago)

Aug 25 (1 month ago)

Aug 24 (1 month ago)

zenbones issue comment oracle/graal

zenbones
zenbones

Allow org.graalvm.polyglot.Context.Builder to specify both public and internal file systems (or take the fs specified as public only)

Currently, specifying a FileSystem in Context.Builder sets the same file system as both the public and the internal file system. That means either the runtime must live with the same view of the file system as the sandboxed code, or vice versa, and neither is appropriate. To handle the situation properly, the designer of the sandbox must know which files the internal runtime will require (generally under the GraalVM installation path) and allow them within the same view that the sandbox presents to the sandboxed code. This is awkward and unnecessary. If the FileSystem passed into the context builder were taken as the public file system only, I presume it would be enforced only upon the sandboxed code, which is what the sandbox designer more clearly wants, while the runtime would have access to the default file system limited only by its security context. Even better would be to let the context specify the public and internal file systems separately and explicitly, so that the correct semantics are clearly surfaced and enforced on each.
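
For reference, the setup I'm describing is roughly the following (just a sketch; createSandboxFileSystem() stands in for a custom org.graalvm.polyglot.io.FileSystem implementation, and "js" is only an example language):

import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.io.FileSystem;

// createSandboxFileSystem() is a hypothetical factory for the custom FileSystem
// that I want to expose to the sandboxed guest code only.
FileSystem sandboxFs = createSandboxFileSystem();

// Today, the single FileSystem set here becomes both the public view (guest code)
// and the internal view (the language runtime itself), so sandboxFs must also
// whitelist everything under the GraalVM installation that the runtime needs.
Context context = Context.newBuilder("js")
        .allowIO(true)
        .fileSystem(sandboxFs)
        .build();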

zenbones
zenbones

I'm not sure how best to go about this, but I'm willing to correct the API of the polyglot context, and the implementation of the context builder, and push those back as a pull request.

Aug 23 (1 month ago)

Aug 20 (1 month ago)

zenbones issue oracle/graal

zenbones
zenbones

Is Espresso confined by org.graalvm.polyglot.io.FileSystem?

org.graalvm.polyglot.Context allows setting an org.graalvm.polyglot.io.FileSystem, which allows for sandboxing polyglot languages. I presume that org.graalvm.polyglot.io.FileSystem is sufficient for the current crop of "scripting" languages, e.g. Python, R, JavaScript, and Ruby, as their file system 'objects' are not overly complex. However, I imagine that Espresso code doing something as innocent as Paths.get("/some/path") is going to break out of the sandbox, as that code will engage the full power of Java's java.nio.file.FileSystem and grab the actual default FileSystem for the OS, which can't be backed or replaced by something based on org.graalvm.polyglot.io.FileSystem. Or am I mistaken in that assumption?

This was less of a problem when SecurityManager could at least confine a thread to limited sub-sections of the file system, but with the deprecation of SecurityManager, is there any hope of sandboxing Java on Java? I know SecurityManager is brittle, difficult to maintain, and a drain on performance, but it also allowed sandboxing a single Java thread, which is far more efficient than sandboxing the JVM itself via containerization, where we have to account for the complete overhead of the VM, including memory management, over and over again.

Espresso brings back the possibility of a sandboxed environment for JVM bytecode-based languages, but only if the polyglot context's restrictions actually hold for Espresso.
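
To make the question concrete, the setup I have in mind is roughly this (a sketch only; createSandboxFileSystem() is a hypothetical custom org.graalvm.polyglot.io.FileSystem, and I'm assuming "java" is the Espresso language id):

import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;
import org.graalvm.polyglot.io.FileSystem;

// Hypothetical custom FileSystem restricting the guest's view of the disk.
FileSystem sandboxFs = createSandboxFileSystem();

Context context = Context.newBuilder("java")  // "java" = Espresso, if I have the id right
        .allowIO(true)
        .fileSystem(sandboxFs)
        .build();

// Look up a JDK class inside the guest (Espresso) world.
Value pathsClass = context.getBindings("java").getMember("java.nio.file.Paths");

// The question: when guest bytecode in this context executes something like
// Paths.get("/some/path") or Files.newInputStream(...), does that I/O route
// through sandboxFs, or does it hit the host's default java.nio.file.FileSystem
// directly and escape the sandbox?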

zenbones issue comment oracle/graal

zenbones
zenbones

Is Espresso confined by org.graalvm.polyglot.io.FileSystem?

org.graalvm.polyglot.Context allows setting an org.graalvm.polyglot.io.FileSystem, which allows for sandboxing polyglot languages. I presume that org.graalvm.polyglot.io.FileSystem is sufficient for the current crop of "scripting" languages, e.g. Python, R, JavaScript, and Ruby, as their file system 'objects' are not overly complex. However, I imagine that Espresso code doing something as innocent as Paths.get("/some/path") is going to break out of the sandbox, as that code will engage the full power of Java's java.nio.file.FileSystem and grab the actual default FileSystem for the OS, which can't be backed or replaced by something based on org.graalvm.polyglot.io.FileSystem. Or am I mistaken in that assumption?

This was less of a problem when SecurityManager could at least confine a thread to limited sub-sections of the file system, but with the deprecation of SecurityManager, is there any hope of sandboxing Java on Java? I know SecurityManager is brittle, difficult to maintain, and a drain on performance, but it also allowed sandboxing a single Java thread, which is far more efficient than sandboxing the JVM itself via containerization, where we have to account for the complete overhead of the VM, including memory management, over and over again.

Espresso brings back the possibility of a sandboxed environment for JVM bytecode-based languages, but only if the polyglot context's restrictions actually hold for Espresso.

Aug 19 (1 month ago)

zenbones issue comment oracle/graal

zenbones
zenbones

Allow org.graalvm.polyglot.Context.Builder to specify both public and internal file systems (or take the fs specified as public only)

Currently, specifying a FileSystem in Context.Builder sets the same file system as both the public and the internal file system. That means either the runtime must live with the same view of the file system as the sandboxed code, or vice versa, and neither is appropriate. To handle the situation properly, the designer of the sandbox must know which files the internal runtime will require (generally under the GraalVM installation path) and allow them within the same view that the sandbox presents to the sandboxed code. This is awkward and unnecessary. If the FileSystem passed into the context builder were taken as the public file system only, I presume it would be enforced only upon the sandboxed code, which is what the sandbox designer more clearly wants, while the runtime would have access to the default file system limited only by its security context. Even better would be to let the context specify the public and internal file systems separately and explicitly, so that the correct semantics are clearly surfaced and enforced on each.

zenbones
zenbones

Specifically, in PolyglotEngineImpl, there is currently no good way through this code...

            if (!ALLOW_IO) {
                if (fileSystem == null) {
                    fileSystem = FileSystems.newNoIOFileSystem();
                }
                fs = fileSystem;
                internalFs = fileSystem;
            } else if (allowHostIO) {
                fs = fileSystem != null ? fileSystem : FileSystems.newDefaultFileSystem();
                internalFs = fs;
            } else {
                fs = FileSystems.newNoIOFileSystem();
                internalFs = FileSystems.newLanguageHomeFileSystem();
            }

What I want to end up with, from the sandbox developer's point of view, is the sandboxed code confined to the custom FileSystem I'm setting, and the host runtime using either a file system restricted only by the security policy or possibly FileSystems.newLanguageHomeFileSystem (I'm not sure of its particulars, but it sounds right). None of the branches in the if/else above produces that combination.

  1. ALLOW_IO, from what I can see, is never false, and if it were, it would either deny IO to the internal file system as well, which is probably untenable for any language implementation, or it would provide the same file system to both public and internal, which is what we want to avoid.
  2. If allowHostIO is true, then the public and internal file systems are the same: either both the custom file system, or both unrestricted (except for SecurityManager), which is not what we want.
  3. Otherwise, we get the proper internal file system (FileSystems.newLanguageHomeFileSystem), but all access is denied to the sandboxed code, which has its uses but is not the situation many will want to produce.

My other problem is that IO usually covers both the file system and sockets, which are quite different concerns. If I deny IO, I assume I'm denying both file system and TCP/UDP socket access. If I allow IO, even if I could get the custom file system set as the public file system only, it would seem I've now allowed unrestricted socket access. What's required is...

  1. The internal file system should default to FileSystems.newLanguageHomeFileSystem or FileSystems.newDefaultFileSystem. Why a user would want to change the internal file system at all, I don't know; we just want it to work.
  2. The public file system should default to FileSystems.newDefaultFileSystem, and should take the custom file system if one is set, unless allowIO in the context is false, in which case it should be set to FileSystems.newNoIOFileSystem.
  3. The option in the context should be renamed from allowIO to allowFileIO to make its meaning clear.
  4. We need a new allowSocketIO in the context, and a new SocketIOPolicy that can allow or deny the sandboxed code access to TCP/UDP/whatever by host address, name and/or port.

In general, it's poor practice in APIs to have both a boolean switch and an object-based policy for the same concern. I've noticed the API has started moving toward policy objects only, with statically available allow-all/deny-all instances, which is much clearer.
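
Concretely, the kind of builder surface I'd hope for looks something like this (all of the methods and the SocketIOPolicy type below are hypothetical, not existing API; this is only to illustrate the separation I'm after):

// Hypothetical sketch only: none of these builder methods exist today.
Context context = Context.newBuilder("js")
        // Public view: what the sandboxed guest code sees. Defaults to the default
        // file system, or to a no-IO file system when file IO is disallowed.
        .publicFileSystem(sandboxFs)
        // Internal view: what the language runtime itself uses (language homes, etc.).
        // Defaults to something like FileSystems.newLanguageHomeFileSystem().
        .internalFileSystem(languageHomeFs)
        // Split the existing allowIO switch into separate file and socket concerns...
        .allowFileIO(true)
        .allowSocketIO(false)
        // ...or, better, policy objects only, with statically available allow/deny-all
        // instances, e.g. a hypothetical SocketIOPolicy.
        .socketIOPolicy(SocketIOPolicy.DENY_ALL)
        .build();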

Aug 18 (1 month ago)

zenbones issue kubernetes-csi/csi-driver-smb

zenbones
zenbones

Periodic failure to mount, still

Using helm chart version 1.2.0...

DRIVER INFORMATION:
-------------------
Build Date: "2021-07-19T02:02:33Z"
Compiler: gc
Driver Name: smb.csi.k8s.io
Driver Version: v1.2.0
Git Commit: c29bb959d9ffdba993543b52087aa23e08e0ef10
Go Version: go1.16
Platform: linux/amd64

Streaming logs below:
I0719 21:43:37.935095       1 mount_linux.go:192] Detected OS without systemd
I0719 21:43:37.935112       1 driver.go:93] Enabling controller service capability: CREATE_DELETE_VOLUME
I0719 21:43:37.935119       1 driver.go:112] Enabling volume access mode: SINGLE_NODE_WRITER
I0719 21:43:37.935124       1 driver.go:112] Enabling volume access mode: SINGLE_NODE_READER_ONLY
I0719 21:43:37.935128       1 driver.go:112] Enabling volume access mode: MULTI_NODE_READER_ONLY
I0719 21:43:37.935132       1 driver.go:112] Enabling volume access mode: MULTI_NODE_SINGLE_WRITER
I0719 21:43:37.935136       1 driver.go:112] Enabling volume access mode: MULTI_NODE_MULTI_WRITER
I0719 21:43:37.935142       1 driver.go:103] Enabling node service capability: GET_VOLUME_STATS
I0719 21:43:37.935146       1 driver.go:103] Enabling node service capability: STAGE_UNSTAGE_VOLUME
I0719 21:43:37.935363       1 server.go:118] Listening for connections on address: &net.UnixAddr{Name:"//csi/csi.sock", Net:"unix"}
I0719 21:43:38.525044       1 utils.go:118] GRPC call: /csi.v1.Identity/GetPluginInfo
I0719 21:43:38.525060       1 utils.go:119] GRPC request: {}
I0719 21:43:38.527484       1 utils.go:125] GRPC response: {"name":"smb.csi.k8s.io","vendor_version":"v1.2.0"}
I0719 21:43:38.702536       1 utils.go:118] GRPC call: /csi.v1.Identity/GetPluginInfo
I0719 21:43:38.702559       1 utils.go:119] GRPC request: {}
I0719 21:43:38.703193       1 utils.go:125] GRPC response: {"name":"smb.csi.k8s.io","vendor_version":"v1.2.0"}
I0719 21:43:39.094923       1 utils.go:118] GRPC call: /csi.v1.Node/NodeGetInfo
I0719 21:43:39.094947       1 utils.go:119] GRPC request: {}
I0719 21:43:39.095492       1 utils.go:125] GRPC response: {"node_id":"ip-10-0-2-49.ec2.internal"}
I0719 21:55:17.773168       1 utils.go:118] GRPC call: /csi.v1.Node/NodeStageVolume
I0719 21:55:17.773191       1 utils.go:119] GRPC request: {"secrets":"***stripped***","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["dir_mode=0777","uid=300","gid=300","vers=1.0"]}},"access_mode":{"mode":5}},"volume_context":{"source":"//forio-files.forio.internal/epicenter-files"},"volume_id":"prod1-smb-pv"}
I0719 21:55:17.775057       1 nodeserver.go:180] NodeStageVolume: targetPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount) volumeID(prod1-smb-pv) context(map[source://forio-files.forio.internal/epicenter-files]) mountflags([dir_mode=0777 uid=300 gid=300 vers=1.0]) mountOptions([dir_mode=0777 uid=300 gid=300 vers=1.0])
I0719 21:55:17.775119       1 mount_linux.go:175] Mounting cmd (mount) with arguments (-t cifs -o dir_mode=0777,uid=300,gid=300,vers=1.0,<masked> //forio-files.forio.internal/epicenter-files /var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount)
E0719 21:55:28.078512       1 mount_linux.go:179] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,uid=300,gid=300,vers=1.0,<masked> //forio-files.forio.internal/epicenter-files /var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount
Output: mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

E0719 21:55:28.078574       1 utils.go:123] GRPC error: rpc error: code = Internal desc = volume(prod1-smb-pv) mount "//forio-files.forio.internal/epicenter-files" on "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount" failed with mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,uid=300,gid=300,vers=1.0,<masked> //forio-files.forio.internal/epicenter-files /var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount
Output: mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
I0719 21:55:28.610886       1 utils.go:118] GRPC call: /csi.v1.Node/NodeStageVolume
I0719 21:55:28.610919       1 utils.go:119] GRPC request: {"secrets":"***stripped***","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["dir_mode=0777","uid=300","gid=300","vers=1.0"]}},"access_mode":{"mode":5}},"volume_context":{"source":"//forio-files.forio.internal/epicenter-files"},"volume_id":"prod1-smb-pv"}
I0719 21:55:28.613301       1 nodeserver.go:180] NodeStageVolume: targetPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount) volumeID(prod1-smb-pv) context(map[source://forio-files.forio.internal/epicenter-files]) mountflags([dir_mode=0777 uid=300 gid=300 vers=1.0]) mountOptions([dir_mode=0777 uid=300 gid=300 vers=1.0])
I0719 21:55:28.613341       1 mount_linux.go:175] Mounting cmd (mount) with arguments (-t cifs -o dir_mode=0777,uid=300,gid=300,vers=1.0,<masked> //forio-files.forio.internal/epicenter-files /var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount)
E0719 21:55:38.829260       1 mount_linux.go:179] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,uid=300,gid=300,vers=1.0,<masked> //forio-files.forio.internal/epicenter-files /var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount
Output: mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

E0719 21:55:38.829304       1 utils.go:123] GRPC error: rpc error: code = Internal desc = volume(prod1-smb-pv) mount "//forio-files.forio.internal/epicenter-files" on "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount" failed with mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,uid=300,gid=300,vers=1.0,<masked> //forio-files.forio.internal/epicenter-files /var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount
Output: mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
I0719 21:55:39.848092       1 utils.go:118] GRPC call: /csi.v1.Node/NodeStageVolume
I0719 21:55:39.848121       1 utils.go:119] GRPC request: {"secrets":"***stripped***","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["dir_mode=0777","uid=300","gid=300","vers=1.0"]}},"access_mode":{"mode":5}},"volume_context":{"source":"//forio-files.forio.internal/epicenter-files"},"volume_id":"prod1-smb-pv"}
I0719 21:55:39.849635       1 nodeserver.go:180] NodeStageVolume: targetPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount) volumeID(prod1-smb-pv) context(map[source://forio-files.forio.internal/epicenter-files]) mountflags([dir_mode=0777 uid=300 gid=300 vers=1.0]) mountOptions([dir_mode=0777 uid=300 gid=300 vers=1.0])
I0719 21:55:39.849667       1 mount_linux.go:175] Mounting cmd (mount) with arguments (-t cifs -o dir_mode=0777,uid=300,gid=300,vers=1.0,<masked> //forio-files.forio.internal/epicenter-files /var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount)
E0719 21:55:50.094767       1 mount_linux.go:179] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,uid=300,gid=300,vers=1.0,<masked> //forio-files.forio.internal/epicenter-files /var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount
Output: mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

So maybe the 'fixed' behavior I saw was just luck? It's odd that the initial containers never seem to show this issue, i.e. the nodes created at cluster creation do not seem to do this (again, maybe too few sampling points, but it's what I've got), while nodes created via autoscaler events do... maybe some permission difference in EKS?

zenbones issue comment kubernetes-csi/csi-driver-smb

zenbones
zenbones

Periodic failure to mount, still

Using helm chart version 1.2.0...

DRIVER INFORMATION:
-------------------
Build Date: "2021-07-19T02:02:33Z"
Compiler: gc
Driver Name: smb.csi.k8s.io
Driver Version: v1.2.0
Git Commit: c29bb959d9ffdba993543b52087aa23e08e0ef10
Go Version: go1.16
Platform: linux/amd64

Streaming logs below:
I0719 21:43:37.935095       1 mount_linux.go:192] Detected OS without systemd
I0719 21:43:37.935112       1 driver.go:93] Enabling controller service capability: CREATE_DELETE_VOLUME
I0719 21:43:37.935119       1 driver.go:112] Enabling volume access mode: SINGLE_NODE_WRITER
I0719 21:43:37.935124       1 driver.go:112] Enabling volume access mode: SINGLE_NODE_READER_ONLY
I0719 21:43:37.935128       1 driver.go:112] Enabling volume access mode: MULTI_NODE_READER_ONLY
I0719 21:43:37.935132       1 driver.go:112] Enabling volume access mode: MULTI_NODE_SINGLE_WRITER
I0719 21:43:37.935136       1 driver.go:112] Enabling volume access mode: MULTI_NODE_MULTI_WRITER
I0719 21:43:37.935142       1 driver.go:103] Enabling node service capability: GET_VOLUME_STATS
I0719 21:43:37.935146       1 driver.go:103] Enabling node service capability: STAGE_UNSTAGE_VOLUME
I0719 21:43:37.935363       1 server.go:118] Listening for connections on address: &net.UnixAddr{Name:"//csi/csi.sock", Net:"unix"}
I0719 21:43:38.525044       1 utils.go:118] GRPC call: /csi.v1.Identity/GetPluginInfo
I0719 21:43:38.525060       1 utils.go:119] GRPC request: {}
I0719 21:43:38.527484       1 utils.go:125] GRPC response: {"name":"smb.csi.k8s.io","vendor_version":"v1.2.0"}
I0719 21:43:38.702536       1 utils.go:118] GRPC call: /csi.v1.Identity/GetPluginInfo
I0719 21:43:38.702559       1 utils.go:119] GRPC request: {}
I0719 21:43:38.703193       1 utils.go:125] GRPC response: {"name":"smb.csi.k8s.io","vendor_version":"v1.2.0"}
I0719 21:43:39.094923       1 utils.go:118] GRPC call: /csi.v1.Node/NodeGetInfo
I0719 21:43:39.094947       1 utils.go:119] GRPC request: {}
I0719 21:43:39.095492       1 utils.go:125] GRPC response: {"node_id":"ip-10-0-2-49.ec2.internal"}
I0719 21:55:17.773168       1 utils.go:118] GRPC call: /csi.v1.Node/NodeStageVolume
I0719 21:55:17.773191       1 utils.go:119] GRPC request: {"secrets":"***stripped***","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["dir_mode=0777","uid=300","gid=300","vers=1.0"]}},"access_mode":{"mode":5}},"volume_context":{"source":"//forio-files.forio.internal/epicenter-files"},"volume_id":"prod1-smb-pv"}
I0719 21:55:17.775057       1 nodeserver.go:180] NodeStageVolume: targetPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount) volumeID(prod1-smb-pv) context(map[source://forio-files.forio.internal/epicenter-files]) mountflags([dir_mode=0777 uid=300 gid=300 vers=1.0]) mountOptions([dir_mode=0777 uid=300 gid=300 vers=1.0])
I0719 21:55:17.775119       1 mount_linux.go:175] Mounting cmd (mount) with arguments (-t cifs -o dir_mode=0777,uid=300,gid=300,vers=1.0,<masked> //forio-files.forio.internal/epicenter-files /var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount)
E0719 21:55:28.078512       1 mount_linux.go:179] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,uid=300,gid=300,vers=1.0,<masked> //forio-files.forio.internal/epicenter-files /var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount
Output: mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

E0719 21:55:28.078574       1 utils.go:123] GRPC error: rpc error: code = Internal desc = volume(prod1-smb-pv) mount "//forio-files.forio.internal/epicenter-files" on "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount" failed with mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,uid=300,gid=300,vers=1.0,<masked> //forio-files.forio.internal/epicenter-files /var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount
Output: mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
I0719 21:55:28.610886       1 utils.go:118] GRPC call: /csi.v1.Node/NodeStageVolume
I0719 21:55:28.610919       1 utils.go:119] GRPC request: {"secrets":"***stripped***","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["dir_mode=0777","uid=300","gid=300","vers=1.0"]}},"access_mode":{"mode":5}},"volume_context":{"source":"//forio-files.forio.internal/epicenter-files"},"volume_id":"prod1-smb-pv"}
I0719 21:55:28.613301       1 nodeserver.go:180] NodeStageVolume: targetPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount) volumeID(prod1-smb-pv) context(map[source://forio-files.forio.internal/epicenter-files]) mountflags([dir_mode=0777 uid=300 gid=300 vers=1.0]) mountOptions([dir_mode=0777 uid=300 gid=300 vers=1.0])
I0719 21:55:28.613341       1 mount_linux.go:175] Mounting cmd (mount) with arguments (-t cifs -o dir_mode=0777,uid=300,gid=300,vers=1.0,<masked> //forio-files.forio.internal/epicenter-files /var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount)
E0719 21:55:38.829260       1 mount_linux.go:179] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,uid=300,gid=300,vers=1.0,<masked> //forio-files.forio.internal/epicenter-files /var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount
Output: mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

E0719 21:55:38.829304       1 utils.go:123] GRPC error: rpc error: code = Internal desc = volume(prod1-smb-pv) mount "//forio-files.forio.internal/epicenter-files" on "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount" failed with mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,uid=300,gid=300,vers=1.0,<masked> //forio-files.forio.internal/epicenter-files /var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount
Output: mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
I0719 21:55:39.848092       1 utils.go:118] GRPC call: /csi.v1.Node/NodeStageVolume
I0719 21:55:39.848121       1 utils.go:119] GRPC request: {"secrets":"***stripped***","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["dir_mode=0777","uid=300","gid=300","vers=1.0"]}},"access_mode":{"mode":5}},"volume_context":{"source":"//forio-files.forio.internal/epicenter-files"},"volume_id":"prod1-smb-pv"}
I0719 21:55:39.849635       1 nodeserver.go:180] NodeStageVolume: targetPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount) volumeID(prod1-smb-pv) context(map[source://forio-files.forio.internal/epicenter-files]) mountflags([dir_mode=0777 uid=300 gid=300 vers=1.0]) mountOptions([dir_mode=0777 uid=300 gid=300 vers=1.0])
I0719 21:55:39.849667       1 mount_linux.go:175] Mounting cmd (mount) with arguments (-t cifs -o dir_mode=0777,uid=300,gid=300,vers=1.0,<masked> //forio-files.forio.internal/epicenter-files /var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount)
E0719 21:55:50.094767       1 mount_linux.go:179] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,uid=300,gid=300,vers=1.0,<masked> //forio-files.forio.internal/epicenter-files /var/lib/kubelet/plugins/kubernetes.io/csi/pv/prod1-smb-pv/globalmount
Output: mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

So maybe the 'fixed' behavior I saw was just luck? It's odd that the initial containers never seem to show this issue, i.e. the nodes created at cluster creation do not seem to do this (again, maybe too few sampling points, but it's what I've got), while nodes created via autoscaler events do... maybe some permission difference in EKS?

zenbones
zenbones

Turns out the difference in behavior between the subnets was a false lead, in that both subnets will periodically fail to mount. However, I was able to get SSH access to the nodes and reproduce the problem via a manual mount on the node, so the CSI driver is only reporting what's happening on the node. Not your problem. Closing this ticket.

Aug 16 (2 months ago)

zenbones issue oracle/graal

zenbones
zenbones

Allow org.graalvm.polyglot.Context.Builder to specify both public and internal file systems (or take the fs specified as public only)

Currently, specifying a FileSystem in Context.Builder sets the same file system as both the public and the internal file system. That means either the runtime must live with the same view of the file system as the sandboxed code, or vice versa, and neither is appropriate. To handle the situation properly, the designer of the sandbox must know which files the internal runtime will require (generally under the GraalVM installation path) and allow them within the same view that the sandbox presents to the sandboxed code. This is awkward and unnecessary. If the FileSystem passed into the context builder were taken as the public file system only, I presume it would be enforced only upon the sandboxed code, which is what the sandbox designer more clearly wants, while the runtime would have access to the default file system limited only by its security context. Even better would be to let the context specify the public and internal file systems separately and explicitly, so that the correct semantics are clearly surfaced and enforced on each.

zenbones issue oracle/graal

zenbones
zenbones

Is Espresso confined by org.graalvm.polyglot.io.FileSystem?

org.graalvm.polyglot.Context allows setting an org.graalvm.polyglot.io.FileSystem, which allows for sandboxing polyglot languages. I presume that org.graalvm.polyglot.io.FileSystem is sufficient for the current crop of "scripting" languages, e.g. Python, R, JavaScript, and Ruby, as their file system 'objects' are not overly complex. However, I imagine that Espresso code doing something as innocent as Paths.get("/some/path") is going to break out of the sandbox, as that code will engage the full power of Java's java.nio.file.FileSystem and grab the actual default FileSystem for the OS, which can't be backed or replaced by something based on org.graalvm.polyglot.io.FileSystem. Or am I mistaken in that assumption?

This was less of a problem when SecurityManager could at least confine a thread to limited sub-sections of the file system, but with the deprecation of SecurityManager, is there any hope of sandboxing Java on Java? I know SecurityManager is brittle, difficult to maintain, and a drain on performance, but it also allowed sandboxing a single Java thread, which is far more efficient than sandboxing the JVM itself via containerization, where we have to account for the complete overhead of the VM, including memory management, over and over again.

Espresso brings back the possibility of a sandboxed environment for JVM bytecode-based languages, but only if the polyglot context's restrictions actually hold for Espresso.
