!VRULIdgoKmKPzJZzjj:nixos.org

Nix Hackers

927 Members
For people hacking on the Nix package manager itself
193 Servers



19 Nov 2024
@jade_:matrix.org jade_
In reply to @connorbaker:matrix.org

Yesterday I got an error in a Nix build which was using Ninja (nothing new there, bad CMake config). However, I noticed I was able to reliably reproduce a Nix error: https://gist.github.com/ConnorBaker/5cebac5224ab430e67ee25d7a5bd0224

bad JSON log message from builder: [json.exception.parse_error.101] parse error at line 1, column 65539: syntax error while parsing array - invalid literal; last read: <snipped>

Any idea if there's a limit on the length of the output from a build? I believe this was using a remote builder if that's any help.

we have fixed this bug in lix
08:51:37
@jade_:matrix.org jade_ it clumsily parses anything that says @nix in the output as JSON, which is what setPhase inside stdenv uses to show the current phase (unpackPhase, etc.) in the progress bar 08:52:48
@jade_:matrix.org jade_ this used to cause the entire build to fail 08:53:07
@jade_:matrix.org jade_ https://gerrit.lix.systems/c/lix/+/2057 probably 08:53:43
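For context on the @nix mechanism jade_ describes: a builder can emit a structured log message by writing a line that prefixes a JSON object with @nix. The sketch below illustrates what stdenv's setPhase does as I understand the protocol; it is not the exact stdenv source, and the surrounding shell is assumed.

    # Report the current phase to Nix if it advertised structured-log support.
    if [ -n "${NIX_LOG_FD:-}" ]; then
        echo "@nix { \"action\": \"setPhase\", \"phase\": \"unpackPhase\" }" >&"$NIX_LOG_FD"
    fi

Per jade_'s explanation, output that Nix treats as such an @nix line gets parsed as JSON, and that parse step is what fails in the errors quoted above.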
@matthewcroughan:defenestrate.it matthewcroughan @fosdem

got this for the first time today

nixinate-phone> building '/nix/store/z03lxdp085hqpaxglqm7iphvdsvcdgvs-cardinal-24.09.drv'...
nixinate-phone> bad JSON log message from builder: [json.exception.parse_error.101] parse error at line 1, column 4099: syntax error while parsing array - invalid literal; last read: '"\u001b[01m\u001b[K/nix/store/pxb3zpbg0qdccadh884fag33va0xb4ds-gcc-13.3.0/include/c++/13.3.0/variant:1522:33:\u001b[m\u001b[K   required from '\u001b[01m\u001b[Kstd::enable_if_t<(is_constructible_v<_Tp, _Args ...> && __exactly_once<_Tp>), _Tp&> std::variant<_Types>::\u001b[01;32m\u001b[Kemplace\u001b[m\u001b[K(_Args&& ...) \u001b[35m\u001b[K[with _Tp = RTNeural::ModelT<float, 1, 1, RTNeural::GRULayerT<float, 1, 32, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 32, 1> >; _Args = {}; _Types = {NullModel, RTNeural::ModelT<float, 1, 1, RTNeural::GRULayerT<float, 1, 8, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 8, 1> >, RTNeural::ModelT<float, 2, 1, RTNeural::GRULayerT<float, 2, 8, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 8, 1> >, RTNeural::ModelT<float, 3, 1, RTNeural::GRULayerT<float, 3, 8, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 8, 1> >, RTNeural::ModelT<float, 1, 1, RTNeural::GRULayerT<float, 1, 12, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 12, 1> >, RTNeural::ModelT<float, 2, 1, RTNeural::GRULayerT<float, 2, 12, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 12, 1> >, RTNeural::ModelT<float, 3, 1, RTNeural::GRULayerT<float, 3, 12, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 12, 1> >, RTNeural::ModelT<float, 1, 1, RTNeural::GRULayerT<float, 1, 16, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 16, 1> >, RTNeural::ModelT<float, 2, 1, RTNeural::GRULayerT<float, 2, 16, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 16, 1> >, RTNeural::ModelT<float, 3, 1, RTNeural::GRULayerT<float, 3, 16, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 16, 1> >, RTNeural::ModelT<float, 1, 1, RTNeural::GRULayerT<float, 1, 20, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 20, 1> >, RTNeural::ModelT<float, 2, 1, RTNeural::GRULayerT<float, 2, 20, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 20, 1> >, RTNeural::ModelT<float, 3, 1, RTNeural::GRULayerT<float, 3, 20, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 20, 1> >, RTNeural::ModelT<float, 1, 1, RTNeural::GRULayerT<float, 1, 32, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 32, 1> >, RTNeural::ModelT<float, 2, 1, RTNeural::GRULayerT<float, 2, 32, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 32, 1> >, RTNeural::ModelT<float, 3, 1, RTNeural::GRULayerT<float, 3, 32, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 32, 1> >, RTNeural::ModelT<float, 1, 1, RTNeural::GRULayerT<float, 1, 40, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 40, 1> >, RTNeural::ModelT<float, 2, 1, RTNeural::GRULayerT<float, 2, 40, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 40, 1> >, RTNeural::ModelT<float, 3, 1, RTNeural::GRULayerT<float, 3, 40, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 40, 1> >, RTNeural::ModelT<float, 1, 1, RTNeural::GRULayerT<float, 1, 64, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 64, 1> >, RTNeural::ModelT<float, 2, 1, RTNeural::GRULayerT<float, 2, 64, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 64, 1> >, RTNeural::ModelT<float, 3, 
1, RTNeural::GRULayerT<float, 3, 64, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 64, 1> >, RTNeural::ModelT<float, 1, 1, RTNeural::LSTMLayerT<float, 1, 8, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 8, 1> >, RTNeural::ModelT<float, 2, 1, RTNeural::LSTMLayerT<float, 2, 8, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 8, 1> >, RTNeural::ModelT<float, 3, 1, RTNeural::LSTMLayerT<float, 3, 8, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 8, 1> >, RTNeural::ModelT<float, 1, 1, RTNeural::LSTMLayerT<float, 1, 12, RTNeural::SampleRateCorrectionMode::None>, RTNeural::DenseT<float, 12, 1> >, RTNeural::ModelT@nix {"a'; expected ']'
16:22:34
@xokdvium:matrix.org Sergei Zimmerman (xokdvium)

Hi. Wanted to share a pretty curious finding. I've been looking into speeding up the flex-generated lexer by using the full scanner tables --full (a.k.a. -Cf) and --fast (-CF): https://westes.github.io/flex/manual/Options-for-Scanner-Speed-and-Size.html#Options-for-Scanner-Speed-and-Size. They can give a pretty nice performance uplift at the cost of somewhat larger binaries.

I've run into a funny issue, which really looks like either a flex bug or strange lexer rules on the cppnix side. When built with -Cf the scanner fails to match comments in the derivation primop:

nix-instantiate --parse ../flake.nix
--accepting rule at line 314 ("
")
--accepting rule at line 160 ("# This is the implementation of the ")
error: syntax error, unexpected DOLLAR_CURLY
       at <nix/derivation-internal.nix>:2:1:
            1|
            2| # This is the implementation of the ‘derivation’ builtin function.
             | ^
            3| # It's actually a wrapper around the ‘derivationStrict’ primop.

Looks like the generated scanner trips up on lots of other cases involving comments as well. Has anyone else looked into using full scanners?

22:37:43
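A hedged sketch of how the full-table scanners Sergei mentions might be tried; the input and output file names below are assumptions, not the actual build rules.

    # Regenerate the lexer with flex's full tables (-Cf) or the "fast" variant (-CF);
    # see the flex manual section linked above for the size/speed trade-offs.
    flex --full -o lexer-tab.cc src/libexpr/lexer.l
    flex --fast -o lexer-tab.cc src/libexpr/lexer.l

The "--accepting rule at line N" traces in the output above come from a scanner built with flex's debug option, which prints the rule matched for each token and is useful for pinning down where the -Cf scanner diverges.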
20 Nov 2024
@inayet:matrix.org Inayet removed their profile picture. 00:59:23
@dre:imad.nyc imadnyc joined the room. 01:39:00
@dre:imad.nyc imadnyc Hey, I was looking into https://github.com/NixOS/nix/issues/11903 and I'm super confused -- where IS system? The author looks at it with a double dash, but even without it I can't find any reference to system, either as a LongFlag or as a Setting. 01:41:07
@dre:imad.nyc imadnyc changed their display name from dre to imadnyc. 01:41:50
@dre:imad.nyc imadnyc Actually, I'm really confused about how --help decides to display anything. 01:44:59
@emilazy:matrix.org emily ~everything in man nix.conf can be used as an --<option> 01:54:42
@emilazy:matrix.org emily and system is documented there 01:55:08
@dre:imad.nyc imadnyc Unless I'm confused, I meant: in the nix repo, where is system defined as a flag that can be accepted? For example, man nix.conf tells me that it takes allow-unsafe-native-code-during-evaluation as a flag, which is defined here: https://github.com/NixOS/nix/blob/32becc87fef7340600df75ffed6e7c6bc56aa827/src/libexpr/eval-settings.hh#L54 01:58:17
@dre:imad.nyc imadnyc The only mention is https://github.com/NixOS/nix/blob/32becc87fef7340600df75ffed6e7c6bc56aa827/src/libmain/common-args.cc , where it mentions erasing system for nix-env, but I don't know where to find that. 01:59:34
@emilazy:matrix.org emily right, sorry. my guess is that it's not an eval setting and is somewhere else since it's not strictly about eval(?), but I'm afraid I don't know where in the codebase it is exactly 01:59:44
@dre:imad.nyc imadnyc Do you think you could point me to the entrypoint for --help in nix3 commands? I see showManPage, but as far as I can tell that's just nix2. 02:02:20
@aloisw:kde.org
In reply to @dre:imad.nyc
Unless I'm confused, I meant in the nix repo, where is system as a flag that can be accepted defined. For example, man nix.conf tells me that it takes allow-unsafe-native-code-during-evaluation as a flag, which is defined here: https://github.com/NixOS/nix/blob/32becc87fef7340600df75ffed6e7c6bc56aa827/src/libexpr/eval-settings.hh#L54
system is defined in the libstore settings: https://github.com/NixOS/nix/blob/32becc87fef7340600df75ffed6e7c6bc56aa827/src/libstore/globals.hh#L188
05:48:53
@aloisw:kde.org
In reply to @dre:imad.nyc
Do you think you could point me to the entrypoint for --help in nix3 commands? I see showManPage, but as far as I can tell that's just nix2
showHelp: https://github.com/NixOS/nix/blob/32becc87fef7340600df75ffed6e7c6bc56aa827/src/nix/main.cc#L242
05:55:23
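Putting the two answers together, with some inference: system is an ordinary libstore setting, and nix.conf settings are also exposed as command-line flags, which appears to be why the common-args.cc code imadnyc found has to erase the generated --system flag for nix-env (which already had its own). A hedged illustration of how this surfaces on the CLI; the commands are examples, not taken from the issue:

    # Override the nix.conf `system` setting for one invocation:
    nix build nixpkgs#hello --system aarch64-linux
    # or via the generic form that works for any setting:
    nix build nixpkgs#hello --option system aarch64-linux
    # and inspect the effective value:
    nix config show | grep '^system '   # `nix show-config` on older Nix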
@updown:envs.net updown joined the room. 06:57:39
@Ericson2314:matrix.org John Ericson Eelco: OK, https://github.com/NixOS/nix/issues/11928 is the ticket I would like to fix involving CA things. Don't worry about that. 21:35:34
@Ericson2314:matrix.org John Ericson https://github.com/NixOS/nix/issues/11927 is the plain code cleanup (no new features) ticket that I would like to see done before attempting it. 21:36:03
@connorbaker:matrix.org connor (burnt/out) (UTC-8) As I've been working on the persistent lists PR, I've been thinking about the data structures we currently use for lists and attribute sets. Lists: https://github.com/NixOS/nix/blob/0498e2259a0e6ae2f6e121579d35ace79c5b3ef2/src/libexpr/value.hh#L136 Attribute sets: https://github.com/NixOS/nix/blob/0498e2259a0e6ae2f6e121579d35ace79c5b3ef2/src/libexpr/attr-set.hh Has anyone got metrics on what the most commonly used operations on lists and attribute sets are? I'd like to think that we could find out which operations we need to make cheap and select a data structure that provides a low cost for those operations. 22:25:53
@tomberek:matrix.org tomberek Most common is opUpdate. I've got some valgrind stuff somewhere. 23:25:07
@xokdvium:matrix.org Sergei Zimmerman (xokdvium)
In reply to @connorbaker:matrix.org
As I've been working on the persistent lists PR, I've been thinking about the data structures we use currently for lists and attribute sets. Lists: https://github.com/NixOS/nix/blob/0498e2259a0e6ae2f6e121579d35ace79c5b3ef2/src/libexpr/value.hh#L136 Attribute sets: https://github.com/NixOS/nix/blob/0498e2259a0e6ae2f6e121579d35ace79c5b3ef2/src/libexpr/attr-set.hh Has anyone got metrics on what the most commonly used operations are on lists and attribute sets? I'd like to think that we could find out what operations we need to make cheap and select a data structure which provides a low cost for those operations.
I doubt that this information is readily available. It should be possible to either instrument cppnix to collect this information and/or use profilers (think gprof/callgrind). callgrind would be the easiest approach
23:26:06
@tomberek:matrix.org tomberek

I used this:

valgrind --tool=callgrind ./build/src/nix/nix eval --file '<nixpkgs/nixos>' system --arg configuration '{boot.isContainer=true;system.stateVersion="24.11";}' --argstr system "x86_64-linux"

and got these graphs: https://github.com/NixOS/nix/issues/9034#issue-1910269659

23:33:08
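A possible follow-up to turn that callgrind output into per-operation numbers; the grep patterns are guesses at evaluator symbol names and may need adjusting against the current source.

    # callgrind writes callgrind.out.<pid>; substitute the actual file name.
    callgrind_annotate --threshold=99 callgrind.out.12345 \
      | grep -E 'ExprOpUpdate|ExprSelect|ExprConcatStrings|Bindings'
    # or browse the call graph interactively:
    kcachegrind callgrind.out.12345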
21 Nov 2024
@nerves:bark.lgbt changed their profile picture. 02:51:38
@matthewcroughan:defenestrate.it matthewcroughan @fosdem
[nixos@nixos:~]$ nix-shell -p pciutils
error:
       … while calling the 'import' builtin
         at «string»:1:18:
            1| {...}@args: with import <nixpkgs> args; (pkgs.runCommandCC or pkgs.runCommand) "shell" { buildInputs = [ (pciutils) ]; } ""
             |                  ^

       … while realising the context of a path

       … while calling the 'findFile' builtin
         at «string»:1:25:
            1| {...}@args: with import <nixpkgs> args; (pkgs.runCommandCC or pkgs.runCommand) "shell" { buildInputs = [ (pciutils) ]; } ""
             |                         ^

       error: experimental Nix feature 'flakes' is disabled; add '--extra-experimental-features flakes' to enable it
15:16:11
@matthewcroughan:defenestrate.it matthewcroughan @fosdem Does this mean that Nix just doesn't work out of the box now because of flakes? Or have I misconfigured something? 15:16:21
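The error text itself names the fix; two common ways to apply it, shown as a generic illustration rather than anything specific to this machine:

    # One-off, for this invocation only:
    nix-shell -p pciutils --extra-experimental-features flakes
    # Persistently, in /etc/nix/nix.conf or ~/.config/nix/nix.conf
    # (on NixOS, via the nix.settings.experimental-features option):
    #   experimental-features = nix-command flakes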


