• 0 Posts
• 8 Comments
Joined 3 years ago · Cake day: June 17th, 2023
  • The ownership model is Rust’s core innovation. Every heap allocation has exactly one owner — the variable binding that “holds” it. Ownership can be moved to another binding, at which point the original is invalidated. It can never be silently copied (unless the type implements Copy).

    While each of these is not wrong in isolation, together they are misleading. If we are talking about data stored on the heap, that last bit is not true. Types that own a heap allocation cannot be made Copy (Copy and Drop are mutually exclusive). Only simple types can be made Copy - ones that don't own any indirect data and as such live entirely on the stack and can simply be memcpyed to get a copy.
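A minimal sketch of the distinction (the struct names here are just for illustration):

```rust
// A type whose fields are all Copy can itself derive Copy.
#[derive(Clone, Copy)]
struct Point {
    x: i32,
    y: i32,
}

// A type that owns a heap allocation cannot be Copy.
// Adding #[derive(Clone, Copy)] here fails with error E0204,
// because Box<i32> owns heap data and is not Copy.
struct Owned {
    data: Box<i32>,
}

fn main() {
    let a = Point { x: 1, y: 2 };
    let b = a; // plain memcpy; `a` stays valid
    assert_eq!(a.x + b.x, 2);

    let o = Owned { data: Box::new(5) };
    let p = o; // move; `o` is invalidated from here on
    // println!("{}", o.data); // would not compile: use of moved value
    assert_eq!(*p.data, 5);
}
```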



  • nous@programming.dev to Linux@lemmy.ml · Gentoo or LFS? · 12 days ago

    LFS is not a distro. It is an instruction manual and teaching aid. Don't use it as a base for your main OS. And IMO Gentoo does not really teach you more than Arch does. It gives a bit of flexibility that not many care about (how things are compiled) at a very big cost (having to compile everything yourself). I would not use either unless compiling things is your hobby.

    Sure, try them in a VM if you really want to. But I would not really consider that moving on from your current distro, nor do you really need to do that.



  • Debian has two main versions: stable, which is released every two years and supported for a long time, and unstable, which is basically a rolling release that constantly changes, adopting things to test them before the next stable release. There is also testing, but that is just a place to put things before promoting them to stable, so it has the same release cadence as stable.

    Two years of fixed versions on a desktop is a very long time to be stuck on some packages - especially ones you use regularly. Most people want to use things newer than that, whether newly released applications or new features for apps they have used in the past two years.

    Ubuntu also has two release versions (that's not really the right term, though). They have an LTS version which is released every two years, much like Debian. But they also have interim releases that come out every 6 months. This gives users access to much newer versions of software released more recently. Note that the LTS versions are just the same as the interim versions; it's just that LTS versions are supported for a longer period of time, so you can use them for longer.

    For their releases, Ubuntu basically takes a snapshot of Debian unstable, and from that point on they maintain their own security patches for the versions they picked. They can share some of this work with Debian's patches and backports, but since Debian stable and Ubuntu are based off different versions, Ubuntu still needs to do a lot of work figuring out which patches apply to their packages as well as ensuring things work on the versions they picked. Both distros do a lot of work in this regard and cooperate where it makes sense.

    Ubuntu also adds a few things on top of Debian: some extra packages, a few things that make the distro a bit more user friendly, etc.

    Any other distro that wants to base off one of these has to make a choice:

    • Do they want a very slow release cadence matching Debian (or Ubuntu LTS)?
    • Or do they want the faster release cadence of Ubuntu without doing much extra work, as they can build off the work Ubuntu is doing on top of Debian?
    • Or do they want to take on all that extra work themselves and have more control over the versions included in their repos?

    For a lot of distro maintainers, basing off Ubuntu gives them a newer set of packages to work with while doing a lot less of that maintenance work themselves. Then they can focus on the value adds they want to put on top of the distro rather than redoing the work Ubuntu already does or sticking with much older versions.

    The value-add work that needs to be done on either base I don't think is hugely different. You can take the core packages you want and change a few settings, or remake a few meta packages from Ubuntu that you don't want. That is really all stuff you will be doing whichever one you pick. It is a lot more work keeping up with security patching everything yourself.


  • parse_oui_database takes in a file path as a &String that is used to open the file in a parsing function. IMO there are a number of problems here.

    First, you should almost never take in a &String as a function argument. This basically means you have a reference to an owned object. It forces the caller to allocate a full String just so you can take a reference to it, and it excludes callers that only have a &str, forcing them to convert to a full String - which involves an allocation. The function should just take in a &str, as it is cheap to convert a String to a &str (and a &str is also cheaper to use than a &String, since a &String involves a double indirection).
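To illustrate the difference, a small sketch (the function names here are made up for the example):

```rust
// Forces callers to have an owned String on hand.
fn oui_len_stringref(path: &String) -> usize {
    path.len()
}

// Accepts owned Strings (via deref coercion) and string literals alike.
fn oui_len_str(path: &str) -> usize {
    path.len()
}

fn main() {
    let owned = String::from("oui.txt");

    assert_eq!(oui_len_stringref(&owned), 7);
    // oui_len_stringref("oui.txt"); // won't compile: expected &String, found &str

    assert_eq!(oui_len_str(&owned), 7); // &String coerces to &str for free
    assert_eq!(oui_len_str("oui.txt"), 7); // literals need no allocation
}
```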

    Sometimes it might be even better to take in an impl AsRef<str>, which means the function can accept anything that can be cheaply viewed as a &str without the caller needing to convert it directly. Though on larger functions like this that might not always be the best idea, as it makes the function generic and so it will be monomorphised for every type you pass into it. This can bloat a binary if you do it on lots of large functions with lots of different input types. You can also get the best of both worlds with a generic wrapper function around a concrete implementation - the large function takes the concrete type &str, and a small wrapper takes an impl AsRef<str> and calls the inner function. Though in this case it is probably easier to just take in a &str and manually convert at the call sites.
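A sketch of that wrapper pattern, with a stand-in for the real parsing logic:

```rust
// Thin generic wrapper: monomorphised once per input type, but trivially small.
fn count_entries(input: impl AsRef<str>) -> usize {
    count_entries_inner(input.as_ref())
}

// Concrete inner function: the large body is compiled exactly once.
fn count_entries_inner(input: &str) -> usize {
    // stand-in for the real parsing work
    input.lines().count()
}

fn main() {
    assert_eq!(count_entries("a\nb\nc"), 3); // &str works
    assert_eq!(count_entries(String::from("a\nb")), 2); // so does String
}
```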

    Second: String/&str are not the right types for paths. Those would be PathBuf and &Path, which work like String and &str respectively (so all the above applies to them as well). These are generally better to use, as paths in most OSs don't have to be valid Unicode. That means there are files (though very rarely) whose paths cannot be represented as a String. This is why File::open takes in an impl AsRef<Path>, which your function could as well.
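For example (the file name here is hypothetical):

```rust
use std::path::Path;

// Taking &Path instead of &str handles even paths that are not valid Unicode.
fn file_name_of(path: &Path) -> String {
    // to_string_lossy handles the (rare) non-UTF-8 case without panicking
    path.file_name()
        .map(|n| n.to_string_lossy().into_owned())
        .unwrap_or_default()
}

fn main() {
    // &str and String convert cheaply via Path::new / AsRef<Path>
    let p = Path::new("/tmp/oui_database.txt");
    assert_eq!(file_name_of(p), "oui_database.txt");
}
```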

    Lastly, I would not conflate opening a file with parsing it. These should be two different functions. This makes the code a bit more flexible - you can get the data to parse from other sources. One big advantage of this is testing, where you can just have the test data as strings in the test. It also makes the returned error type simpler, as one function can deal with IO errors and the other with parsing errors. And in this case you could even parse the data directly from the network request rather than saving it to a file first (though there are other reasons you may or may not want to do that).
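A sketch of that split, assuming for the example that the parser returns one entry per line (the real return type would differ):

```rust
use std::fs::File;
use std::io::{self, BufRead, BufReader, Read};
use std::path::Path;

// Parsing accepts any reader: a file, a network stream, or in-memory test data.
fn parse_oui_database<R: Read>(reader: R) -> io::Result<Vec<String>> {
    BufReader::new(reader).lines().collect()
}

// Opening the file becomes a thin, separate step.
#[allow(dead_code)]
fn parse_oui_file(path: impl AsRef<Path>) -> io::Result<Vec<String>> {
    parse_oui_database(File::open(path)?)
}

fn main() -> io::Result<()> {
    // In a test, the "file" can just be a string in memory.
    let entries = parse_oui_database("00-11-22 VendorA\n33-44-55 VendorB".as_bytes())?;
    assert_eq!(entries.len(), 2);
    assert_eq!(entries[0], "00-11-22 VendorA");
    Ok(())
}
```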