Debian already has docker packaged. That’s more convenient.
Debian with the docker convenience script.
They seem to be moving away from this, and it’s no longer the first option on their install page.
On their Debian page:
“Use a convenience script. Only recommended for testing and development environments.”
It should also be noted that the first option they recommend, Docker Desktop, is proprietary.
I recommend just getting docker.io and docker-compose from Debian’s repositories.
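For reference, a minimal sketch of that install (package names as they appear in the Debian repos):

```sh
# Install Docker and Compose from Debian's own repositories
sudo apt update
sudo apt install docker.io docker-compose
# Optional: let your user run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker "$USER"
```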
Ubuntu in WSL comes with systemd enabled. Debian doesn’t, and you have to enable it yourself.
That’s why I chose to have people use Ubuntu in WSL, despite the other downsides. One less step to set up a Linux environment on Windows makes the process smoother.
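For anyone who wants Debian anyway, the extra step is small. A hedged sketch using the standard wsl.conf mechanism, run inside the WSL shell:

```sh
# Enable systemd for this WSL distro, then restart it from Windows with: wsl --shutdown
sudo tee /etc/wsl.conf <<'EOF'
[boot]
systemd=true
EOF
```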
Wish I could transcend into declarativity but the thread’s nix survivor ratio is grim
Yeah lol.
I will say that for my server, I decided to use Kubernetes + FluxCD for declarative management. My entire Kubernetes “state” is declared in a git repo; this is the popular, industry-standard approach for things like this, called GitOps. It makes it very easy to add an app, since it’s just adding a folder plus some new config files (sketched below). And unlike Nix, Kubernetes and Flux are very well documented, with plenty of tooling as well. Nix doesn’t really have a working LSP or good code autocomplete, but with Kubernetes, I can just start typing in a YAML file, hit tab, and it spits out the template for me. Code autocompletion with Kubernetes feels much closer to the tooling of other, more mature ecosystems.
It’s not as declarative as Nix though. There are things missing: OCI containers could theoretically shift if you don’t pin them by hash, plus some other nitpicks. But declarativity is a spectrum, and outside of scientific scenarios (think simulations where versioning, hardware, runtime, etc. being identical is very important), I think many non-NixOS solutions are declarative enough.
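As a rough illustration of the “add a folder + some config files” flow, here’s a hypothetical sketch using the flux CLI (the repo URL, app name, and path are placeholders):

```sh
# Point Flux at a git repo and have it apply the manifests under ./apps/myapp
flux create source git myapp \
  --url=https://github.com/example/fleet \
  --branch=main
flux create kustomization myapp \
  --source=GitRepository/myapp \
  --path=./apps/myapp \
  --prune=true \
  --interval=10m
```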
Advice online seemed like I needed to basically create a nix flake for the app. I still haven’t gotten it installed because I have no idea what nix flakes are.
So, the problem is that flakes are technically an “experimental” feature, and thus are not allowed to be included as a primary solution in the official documentation. But basically everybody uses flakes, so it leads to this crazy documentation split, and is a big part of why documentation on Nix is so poor.
Some stuff can only be done with flakes, some stuff only without them, and you have to figure out which is which on your own, while also dealing with the poor documentation for either.
The advice you received was wrong. You could also use a combination of a default.nix file and a shell.nix file to create a package and development environment for your app. But the documentation is so poor that it’s unlikely you will learn this, and figuring out how to do it on your own is, again, a massive time sink.
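For the record, a minimal sketch of the non-flake route (the package is a placeholder, and nixpkgs pinning is omitted for brevity):

```sh
# Write a shell.nix that provides a dev environment, then enter it
cat > shell.nix <<'EOF'
with import <nixpkgs> {};
mkShell {
  packages = [ hugo ];  # whatever your app needs
}
EOF
nix-shell
```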
So, I use Arch, but I don’t use the AUR at all. Instead, I use nixpkgs to get stuff (admittedly only like 3 packages) not in the Arch repos.
The main reason for this is the quality of AUR packages. Although I don’t really fear a malicious package, I do remember hearing about a package that moved a user’s /bin to /opt during the install phase.
Something like that is literally impossible with Nix, due to the way applications aren’t really installed to the system. But nixpkgs also requires some level of vetting of package quality, which is also nice.
I also use Nix for managing all my development environments. For example, my blog’s GitHub repo has a few nix files at its root, and you should be able to just type nix-shell in that folder to get an environment identical to mine.
declarative rollbackable immutability sounds really freakin’ AWESOME
I have BTRFS snapshots set up, and with grub-btrfs, I can even boot from them and revert to an older kernel (my /boot is stored on BTRFS).
However, I have given up on NixOS, for many reasons. The documentation is very poor, and it’s more complexity than it’s worth to make my whole OS reproducible, rather than just my development environments. In addition to that, there are also issues with running certain apps that expect to see a normal Filesystem Hierarchy, which Nix does not provide. Although you can work around this with stuff like steam-run or by creating a fake FHS using Nix, I would rather not play that game.
But, considering I installed some stuff in an Ubuntu 22 distrobox recently, because that was what VS Code and Unity officially provide repos for, maybe this doesn’t really matter. You can probably use distrobox on NixOS, but I’ve seen issues about GPU acceleration with distrobox (and other non-nix apps) as well.
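If you go the distrobox route, the basic flow is short (the box name and image tag are just examples):

```sh
# Create an Ubuntu 22.04 box and enter it; install the VS Code/Unity repos inside
distrobox create --name dev --image ubuntu:22.04
distrobox enter dev
```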
OP seems to be trying to install older projects, rather than creating a new project.
Yes. Firstly, it’s about release cycles. CentOS Stream is a rolling release distro (although it rolls very, very slowly). What this means is that there isn’t a true guarantee of application/ABI/API compatibility between current versions of CentOS Stream and future versions.
In contrast to this, CentOS 8 and previous were complete clones of Red Hat Enterprise Linux, which was a stable release distro. During the 10-year lifecycle of each RHEL release, there was a guarantee that certain application/ABI/API compatibility would not change, which is what “stability” in the Linux/software world is defined as.
CentOS 8 was a free alternative for institutions unwilling, or unable, to pay for RHEL’s stable releases. But with the death of CentOS, an alternative was needed. Alma Linux, Rocky Linux, and Scientific Linux (designed for labs and universities) were rebuilds of RHEL. This meant that they would take RHEL’s open source code, recompile it, and distribute it in a way that guaranteed application/ABI/API compatibility with RHEL, for the same lifecycle as a RHEL release.
So Alma Linux and Rocky Linux fill that gap… but recently, Red Hat said that they are adjusting policies to make it much harder for people to make rebuilds (likely targeting Oracle Linux, which is a RHEL rebuild), though this change may affect Alma and Rocky as well.
Rocky said they were going to keep bug-for-bug compatibility, like they used to, but Alma says they are going to do something different. Although they still intend to be ABI compatible, Alma has decided to make some changes to the base system, such as reimplementing and continuing to support things that Red Hat deemed unfit to keep in RHEL. One example of this is SPICE, a graphics protocol used for low-latency display of virtual machines. It had many use cases, and I am very excited to see it back in a distro in the Red Hat ecosystem.
https://help.kagi.com/orion/faq/faq.html#oss
We’re working on it! We’ve started with some of our components and intend to open more in the future.
The idea that “open-source = trustworthy” only goes so far. For example, the same tech company that offers a popular open-source browser also has the largest ad/tracking network in history, with that browser playing a significant role in it. Another company with a closed-source browser (using WebKit like Orion) is on the forefront of privacy awareness and technologies in its products.
So, does anyone here remember when all Chromium browsers had a secret API that sent extra data to Google? Brave, Opera, and Edge got hit by this one, but I think Vivaldi dodged it. They all removed it after they found out, but still…
When it comes to things like browsers, due to the sheer complexity and difficulty of truly auditing Chromium, I don’t really consider Chromium to be “open source” in the same sense as many other apps. Legally, you can see and edit the code. But in practice, it’s impossible to audit all of it, and the development is controlled by a single corporation that puts secrets in it, or removes features that harm its interests (Manifest V3). Personally, I consider Minecraft Java to be closer to open source than Chromium is.
To say that:
The idea that “open-source = trustworthy” only goes so far
is really just a cop-out and excuse for not being transparent with their code and what they are doing.
I honestly don’t know how this could turn out.
It could be an amazing change that results in much more progress for hardware acceleration on guests of various types in KVM (since that is what VMware is good at)…
Or it could mean that they are dropping that feature from VMware altogether.
Regardless, I like this change because it means I would be able to run VMware machines and libvirt KVM machines at the same time, at least when I am forced to use VMware Workstation.
I also dislike proprietary software in general, so I think less proprietary software and more FOSS is a good thing.
I found this: https://github.com/tenclass/mvisor-win-vgpu-driver
But it is for another FOSS, KVM-based hypervisor called mvisor.
I disagree, because they are not the same thing.
Immutable means a read-only root.
Atomic means that updates are done in a snapshotted manner somehow. It usually means that if an update fails, your system is not in a half working state, but instead will be reverted to the last working state, and that updates are all or nothing.
I create a btrfs snapshot before updates on my Arch Linux system. This is atomic, but not immutable.*
There is also “image-based”, which distros like ublue (immutable, atomic) are, but NixOS (also immutable and atomic) is not.
*only really before big updates tbh, but I know some people do configure snapshots before all updates.
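For context, the manual version of what I do is roughly this (the snapshot directory is from my own layout; adjust paths to taste):

```sh
# Take a read-only snapshot of root, then update; grub-btrfs lets you boot the snapshot if the update breaks
sudo btrfs subvolume snapshot -r / /.snapshots/pre-update-$(date +%Y%m%d)
sudo pacman -Syu
```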
CrowdStrike didn’t target anyone either. Yet a mistake in code running with that much privilege resulted in massive outages. Intel ME runs at even higher privilege, on even more devices.
I am opposed to stuff like kernel-level code for exactly that reason. Mistakes can be just as harmful as malice, but both are parts of human nature. The software we design should protect us from ourselves, not expose us to more risk.
There is no such thing as a back door that “good guys” can access, but the bad guys cannot. Intel ME is exactly that, a permanent back door into basically every system. A hack of ME would take down basically all cyber infrastructure.
https://wiki.archlinux.org/title/List_of_applications/Internet#Pastebin_services
That page shows how to use curl to upload to 0x0.st.
I’ve used the pastebinit program listed on that page to upload to paste.debian.net, but it supports other sites as well.
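Concretely, both are one-liners (following those tools’ documented usage; the filename is a placeholder):

```sh
# Upload a file to 0x0.st with curl; the URL is printed back
curl -F'file=@mylog.txt' https://0x0.st
# Or use pastebinit against paste.debian.net
pastebinit -b paste.debian.net -i mylog.txt
```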
Because Forgejo’s SSH isn’t a normal SSH service, but rather exists so that users can access git over SSH.
Now technically, a bastion should work, but it’s not really what people want when they are trying to set up git over SSH. Since git/SSH here is a service rather than an administrative tool, why shouldn’t it be configured with the other tools used for exposing services (a reverse proxy like Caddy)?
And in addition to that, people most probably want git/SSH to be available publicly, which a bastion host doesn’t provide.
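To make the use case concrete, what end users ultimately run against the Forgejo box looks like this (the hostname and port are placeholders):

```sh
# Git-over-SSH as a public service, not an administrative login
git clone ssh://git@forgejo.example.com:2222/alice/myrepo.git
```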
So based on what you’ve said in the comments, I am guessing you are managing all your users with NixOS, in the NixOS config, and want to share these users with other services?
Yeah, I didn’t even know sharing Unix users was possible. EDIT: It seems to be, based on comments below.
But what I do know is possible is for Unix/Linux to get its users from LDAP. Even sudo is able to read from LDAP, and use LDAP groups to authorize users as being able to sudo.
Setting this up on NixOS is trivial. You can use the users.ldap set of options on NixOS to configure authentication against an external LDAP server (sketched below). Then, you can configure sudo to check LDAP groups as well.
After all of that, you could declaratively configure an LDAP server using NixOS, including setting up users. For example, it looks like you can configure users and groups for the kanidm LDAP server. Or you could have a config file for the OpenLDAP server.
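A hedged sketch of the client side (the option names come from the NixOS users.ldap module; the server URI and base DN are placeholders):

```sh
# Drop a module into the NixOS config and rebuild
# (remember to add ./ldap.nix to the imports list in configuration.nix)
sudo tee /etc/nixos/ldap.nix <<'EOF'
{
  users.ldap = {
    enable = true;
    server = "ldap://ldap.example.com";
    base = "dc=example,dc=com";
  };
}
EOF
sudo nixos-rebuild switch
```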
RE: Manage auth at the reverse proxy: If you use Authentik as your LDAP server, it can reverse proxy services and auth users at that step. A common setup I’ve seen is to run another reverse proxy in front of Authentik, point it at Authentik, and then use Authentik to reverse proxy just the services you want behind a login page.
The solution to what you want is not to analyze the code projects automagically, but rather to run them in a container/virtual machine. Running them in an environment that restricts what they can access limits the harm an intentional (or accidental) bug can do.
There is no way to automatically analyze code for malice or bugs with 100% reliability.
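A minimal sketch of that idea with Docker (the image and paths are arbitrary):

```sh
# Run the untrusted project with no network access and only its own directory mounted
docker run --rm -it \
  --network none \
  -v "$PWD":/src -w /src \
  python:3 bash
```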
OP is on OpenWRT (a router distro) and Alpine. Those distros don’t come with very much by default, and perl is not a core dependency for any of their default tools. Neither is python.
Based on the way the cosmo project has statically linked builds of python, but not perl, I’m guessing it’s more difficult to create a statically linked perl. This means it’s harder to put perl on a system where it isn’t already present and that doesn’t have a package manager* than it is with python or other options.
*or the user doesn’t want to use a package manager. OP said they just want to copy a binary around. Can you do that with perl?
Is there a specific android app you need?
https://gitlab.com/android_translation_layer/android_translation_layer/
And of course Waydroid. Both of these solutions let you run Android apps on Linux, but like Wine, they won’t work for every app.
Waydroid probably works for all apps that don’t depend on Google services, though. But it’s more difficult to set up than the android translation layer.