

that’s pretty obvious. Their body panels are falling off and showing how little there actually is in their vehicles :D
Assuming that Tesla goes bankrupt, actually shuts down forever, and shuts its servers down…
At a minimum someone would have to find out where the software sends and receives data from. Then you’d have to reverse engineer the software to control the vehicles.
Then you’d have to reprogram the software to send to your C&C server. I don’t think it would really take all that much to host that… it’s getting there that’s difficult.
I’d have to have friends across the internet that wanted files first…
@xanza@lemm.ee has a great response and also suggests using AdGuard Home instead, which is what I run as well. The biggest benefit AGH has over PiHole for my family is that you can very easily define a Client and the IPs that pertain to that client… so I can define a single client for all of my devices, a single client for each of my kids, etc.
Then from there I can block specific services like social media platforms per client group or allow them. And similar to PiHole, I can setup all the blocklists that I want and it’ll block them across all clients.
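If you ever want to see what that looks like under the hood, this is roughly how a persistent client ends up in AdGuardHome.yaml. The exact keys differ between AGH versions and the web UI is the easier way to manage it, so treat the names below as an approximation rather than gospel:

```yaml
# Rough sketch only -- field names are from memory and vary by AGH version;
# the web UI (client settings) is the supported way to do this. Shown just
# to illustrate grouping several IPs under one named client with its own
# blocked services.
clients:
  persistent:
    - name: kid1-devices            # hypothetical client name
      ids:                          # every IP/MAC belonging to this client
        - 192.168.10.21
        - 192.168.10.22
      use_global_settings: true     # still inherit the global blocklists
      blocked_services:             # per-client service blocks
        - tiktok
        - snapchat
```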
For my kids, this setup means it’s blocking all those pesky ads that pop up in games getting them to go and download more mind-numbing and draining games…
Finally, I can keep tabs on my network traffic and see what individual devices are accessing what domains; however, this doesn’t mean that I can see the individual web pages.
I have two AGH instances set up on two different hosts, and an additional AdGuardHome-sync container that syncs between the two instances, to make sure that all settings are mirrored.
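If anyone wants to replicate that, a minimal compose sketch for the sync container looks roughly like this. The image name, env var names, and the run command are from memory of the bakito/adguardhome-sync project, so double-check its README; the IPs and credentials are placeholders:

```yaml
# Rough sketch of an adguardhome-sync service -- verify image and env var
# names against the bakito/adguardhome-sync README before using.
services:
  adguardhome-sync:
    image: ghcr.io/bakito/adguardhome-sync:latest
    command: run
    environment:
      ORIGIN_URL: "http://192.168.1.53:3000"    # primary AGH instance (placeholder)
      ORIGIN_USERNAME: "admin"
      ORIGIN_PASSWORD: "changeme"
      REPLICA1_URL: "http://192.168.1.54:3000"  # secondary AGH instance (placeholder)
      REPLICA1_USERNAME: "admin"
      REPLICA1_PASSWORD: "changeme"
      CRON: "0 * * * *"                         # sync once an hour
    restart: unless-stopped
```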
Honestly I think this might be a better way than what I’m using now. I’ve subbed to dockerrelease.io and releasealert.dev… and get spammed all day every day because the devs keep pushing all sorts of updates to old branches… or because those sites aren’t configured well.
I agree that you’ll want to figure out inter-pod networking.
In Docker, you can create a specific “external” network (external meaning it’s created outside the compose stack rather than by it), and then you can attach the docker compose stack to that network and have the containers talk to each other using their container hostnames.
Personally, I would avoid host network mode as you expose those containers to the world (good if you want that, bad if you don’t)… possibly the same with using the public IP address of your instance.
You could alternatively bind the ports to 127.0.0.1, which would keep them from being exposed to the internet… (see above)
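A quick compose sketch of both ideas (service names, images, and the network name are just placeholders):

```yaml
# Create the network once, outside of compose:  docker network create shared-net
services:
  app:
    image: nginx:alpine                # placeholder image
    ports:
      - "127.0.0.1:8080:80"            # only reachable from the host itself, not the internet
    networks:
      - shared-net
  db:
    image: postgres:16                 # placeholder image
    environment:
      POSTGRES_PASSWORD: changeme
    networks:
      - shared-net                     # "app" can reach this container as hostname "db"

networks:
  shared-net:
    external: true                     # marks the network as created outside this stack
```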
So just depends on how you want to approach it.
I am running AdGuard Home DNS, not PiHole… but same idea. I have AGH running in two LXC containers on Proxmox. I have all DHCP zones configured to point to both instances, and I never reboot both at the same time. Additionally, I watch the status of the service to make sure it’s running before I reboot the other instance.
Outside of that, there’s really no other approach.
You would still need at least 2 DNS servers, but you could set up some sort of virtual IP or load-balancing IP and configure DHCP to point to that IP, so when one instance goes down it fails over to the other instance.
Like others, I started with ownCloud, but when Nextcloud forked I switched within a year. I haven’t looked back; it’s been working without any issues and is performant.
I don’t really care about the enterprise shit since it’s not being shoved in my face 🤷🏼♂️
Man I’m lame.
Used to be {env}-function##
Now it’s {env}-{vlanlocation}-function##
VLAN location such as DMZ, Infra, Jump for jump boxes, IOTSec or IOTInsec, etc.
I use both WireGuard and OpenVPN to VPN into my home network.
However, it doesn’t matter whether you use a domain or just an IP… if you get blocked from accessing either / both… you’re screwed. 🤷🏼♂️
You are looking for a disaster recovery plan. I believe you are going down the right path, but it’s something that will take time.
I backup important files to my local NAS or directly store them on the local NAS.
This NAS then backs up to an off-site cloud backup provider (Backblaze B2 storage).
Finally, I have a virtual machine that has all the same directories mounted and backs up to a different cloud provider.
It’s not quite 3-2-1… but it works.
I only back up important files. I do not do full system backups for my Windows clients. I do technically back up full Linux VMs from within Proxmox to my NAS… but that’s because I’m lazy and didn’t write a backup script to back up specific files and such. The idea that you’ll be able to pull a full system image quickly from a cloud provider will bite you in the ass.
In theory, when backing up containers, you want to back up the configurations, data, and the databases… but you shouldn’t worry about backing up the container image. That can usually be pulled again when necessary. I don’t store any of my docker container data in named volumes… I use bind mounts mapping a folder on the host to a directory in the docker container… so I can just back up directories on the host instead of trying to figure out the best way to back up a randomly named docker volume. This way I know what I’m backing up for sure.
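As a rough illustration of what I mean (paths, images, and service names are just examples):

```yaml
# Bind-mount host directories instead of named volumes, so a plain backup
# of /opt/appdata on the host catches config, data, and database files.
services:
  app:
    image: nextcloud:stable
    volumes:
      - /opt/appdata/app/html:/var/www/html    # host path -> container path
  db:
    image: mariadb:11
    environment:
      MARIADB_ROOT_PASSWORD: changeme
    volumes:
      - /opt/appdata/app/db:/var/lib/mysql     # DB files live on the host too
```

Then the backup job just has to cover /opt/appdata (plus a proper database dump if you want a consistent copy) instead of chasing down named volumes.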
Any questions, just ask!
Somehow, I have never seen this list… and easily over half of those projects I’ve never heard of but could add some great functionality to my home. Thanks for posting it!
I’ll pitch in here… so website DNS (Porkbun) is configured to point to your home IP, great!
2 things need to happen.
Once those are done, in theory, you should be able to access your website outside of your home network using your domain name.
I’ve just started to delve into Wazuh… but I’m super new to vulnerability management on a home lab level. I don’t do it for work so 🤷🏼♂️
Anyways, best suggestion is to keep all your containers, VMs, and hosts updated as best you can to remediate vulnerabilities that are discovered by others.
Otherwise, Wazuh is a good place to start, but there’s a learning curve for sure.
I cover most of the services I’m running in a recent post of my own where I was looking for assistance.
I’m not sure if you ever made your way to following through with this… but the three-node system isn’t a bad starting point. However, here’s how I would approach it (similar to how I actually got my start in homelabs and how I do things now):
1 system for your router (looks like you picked a Qotom unit, those are decent), 8–16 GB RAM
1 system for Proxmox virtualization… run all your services in LXCs or virtual machines, with as much RAM as you can get for the system
And 1 system dedicated to storage (TrueNAS or Unraid), 32 GB ECC RAM (personal preference but not necessarily needed even with ZFS for home use)
I’d start at https://reddit.com/r/homelab … but since we’re on Lemmy, I’d rather suggest posting on !homelab@geekroom.tech (new, but looking to gain traction)
Certainly has me concerned. I’ll have to investigate a bit more into the financial solvency of the company to better understand whether they are at least covering bills and such… but honestly sounds like they aren’t and haven’t been.
Going to need to start looking for alternative S3-compatible storage.