• 0 Posts
  • 38 Comments
Joined 2 years ago
Cake day: July 20th, 2023



  • Assuming that all monitors do anything specific at all would be a folly, no. There are no assumptions there, the sRGB spec has no ambiguity when it comes to the transfer function of the display.

    That a certain percentage of displays don’t behave like expected is annoying, but doesn’t really change anything (beyond allowing the user to change the assumed transfer function in SDR mode).

    The video https://www.youtube.com/watch?v=NzhUzeNUBuM goes more in-depth, but it is a very true statement that "some displays decode with the inverse OETF and some don't". This issue has been plaguing displays for decades now.

    There are no assumptions there, the sRGB spec has no ambiguity when it comes to the transfer function of the display.

    You are 100% right in saying "the reference display is gamma 2.2". However, we can only wish that were what displays actually do; even Color.org themselves got this wrong ( https://www.color.org/srgb.pdf ), which leads people astray.
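    To make the disagreement concrete, here is a quick sketch (plain Python, function names mine) of the two decodings being argued about: the piecewise sRGB inverse OETF versus a pure 2.2 power curve. They agree at white but diverge badly in the darks, which is exactly where the miscalibration shows up.

```python
def srgb_inverse_oetf(v):
    # Piecewise sRGB decode (inverse of the sRGB OETF), per IEC 61966-2-1
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

def gamma22(v):
    # Pure power-law 2.2 decode - what the sRGB reference display does
    return v ** 2.2

# Both agree at white, but near black the curves disagree by a large factor:
for v in (0.02, 0.1, 0.5, 1.0):
    p, g = srgb_inverse_oetf(v), gamma22(v)
    print(f"signal {v:4.2f}: piecewise {p:.5f}  gamma2.2 {g:.5f}  ratio {p / g:.2f}")
```

    At a signal level of 0.02 the two decodings differ by roughly a factor of 8 in linear light, which is why shadows look visibly wrong when content graded on one kind of display is shown on the other.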

    The most likely actual reason Windows uses the piece-wise transfer function for HDR is that it did that in SDR mode too - where however the default ICC profile was also piece-wise sRGB, so it canceled out on 99% of PCs, and had no negative effects.

    I don't actually believe this to be the case; if it were, people who use custom ICC profiles would get extremely wonky results, which doesn't typically happen. On the other hand, it is very true that doing it the way they do gives the "least offensive" results. Though IMO the best solution would simply be to default to pure 2.2 and allow users to override the transfer function. The color protocol allows for explicit piecewise sRGB anyway, so doing this should fit right into a fleshed-out color-managed setup.

    That’s a very different thing. Pushing viewing environment adjustments to the display side makes some amount of sense with SDR monitors - when you get an SDR display with increased luminance capabilities vs. the old one, you change the monitor to display the content comfortably in your environment

    I think I am a bit confused on the laptop analogy then, could you elaborate on it?

    With HDR though, if the operating system considers PQ content to be absolute in luminance, you can’t properly adjust that on the monitor side anymore, because a lot of monitors completely lock you out of brightness controls in HDR mode, and the vast majority of the ones that do allow you to adjust it, only allow you to reduce luminance, not increase it above “PQ absolute”.

    How monitors typically handle this is beyond me, I will admit, but I have seen some really bonkers ways of handling it, so I can't really comment on whether or not this holds true. Just so I am not misinterpreting you: are you saying that "if you feed the monitor 300 nits of PQ, it will not go above 300 nits"? If so, that is not what happens on my TV unless I am in "creator/PC" mode. In other modes it will go brighter or dimmer.

    My current monitor is only a 380-nit display, so I can't really verify that (nor do I have the hardware to at the moment).

    I didn’t claim that PQ had only one specification that uses it, I split up SMPTE ST 2084, rec.2100 and BT.2408 for a reason. I didn’t dive into it further because a hundred pages of diving into every detail that’s irrelevant in practice is counter productive to people actually learning useful things.

    Ah, I see; I was a bit confused about what you meant then. My apologies.

    Can you expand on what you mean with that?

    Keep in mind this was based on the above misinterpretation of what I thought you meant.

    With libjxl, it doesn't default to the common "reference white == SDR white" choice of "SDR white == 203"; libjxl defaults to "SDR white = 255" or something along those lines, I can't quite remember. The reasoning for this was simple: that was what they were tuning butteraugli on.

    That “directly” is very important, as it does very much make both these signal levels the same. As I wrote in the blog post, the spec is all about broadcasts and video.

    Other systems do sometimes split these two things up, but that nearly always just results in a bad user experience. I won’t rant anymore about the crapshow that is HDR on Windows, but my LG TV cranks up brightness of its UI to the absolute maximum while an HDR video is playing. If they would adhere to the recommendations of BT.2408, they would work much better.

    I think this is an issue of terminology; reference white is something the colourist often decides. Assuming that HDR graphics white == SDR white actually causes more problems than it solves. I would say that it is a "good default", but not a safe value to assume, and something the user may often need to override. I know that even when just watching movies in mpv, this is something I very often need to play with to get a good experience, and that's not even counting professionally done work.

    That’s just absolute nonsense. The very very vast majority of users do not have any clue whatsoever what transfer function content is using, or even what a transfer function, buffer encoding or even buffers are, the only difference they can see is that HDR gets brighter than SDR.

    And again, this too is about how applications should use the Wayland protocol. This is the only way to define it that makes any sense.

    This actually isn't really true. It is indeed the case that users won't know what transfer function content is using, but they absolutely do see a difference other than "HDR gets brighter than SDR", and that is "it's smoother in the dark areas", because that is equally true.

    Users have a lot of different assumptions about HDR, but they all follow some sort of trend: "it makes the content look smoother at a greater range of luminance". If I were to give a technical definition that follows general user expectations, it would be something along the lines of "a transfer function that provides perceptually smooth steps of luminance at a given bit depth, up to at least 1000 nits, in a given reference environment". That is a bad definition for sure, but at the very least it more closely aligns with general expectations of HDR, given its use in marketing.

    (I really hate the terms HDR and SDR, btw; I wish they would die in a fire for any technical discussion, and I really wish we could dissuade people from using them.)


  • I should elaborate on why the "peak white" stuff is wrong. They give this math for mapping linear luminance, and it can be really confusing: "what do we map the references to?" If PQ "graphics white" is 203, should we map sRGB white to 203? Clearly not, or at least not always, as implied by BT.2408.
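    For reference, BT.2408's 203 cd/m² graphics white sits at roughly 58% of the PQ signal range. A minimal sketch of the ST 2084 inverse EOTF (constants straight from the spec) shows where 203 lands:

```python
def pq_inverse_eotf(nits):
    # ST 2084 (PQ) inverse EOTF: absolute luminance in cd/m^2 -> signal in [0, 1]
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    y = (nits / 10000) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

print(pq_inverse_eotf(203))    # BT.2408 graphics white, roughly 0.58
print(pq_inverse_eotf(10000))  # full-scale PQ, exactly 1.0
```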

    The question of what we map SDR content to in an HDR space is complex, and in many cases it is almost certainly not some single number we can do a 1:1 mapping with, which is why specifications for inverse tone mapping exist. For instance, BT.2446 defines multiple tone-mapping algorithms to go from SDR->HDR->SDR or HDR->SDR->HDR, or any step in between, with minimal content and fidelity loss.

    We cannot use a simple one-size-fits-all function and expect everything to be hunky-dory.
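    To illustrate what that "one size fits all" function would look like (and why it is inadequate), here is the naive mapping in sketch form; it just scales SDR linear light onto one fixed luminance, with no answer for the content, the viewing environment, or the colourist's intent:

```python
def naive_sdr_to_hdr(sdr_linear, reference_nits=203.0):
    # Naive one-size-fits-all mapping: scale SDR linear light [0, 1] so that
    # SDR white lands on one chosen reference luminance. Every pixel gets the
    # same treatment regardless of content or viewing environment - exactly
    # what BT.2446-style tone mapping exists to avoid.
    return sdr_linear * reference_nits

print(naive_sdr_to_hdr(1.0))   # SDR white -> 203.0 nits, always
print(naive_sdr_to_hdr(0.18))  # mid grey  -> about 36.5 nits, always
```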


  • This makes numerous incorrect assumptions.

    For one, it assumes all sRGB monitors use gamma 2.2 for decoding images. This is, bluntly put, completely wrong. A large number of displays use the inverse OETF (the piecewise sRGB transform) for decoding sRGB. (For more information from a somewhat authoritative body, FilmLight's "sRGB, we need to talk" video on YouTube goes more in-depth, but the TL;DR is that 25-50% of displays use the inverse sRGB OETF.)

    This is why Windows HDR uses the inverse OETF. Decoding content graded on a pure 2.2 display with the inverse OETF is way better than decoding content graded on an inverse-OETF display with pure 2.2. Windows took the safe route of making sure most content looks at least OK. I would not say that Windows HDR is wrong; it's not right, but it's not wrong either. This is just the mess that sRGB gave us.

    Another time you should use the inverse sRGB OETF to linearize content is when the original content was encoded with the sRGB OETF and you want to get back to that working data, but this applies less to compositors and more to authoring workflows.
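    For that authoring case, the requirement is simply that encode and decode are exact inverses, so piecewise-encoded data must be linearized with the piecewise inverse. A sketch:

```python
def srgb_oetf(l):
    # sRGB OETF: linear light [0, 1] -> encoded signal
    if l <= 0.0031308:
        return 12.92 * l
    return 1.055 * l ** (1 / 2.4) - 0.055

def srgb_inverse_oetf(v):
    # Exact inverse of srgb_oetf, for getting back to the working linear data
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

# Round-tripping through the matching pair is lossless (up to float error);
# decoding with a pure 2.2 power curve instead would not return the original.
for l in (0.001, 0.18, 0.73):
    assert abs(srgb_inverse_oetf(srgb_oetf(l)) - l) < 1e-12
print("round trip ok")
```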

    Another wrong assumption

    When you use Windows 11 on a desktop monitor and enable HDR, you get an “SDR content brightness” slider in the settings - treating HDR content as something completely separate that’s somehow independent of the viewing environment, and that you cannot adjust the brightness of. With laptop displays however, you get a normal brightness slider, which applies to both SDR and HDR content.

    People have been adjusting monitor brightness for ages. Sometimes manually, sometimes with DDC etc.

    Another issue that is brought up is "graphics white". BT.2408 is a suggestion, not a hard-coded spec, and many different specs or recommendations use a different "graphics white" value; a good example of this is JXL. BT.2408 also very explicitly says 'The signal level of "HDR Reference White" is not directly related to the signal level of SDR "peak white".'

    This is important to note because it directly contradicts some of the seemingly core assumptions made in the article, and even some of the bullet points, like "a reference luminance, also known as HDR reference white, graphics white or SDR white" and "SDR things, like user interfaces in games, should use the reference luminance too".

    if your application has some need to differentiate between “SDR” and “HDR” displays (to change the buffer format for example), you can do so by checking if the maximum mastering luminance is greater than the reference luminance

    This needs to be expanded upon: it does NOT correlate with what the general user understands HDR and SDR to be. HDR and SDR in the context of video content are little more than marketing terms, and without context it can be hard to define what they mean. However, it is abundantly clear from this quote that how they are interpreting HDR and SDR (which is a very valid, technically inclined interpretation) does NOT fall in line with general user expectations.

    Anyone reading this article should be made aware of this.
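    For clarity, the check the article proposes boils down to nothing more than this (names are illustrative, not from any real protocol binding):

```python
def looks_like_hdr(max_mastering_luminance, reference_luminance):
    # The article's heuristic: treat a display as "HDR" for the application's
    # purposes if it can go meaningfully above reference white. Note that this
    # says nothing about what a *user* understands "HDR" to mean.
    return max_mastering_luminance > reference_luminance

print(looks_like_hdr(1000, 203))  # True: headroom above reference white
print(looks_like_hdr(203, 203))   # False: no headroom, effectively "SDR"
```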






  • Clickbaity title aside, this is actually pretty much true. What the vast majority of people want when they're writing their own compositors seems to be specifically the custom window-management aspects.

    And it is true that even with something like wlroots or Smithay, it is a lot of work to write your own compositor and have it be "competitive". And he is right: there are a lot of compositors out there that are just not usable for anything more than the basics, and there are tons more which are just abandoned toys that aren't really usable. That being said, we saw a lot of that with window managers too. But yes, writing a compositor is a lot more than writing a window manager.

    I personally don't use Hyprland, but I can see the point he's trying to make, and I think it's a rather good one. I think if we had more compositors that focused on scriptable window management, that would be for the better.

    I don't really see this as toxic either. I mean, if it's toxic to call a compositor trash in one way or another, then I would argue that 90% of the Linux community is far more toxic than he is. It's just a matter of truth: Wayland is a big, complicated thing with a lot of protocols, and some of it is poorly documented.

    And of course, this is him shilling his own compositor. It's his own compositor, and this is his blog about making it. Of course he's going to put up a post shilling his own compositor.

    That being said, as I said earlier, I would like to see a more scriptable take on things like window management. I don't think Hyprland has to be unique in this aspect, but as it stands, it most definitely is.

    Pardon my weird language, it's hard to use STT.


  • As it stands, none of them are really great; I'm hoping COSMIC will be in the future.

    "Touch primary" DEs (i.e. for a phone, or a tablet with no keyboard or a detachable one, etc.) I think are the way to go.

    Plasma Mobile is king, 100%; nothing gets remotely close right now.

    Phosh I found far too buggy, and the apps are far too limiting. Things like squeekboard, for instance, don't scale properly, and I had issues running Chromium browsers on it too.

    Ubuntu Touch uses Lomiri, which I'm sure is great, but I haven't had luck running Lomiri on any "common generic PC" Linux distro. I did try getting it running on Arch but found too many issues.

    Sxmo is nice in theory, but it's missing a lot of the ergonomics.

    Plasma Mobile is missing a lot, and it has some design choices I find not great; however, it has by far the best app ecosystem in terms of actual app quality, as well as actually working fine on tablets and phones alike.

    For touch-secondary experiences, I find KDE and GNOME to be just about what you would expect: both are fine and mostly navigable with touch only. I would say KDE is often a lot better in terms of responsiveness, while GNOME can bug out, as @that_leaflet@lemmy.world said. That being said, in portrait mode KDE is downright terrible at times, as many KDE desktop apps have zero support for portrait aspect ratios, and you are relying on scaling being "low" enough that the app fits anyway.

    As for some stuff you can look forward to in the future: we are starting to see projects like Catacomb, which is designed for phones. I also had some genuinely good luck with Niri, but it's still fairly buggy, and you need something to manually launch keyboards.

    In terms of applications, I have absurdly high hopes for COSMIC apps. Each one I have used thus far has a "tiling first" design policy, which it turns out translates pretty much 1:1 into being flexible on a phone. While COSMIC apps currently have really poor touch support, if you install them and pretend your mouse is a finger, you will find that each one is almost perfect in a phone form factor.

    EDIT: I wish I could say Linux touch was in a good spot, but realistically it's not. I personally recommend just installing something like Bliss, or using Waydroid as the primary experience. I am very active in the Bliss community, in large part because of its tablet and touch support. Android is still far better than Linux in general here, even if a little less flexible, and you can have a fully FOSS install like the one I have personally.





  • No way, Sherlock! Are you going to tell me that water exists next? You people watch too many movies. I don't know if you live in the inner city of some crap US state, or in a slum in some third-world country, but no fucking duh.

    Even in the States, cops get reported all the time. The vast majority of cases I hear of where a cop gets reported and no real disciplinary action is taken are almost always in some crappy city where the police literally can't find anyone to replace them. This is why things like transparency in the process are needed. But let's pretend for a second that most cops in the world are the corrupt maniacs that Hollywood likes to make them out to be.

    Three really bloody easy steps that literally any crappy US state, or really any state or country in general, could take right now would completely resolve this issue:

    • Make bodycam footage of incidents publicly accessible, redacting only necessary footage by way of destructive blur, and blur only. This keeps the privacy of folks intact while still making sure that each and every officer can be held responsible for their actions.
    • Make a brief "internal investigation" status publicly available when one occurs, and make all information admissible in court. Paired with publicly available bodycam footage of incidents, this both protects innocent officers by making it abundantly clear when a case is moot, and prevents internal corruption by making it easy for affected parties to hold corrupt officials responsible in court.
    • Increase funding for police in an open manner, with all expenses roughly detailed and publicly available, so they can accountably increase spending on things like de-escalation training and non-lethal subjugation training, whether that be grappling arts, tasers that aren't completely useless, whatever; it doesn't matter. Just give them more options and better training.

    But no, we can't do this. Because guess what: politicians, Democrats and Republicans alike for US folk, are all greedy assholes who benefit from division. Everyone wants to scream "defund the police", or treat the police as some overarching messiahs, and either get rid of them wholly or let them act with impunity. It wouldn't even cost that much money to do the first two points, which are the most important ones.