  • To clarify something, I don’t believe that current AI chatbots are sentient in any shape or form, and as they are now, they never will be. There’s at least one piece missing before we have sentient AI, and until we have it, making the models larger won’t make them sentient. LLM chatbots take the text so far and calculate, for each word in their vocabulary, how likely it is to come next. Then a word is picked at random based on those probabilities. That sampling step is the reason for the hallucinations that can be observed, and it’s also the reason the hallucinations will never go away. A minimal sketch of the idea follows below.
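
    For illustration, here’s a minimal sketch of that sampling step in Python (the vocabulary and the probabilities are invented for the example; real models work over tens of thousands of tokens):

    ```python
    import random

    # Toy stand-in for a model's output: a probability for each candidate
    # next word given the text so far. These numbers are made up.
    next_word_probs = {
        "Paris": 0.72,      # plausible continuation
        "Lyon": 0.14,
        "beautiful": 0.09,
        "Berlin": 0.05,     # wrong, but still has nonzero probability
    }

    words = list(next_word_probs)
    weights = list(next_word_probs.values())

    # random.choices draws proportionally to the weights, so once in a
    # while the wrong word gets picked -- that's the hallucination.
    print(random.choices(words, weights=weights, k=1)[0])
    ```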

    The AI industry lives on speculative hype; all the big players are losing money on it. The existence of people saying that AI can become a god and kill us all therefore helps fuel that hype. After all, if it can become a god, then all we need to do is tame said god. Of course, the truth is that it currently can’t become a god, and maybe the singularity is impossible. As long as no government takes the AI doomers seriously, they provide free advertising.

    Hence AI should be opposed on the basis that it’s unreliable and wasteful, not that it’s an existential threat. Claiming that current AI is an existential threat fosters hype, which increases investment, which in turn results in more environmental damage from wasteful energy usage.



  • Hey, just wanted to plug a grassroots advocacy nonprofit, PauseAI, that’s lobbying to pause AI development and/or increase regulations on AI due to concerns around the environment, jobs, and safety. [emphasis added]

    No, they’re concerned about AI becoming sentient, taking over the world, and killing us all. This, in turn, makes them little different from the people pushing for unlimited AI development; the only difference between the two groups is that the latter believe they’ll be able to control the superintelligence.

    If you look at their sources, they most prominently feature surveys of people who overestimate what we currently call AI. Other surveys are flat-out misrepresented. The State of AI Engineering survey, cited for a 25% chance that we’ll reach AGI in 2025, admits that for P(doom) they defined neither ‘doom’ nor its time frame. So, basically, if we die out because we all fap to AI images of titties instead of getting laid, that counts as AI-induced doom. Also, on said survey, 10% answered 0% chance, 0% being one of only two precise options offered; most of the other options covered ranges of 25 percentage points each. The other precise option was 100%.

    Basically, those guys are useful idiots for the AI industry, pushing a narrative not too dissimilar from the one pushed by the AI boosters. Don’t support them.


  • No, it is not security through obscurity. It’s a message signature algorithm; such algorithms are used in cryptography all the time.

    Yes it is. The scheme is that when you take a picture, the camera signs said picture. The key is stored somewhere in the camera, so the secrecy of the key hinges on the attacker not knowing how the camera accesses it. Once the attacker knows that, they can extract the key from the camera. Security therefore hinges on the secrecy of the camera’s design, i.e. the protocol the camera uses to access the key, in addition to the secrecy of the key itself. Therefore, it is security by obscurity.
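
    To make that concrete, here’s a minimal sketch of what such a signing scheme amounts to, assuming Ed25519 signatures and the Python cryptography package (actual cameras may use something else):

    ```python
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Stand-in for the private key baked into the camera's hardware.
    camera_key = ed25519.Ed25519PrivateKey.generate()
    public_key = camera_key.public_key()  # published so anyone can verify

    # The camera signs the image bytes at capture time.
    real_photo = b"...raw sensor data..."
    signature = camera_key.sign(real_photo)

    # Verification succeeds for the genuine photo...
    public_key.verify(signature, real_photo)  # raises InvalidSignature on mismatch

    # ...but an attacker who extracts camera_key from the device can sign
    # any fabricated image, and it verifies just as "authentically".
    fake_photo = b"...AI-generated forgery..."
    forged_signature = camera_key.sign(fake_photo)
    public_key.verify(forged_signature, fake_photo)  # also passes
    ```

    The cryptography itself is sound; the weak point is that the private key has to live inside hardware the attacker can hold in their hands.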


  • That’s security by obscurity. Given time, an attacker with physical access to the device will get every bit of data from it. And yes, you could mark that camera as compromised, but then there’s nothing stopping the attacker from just buying another camera and stripping the key from that one too, since they already know how. And yes, you could revoke all the keys for the entire model range and come up with a different puzzle for the next camera, but the attacker will just crack that one as well.

    Hiding the key on the camera in such a way that the camera can access it, but nobody else can, is impossible. We simply need to accept that a photograph or a video is no longer evidence.

    The idea in your second paragraph is good though, and much easier to implement than your first one.