I really like to build from zero, but some things are better copied, whether you fully understand them or not. :)
For example, I’m not qualified to check whether Hamilton and Euler were correct - I just follow their methods, and later double-check the output against the input.
This seems to be the kind of situation where, if the researchers truly believe their study is necessary, they have to:
After that, if they still feel their study is necessary, maybe they should run it and publish the results.
If some eager redditors then start sending death threats, that’s unfortunate. I would catalogue them, but not report them anywhere unless something actually happens.
As for the question of whether a tailor-made response that considers someone’s background can sway opinions better - diplomats have known that for ages. (If you approach an influential person with a weighty proposal, it has always been worthwhile to know their background, think through several ways they might perceive the proposal, and frame your explanation so it relates to their viewpoint.)
AI bots that take a person’s background into consideration will - if implemented well - indeed be more effective at swaying opinions.
As to whether secrecy was really needed - the article points to other studies that apparently managed to demonstrate the persuasive capability of AI bots without deception or secrecy. So maybe it wasn’t needed after all.