Signal’s claim to fame here is that they were subpoenaed in 2016 and could only supply account creation and last connection times:
> The American Civil Liberties Union announced Tuesday that Open Whisper Systems (OWS), the company behind popular encrypted messaging app Signal, was subpoenaed earlier this year by a federal grand jury in the Eastern District of Virginia to hand over a slew of information—"subscriber name, addresses, telephone numbers, email addresses, method of payment"—on two of its users.
> ... “The only information responsive to the subpoena held by OWS is the time of account creation and the date of the last connection to Signal servers,” Kaufman continued, also pointing out that the company did in fact hand over this data.
I think this discussion should also mention that Signal is a non-profit organization, dedicated to enabling secure and private communications.
Yes, it's not strong proof, but it should be taken into account when comparing the goals and motivations of organizations developing various other communicators.
The organization behind your communicator app could be in the business of gathering data about you and selling it in various forms (Facebook, Google), in the business of selling hardware and add-on software services (Apple), or in the non-business of trying to provide you with private communications.
I prefer to look at the history of who founded and continues to run the Signal Foundation... Moxie Marlinspike. Moxie has a long history of improving security in all kinds of tech and fighting for privacy.
The Signal app itself is open source, as are various pieces of the tech stack. You can audit for yourself what is being sent and how the protocols work. The protocol itself has won awards for its security and elegance.
There are a lot of good things to say about Signal, and you can easily find them all. They have shipped some annoying or less-than-ideal features that are opt-out instead of opt-in, but they aren't sacrificing privacy for them.
Your phone is running an APK, which is a bunch of signed code. You don't have the keys to sign such an APK yourself, but you can get tools that will tell you exactly what's inside the one you have.
I believe the Java source in GitHub is designed to be capable of a reproducible build, where you get the exact same Java binaries out as Signal's own builders did and thus you can compare that to confirm the Java code in your APK matches a specific Git checkout.
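In practice, that comparison boils down to checking the two APKs (which are just zip files) entry by entry, while skipping the signature directory, which can never match since you don't hold Signal's signing key. Here is a minimal Python sketch of the idea; the file names are illustrative, and Signal's own comparison tooling handles more edge cases:

```python
import hashlib
import zipfile

def entry_hashes(apk_path):
    """Map each file inside the APK (a zip archive) to its SHA-256 digest."""
    hashes = {}
    with zipfile.ZipFile(apk_path) as apk:
        for name in apk.namelist():
            hashes[name] = hashlib.sha256(apk.read(name)).hexdigest()
    return hashes

def diff_apks(official_path, rebuilt_path, ignore_prefixes=("META-INF/",)):
    """Return the entries whose contents differ between the two APKs,
    skipping the signature directory, which cannot match without the
    publisher's private key."""
    a, b = entry_hashes(official_path), entry_hashes(rebuilt_path)
    return sorted(
        name for name in (set(a) | set(b))
        if not name.startswith(ignore_prefixes) and a.get(name) != b.get(name)
    )
```

An empty result means every non-signature entry in your rebuilt APK matches the one from the store.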
The media files (e.g. images, labels) are just straight binary copies so that's easy enough.
However there is native code to make stuff like video calls work, and when I last paid attention there was no reproducible build for that component. So you could imagine that somehow the native video call code is actually a secret backdoor or something.
The source includes a script that runs the build inside a Docker container from a set of previously built binaries, which are allegedly signed with keys that are kept secret; it then just outputs "the APKs are the same" and you have to take that on faith ¯\_(ツ)_/¯
Still, that's years ahead of anything else that actually has users.
The server is essentially closed source. There is an open-source server you can host yourself, but it's widely believed to be nowhere close to the one they actually run.
> should be taken into account when comparing the goals and motivations of organizations developing various other communicators.
Business or funding models can change for both for- and non-profit organizations. Especially as people move to options that are believed to have better user-privacy, the idea that they do not sell/monetize collected user data today does not indicate what they will do tomorrow.
Unless users have strong evidence that companies are not collecting and/or monetizing this information (which as the OP pointed out, there is for Signal as found in a subpoena), the "billboard" approach towards promising user privacy via marketing and PR is a shallow one at best for non-profits as well as for-profit companies.
Is it possible that they could in fact produce this data but were prevented from publicly saying so due to a gag order?
I'm asking specifically because I remember Private Internet Access, a VPN provider, also being tested in court in the past, and because of this I've chosen to trust them despite them falling under Five Eyes jurisdiction.
In short, the ACLU helped them to lift the gag order, and the blog itself shows the legal documents. The documents show exactly the data returned (Account creation and last access in Unix millis). Only the phone numbers are still redacted.
Mullvad has contributed to WireGuard, which gives me confidence in their service. The experience of creating an account without giving my name and email address is also the best. The only thing left is that the billing entry (I use a credit card) has the prefix: VPN*
Do you get decent speeds from Mullvad? Friends were reporting that they moved back to PIA due to worse speeds on Mullvad. That and the lack of a chrome extension (which is occasionally useful) has prevented me from switching away from PIA even if I'm unhappy about being in business with Karpeles and Kape.
I just fired it up and connected to an endpoint in my city. I have a 600 Mbps download pipe and hit 150 Mbps with default settings, and about 275 Mbps with WireGuard selected in the Mullvad app. Switching to TCP in the Mullvad app didn't change my result enough to notice.
I didn't try other servers/cities to get more information.
I am getting great speed out of Mullvad, usable for everything, except frame critical gaming. Even video streaming usually works fine. I would say I get approximately 3/4 of my normal speed when using VPN.
"And Mr. Musk's endorsement of Signal last week sent publicly traded shares of Signal Advance Inc., a small medical device maker, soaring from a roughly $50 million market value to more than $3 billion. (The company has no relation to the messaging app.)"
Signal may have only supplied that metadata at the time. But what I am concerned about is that if Signal is US-based, couldn’t the state demand Signal’s app signing key via a NSL, and couldn’t that signing key then be used for targeted attacks by which someone of interest gets a Signal app upgrade that is malicious (while everyone else gets the non-malicious app)? I admit to being somewhat unfamiliar with Android distribution through the Play Store, so if this is unfeasible, help me understand why.
Yes. But if you specifically are targeted by organisations capable of issuing NSLs, you're completely hosed already. (And they're just as likely, if not more so, to have done that to your OS instead of just the Signal app.)
Technically they could get the signature key, but they can't force Signal to publish it via the store. Users would have to download an .apk file and install it directly. At that point there is no reason to have the signing key at all as the phone will recognize a sideload as a third party install. As far as I know, the government cannot compel a company to do something like update an app.
> but they can't force Signal to publish it via the store
Is there not a suspicion that Google, another US-based corporation, may have some agreement with American national security to supply malicious APKs to individual targets via the Play Store? Having Signal’s signing key would allow the state to present that custom-targeted APK as an ordinary Signal version update.
While I'm not saying Google hasn't done something like this (I have no proof either way) there's a strong legal argument to be made that forcing a company to produce binaries is compelled speech which goes against the first amendment.
It's more about preventing companies like Facebook getting their hands on everyone's data and abusing it as well as preventing organizations like Signal themselves using / abusing this data.
We won't ever truly know if Signal's data makes its way into the hands of government security agencies, but I would say it more than likely does, or will if they want it to in the future.
If some government wants to get you they will get you, probably via your operating system... Signal won't help you. If that's your concern then you gotta stay off the internet to be honest!
It's important to note that Telegram does store all your data by default as they do not enable E2EE for everything like Signal does. So if you're under the assumption that they don't, this is incorrect.
Telegram, for all intents and purposes, is about as secure as using Facebook. The best you can do with Telegram is hope they don't sell out or get compromised at some point in the future, because all your private communications are stored on their servers forever. Telegram does have "secret chats", which from what I can gather, don't even work for group chats, only one-to-one messages.
My general advice is to treat Telegram like a new Facebook if you have to use it: assume everything may be read by everyone, and don't treat it like it's private and secure.
For "text messaging" friends and family use Signal. Everything is end-to-end encrypted by default, so you know nobody is collecting your data.
In a way, yes, Telegram is even less secure than WhatsApp for this.
The way I've been presenting it to people is that Telegram can look at more data than WhatsApp can. But WhatsApp will use the data they have more than Telegram will. That's the tradeoff.
And yes, obviously Signal is more secure than both of them but I've been steering non-techies to Telegram because of usability, backups, cross-device history etc. As usual, everything is a tradeoff, but if people were happily using WhatsApp up until now, and also use Gmail, Telegram is not worse than those.
Durov recently announced they will start monetizing through ads, but only in "channels" with huge subscriber counts (which generate a lot of costs). Channels are 1:N public broadcasts, a bit like Twitter.
$1 per user/year sounds about right. I use Telegram more than any other chat app. People send lots of media (and even large files) and Telegram archives them forever. One group Telegram I’m in is 6 years old and has 500+ VIDEOS (and 10,000s of images) permanently archived in it.
What I don't get is that people trust unverifiable builds of Signal, Telegram, WhatsApp, etc. as "secure" based on their E2EE implementations, when that part of the binaries we install on our phones isn't verifiable against the code or compilable by ourselves.
But what I do like about Telegram is their good user experience and Bot API developer experience. It's soooooooooo fucking good I'm telling you. It just works, be it on web, mobile, and desktop.
At this point who the fuck knows if Durov can be trusted (hell we all wish, right, no harm in that). But regardless of that, at the end of the day I'd be willing to admit he's a fucking genius when it comes to Telegram's UX and DX.
> At this point who the fuck knows if Durov can be trusted (hell we all wish, right, no harm in that).
It's a threat model decision. If you're someone who wants privacy from the US or other Western governments (think Antifa on the left side, or corona-deniers, qanons and other conspiracy nuts on the right side), Telegram is the best option since the Western governments can't hold them accountable. If you're a Russian or Chinese dissident, or opposition in countries aligned with them (e.g. Serbia) Whatsapp and Facebook are your best bet.
Yep, and also keep in mind that at the FBI/DOJ Capitol breach presser yesterday, the FBI dude basically said "it's hard to tell who is shitposting and who isn't, so it takes some elbow grease", which I take to mean that it's OK to shitpost. Just, you know, don't use computers for anything you want to keep secret.
We don't know that Signal doesn't store data about users on its servers. Even the source code can't tell us that, because we don't run the servers.
What we do know is that programs like Telegram have to store data about users on their servers, by design. A big difference between the two projects is that Signal is carefully designed to minimize the amount of data the service needs to operate; it's why identifiers are phone numbers --- so it can piggyback on your already-existing contact lists, which are kept on your phone.
By contrast, other services store, in effect, a durable list of every person you communicate with, usually indexed in a plaintext database.
> We don't know that Signal doesn't store data about users on its servers. Even the source code can't tell us that, because we don't run the servers.
Yes. Ultimately we have no choice but to trust trust itself.[a] That said, if the OP were a non-technical friend asking me the same question, I would respond more or less like this:
"Of all the widely used messaging services, Signal is the only one known to be designed to minimize the amount of user data needed to operate, and all indications are that they are operating as designed[b], so Signal is likely your best choice today if privacy is your main concern."
It's been relevant recently due to the SolarWinds supply chain hack, too, since the implant was inserted into the build process, so I've been seeing it a lot more too. It wasn't used to infect a compiler, but still makes people think of Trusting Trust.
My understanding: If you verify the safety numbers in person, then I believe you can be confident that it's E2E encrypted for that conversation. If the safety numbers are different, then there could be a nefarious actor listening in.
Someone please correct me if I'm wrong.
Edit: That being said, I believe they could still record IPs, as well as the destination and timestamps of each message.
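The reason safety numbers work is that both phones derive the same human-comparable string from the two identity keys, so an attacker who substituted a key would change the displayed number. The following toy sketch shows the shape of the idea only; Signal's real scheme uses iterated SHA-512 over a versioned encoding of the identity key and identifier, not this simplified hash:

```python
import hashlib

def fingerprint(identity_key: bytes, digits: int = 30) -> str:
    """Derive a fixed-length decimal fingerprint from one identity key.
    Illustrative only; not Signal's actual derivation."""
    digest = hashlib.sha512(identity_key).digest()
    # Interpret the digest as a big integer and keep the first `digits` digits.
    return str(int.from_bytes(digest, "big"))[:digits].zfill(digits)

def safety_number(my_key: bytes, their_key: bytes) -> str:
    """Both parties sort the two fingerprints before joining them, so the
    displayed number is identical on both phones regardless of who computes it."""
    halves = sorted([fingerprint(my_key), fingerprint(their_key)])
    return " ".join(halves)
```

If a MITM swapped in their own key on one side, that side's fingerprint (and hence the combined number) would differ, which is exactly what the in-person comparison detects.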
It only helps verify what data the client sends to their servers, not what fraction of that data is stored on their servers. They could be (but probably aren't; see other comments) storing e.g. information about how often you connect and the volume of data that passes through their servers.
I just had a skim over the post and it seems to be saying that it allows them to process user data without the OS having access to it. This does nothing at all for letting me verify what is running on their server or that they are even using this SGX feature at all.
It protects signal from hackers or a malicious datacenter provider at best.
> SGX enclaves also support a feature called remote attestation. Remote attestation provides a cryptographic guarantee of the code that is running in a remote enclave over a network.
> Originally designed for DRM applications, most SGX examples imagine an SGX enclave running on a client. This would allow a server to stream media content to a client enclave with the assurance that the client software requesting the media is the “authentic” software that will play the media only once, instead of custom software that reverse engineered the network API call and will publish the media as a torrent instead.
Signal does not need access to your contacts. It does ask for the contacts permission, in order to show in the app the names that you have set for your contacts, but you can just answer no and everything works.
By contrast, if you give the same answer to WhatsApp, it flatly refuses to work. But it has already created an account on their servers, and from then on you appear to contacts of yours who do use WhatsApp as another WhatsApp user, which invites them to write to you there even though you cannot receive their messages. To fix this, you have to find the option in WhatsApp to delete your account.
I don't want to speak for the parent commenter, but I think the concern is that the local app could be exfiltrating the contact list (and then by the exact same logic, message content as well) in some side channel unrelated to anything seen in the published source code, unless (a) the user builds the apk from published source code themselves, or (b) if there's some way to prove that the apk received via the Play Store is identical to one built from that source code.
Is (b) achievable by all users who have this concern?
Signal and the ACLU sued and were granted permission to release sealed warrant data from a previous law enforcement request for user data.
As of mid-2016, and trusted as much as you feel like trusting something attested in a court of law, Signal stores: a bool (is this phone number a user) and two ints (epoch of signup, epoch of last transmission).
Signal: operations that involve sending your contacts (like contact discovery) use a pattern Signal invented where the client can validate the software running on the server. The server runs inside the SGX secure enclave. Before your client sends any data, it performs remote attestation on the running server code to ensure it matches the published open source code.
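The client-side logic amounts to: obtain an attested measurement of the code running in the enclave, compare it against the measurement of the published open-source build, and only then upload anything. Below is a toy Python model of that gate; the names and the quote structure are invented for illustration, and a real SGX flow involves Intel-signed quotes and an encrypted channel to the enclave:

```python
import hashlib
import hmac

# Hypothetical measurement of the enclave build we expect, derived from
# the published source (a real MRENCLAVE comes from the SGX build process).
EXPECTED_MRENCLAVE = hashlib.sha256(b"published contact-discovery build").hexdigest()

def verify_attestation(quote: dict, expected_mrenclave: str) -> bool:
    """Toy client-side check: accept the server only if the attested code
    measurement matches the measurement of the published source."""
    return hmac.compare_digest(quote["mrenclave"], expected_mrenclave)

def send_contacts(contacts, quote):
    """Refuse to upload anything unless attestation succeeds."""
    if not verify_attestation(quote, EXPECTED_MRENCLAVE):
        raise RuntimeError("enclave measurement mismatch; refusing to upload")
    # ... in the real protocol, encrypt contacts to the attested enclave here ...
    return len(contacts)
```

The key property is that a server running modified code would produce a different measurement, so the client aborts before any contact data leaves the device.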
Keep in mind that SGX is not as secure as advertised.
Also, the whole security model hinges on trusting Intel not to give its private keys to anyone, which is a big ask for any company. The NSA/CIA can likely get those keys legally via a FISA court order, or illegally via hacking and/or an insider.
Sure, but the question wasn't "does the NSA have access to data", it was "how do we know that information isn't stored."
The answer is that signal includes an industry-leading attestation process using CPU security features.
It's true that if the CPU manufacturer is compromised that would compromise anything running on it, including attestation. But that's not really to do with Signal's implementation, and it is out of scope of the question.
I like this reason because you don't need to know anything about tech at all to be able to understand this. You also don't need to trust or like Snowden. If you view Snowden as a hero or a traitor, it changes nothing. All you need to trust is that he's got no reason to lie about using Signal, and neither does Elon Musk.
Telegram stores your message content in their cloud for "cloud chats" (the default), as those are not end-to-end encrypted.
Telegram's "secret chats" and Signal chats are end-to-end encrypted. The servers may still store metadata, and there is no way to tell whether they do other than joining them or having a trusted third party verify it.
To check that E2E-encrypted message content cannot be decrypted via backdoors on their servers, you need to ensure they use proven encryption schemes and that the client implementation actually corresponds to those algorithms.
> I'm just curious how we trust companies such as Signal, Telegram, Mozilla, that claim they don't store and sell our data?
These are three very different companies with very different security processes and trust profiles.
In the case of Signal: if you trust that the source code they distribute is the same as the app available in the Play Store, then it's pretty easy to verify that the messaging data is end-to-end encrypted in a way that prevents Signal from having much metadata that they even could store. With "sealed sender", they don't even know who's talking to whom: https://signal.org/blog/sealed-sender/
There's the possibility that Signal could ship a different app in the Play store, but that would require active malice to do in a way that would not be trivial to discover, and at some point you do have to trust someone. It's not impossible, but it's hard to imagine a world in which Signal is compromised but other links in the chain aren't, because quite frankly, there are far more easily corruptible or hackable links in the hardware/software stack that you use, so Signal would make a pretty inefficient target for someone who wants monetizeable data.
i.e., an accidental divergence between the two would be more conspicuous
I had to scroll down a shockingly long way to find this.
The point is that even if Signal permanently stored everything you ever sent them, then they wouldn't be able to read it.
- You can build the client yourself per Signal's reproducible builds, so they could not actually ship an app different from the published source without it being immediately detectable
- You can validate the source code does not send any unencrypted data to Signal
- You can validate that your private keys used for encryption are stored locally on the device and not transmitted to Signal
Theoretically, anyone who has the corresponding private key could decrypt the message. So if your contact uses an unofficial client which shares their private key with a third party, then that third party could decrypt that message. However, by that point, the app creator has compromised the device anyway, and could do something as naive as taking screenshots of all the messages in the background after Signal has done the work of ensuring secure transmission.
Note, I haven't actually done all that, because I do trust Signal. But I could if I wanted to. And obviously, this assumes that all the cryptographic standards used in Signal are still unbroken - but if they were, then you're screwed either way.
In the case of Signal, I imagine people assume all of the following:
1. the protocol between client and server is set up in such a way that even if Signal wanted to store interesting information (for example, messages), they could not access it, so they don't store anything since it would be useless
2. the app implements the protocol faithfully and this has been checked by people perusing the source code
3. the binary downloaded from the app/play store phone is compiled from the sources listed on github
> I get how it might be done in theory but real life is complicated. Has anyone attempted to do this?
This is mentioned elsewhere, but the answer is: reproducible builds.
You can take the Signal client source (which is available on Github), build an APK or whatnot yourself, then get the SHA256 hash or whatever and compare that to the artifact downloaded from the app store and validate that it's the same.
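As a naive sketch, that comparison is just hashing the two files, as below. Note the caveat: in practice the store APK carries Signal's signature, which your local build can't reproduce, so whole-file hashes won't match and the real verification compares the signed contents instead:

```python
import hashlib

def sha256_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 in chunks so large APKs don't need
    to fit in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_match(store_apk: str, local_apk: str) -> bool:
    """Naive byte-for-byte comparison of two builds via their digests."""
    return sha256_file(store_apk) == sha256_file(local_apk)
```

This is the conceptual core; a mismatch only tells you something differs, and you then need an entry-level diff to see whether the difference is merely the signature.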
You don't compare builds because you probably don't actually have sources. What you do is use a special iPhone (a Security Research Device) that Apple grants some researchers or you use an emulator like the one from Corellium (to whom Apple recently lost a lawsuit over this emulator) to probe and step through the code. Find the key sections that do the real crypto work and make sure that they do what they are supposed to do and that they are getting the correct inputs.
There is a large group of people who do this sort of research, and some fraction of them do this research and actually talk about it or publish papers. If you could find a deliberate weakness in the security of an app like what we are talking about (or WhatsApp or iMessages) then you have just printed your own golden ticket to whatever mobile cybersecurity job you want for the next decade or two, so there is a bit of an incentive to publish if something like this was discovered...
I'm not familiar enough with Matrix, but I'm curious about how/if it can protect against malicious federated server operators? Does that just boil down to needing to trust they are not running modified code?
Your own Matrix server does not spread any information unless you allow it or unless you message other servers.
Now, concerning the other servers, it may be a problem, just like Gmail is a problem for everyone running their own email server. However, it's a much smaller problem and at least theoretically everyone who cares can escape the walled garden.
You can create a server for all your friends and you will always know exactly which information is shared with others and which isn't.
Telegram is very clear that they do store our stuff on their servers. And in clear text unless you choose end-to-end encryption.
My concern is not about my data being stored on their servers. My concern is about marketing data being sold to third parties in order to target advertising at me, just as when you leave "third-party cookies" active in your browser. That is creepy and invasive. Would Zuckerberg ever do such a thing?
Right! The keyword here is "reproducible builds". Basically, once there is documentation about how to produce the release build, you can do it yourself and compare the resulting hash with the build distributed in the store.
Generally speaking it does not come for free, but once you find a way (e.g. for iOS, compiling with a specific Xcode version on a specific OS with some adjusted config) it is kind of doable (except that Apple encrypts your build server-side for DRM purposes, so you'll need a jailbroken phone to work around that).
For Signal there is an open issue here for iOS 
and some documentation for Android 
This has nothing to do with the comment you replied to, as you have no idea what software is running on their server, so what would it even mean to reproduce it in the first place? The correct answer is merely "the server never received much in the first place so it doesn't matter as much if they stored all of it".
Ok, sure. But what do you propose? It's still a much better situation than what we have with Whatsapp. Is there something that the Signal Foundation could do to alleviate that concern you have? There's no technical solution in any technology for preventing the other side being compromised, as far as I can see.
Yeah and since you have the possibility of dealing with state actors with deep pockets, you have to wonder if Android or iOS doesn't have the ability to copy your private keys and send those off somewhere for storage. Because of signal's popularity, it feels pretty possible to me.
If the NSA did have it backdoored somehow through the OS, it's a good bet they'd force LE agencies to use parallel construction to keep that information top secret.
That is why we really need open source hardware and OS's. A good (or even functional) open linux phone can't come fast enough.
Key authentication is not for the "paranoid" or simply those with "high risk profiles", otherwise every web browser in the universe wouldn't do it by default on every single connection to every single website. It is a normal, routine thing that is expected in all modern secure communications systems.
We've got certificate authorities to centralize trust for server public keys. And those require trusting organizations that lots of people don't want to trust. We don't have an equivalent system for individuals. There is no trivial push-button key verification process for peer-to-peer communications. Key signing parties suck and never worked. Key validation for things like Signal is nicely automated if you are physically near the other person. But beyond that it is tricky.
It is hard enough to get my parents to use a secure messenger. If I told them they needed to do a key verification process for every person they ever communicate with... they'd just go back to facebook messenger or sms.
I think it is completely reasonable for somebody to say "I don't care enough to worry about validating public keys" while also educating people like journalists about how to do that correctly.
Video calls alone won't stop a MITM attack. They would just send both video streams along, and record both sides.
Signal does have the capability to have a verification phrase displayed, which is generated from the session key. Reading that off can make the video more difficult to MITM, because then they'd have to morph the audio to match the phrase, and if it's done after the video is setup, morph the video as well. Not impossible, but difficult.
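The idea behind such a phrase is that both ends hash the shared session key down to a few words and read them aloud; a MITM necessarily holds two different session keys, so the two phrases won't match. A minimal sketch follows; the word list and parameters are invented for illustration, and real systems (e.g. ZRTP's short authentication string) use standardized encodings:

```python
import hashlib

# Hypothetical word list; real schemes use larger standardized lists.
WORDS = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot",
         "golf", "hotel", "india", "juliet", "kilo", "lima",
         "mike", "november", "oscar", "papa", "quebec", "romeo",
         "sierra", "tango", "uniform", "victor", "whiskey", "xray",
         "yankee", "zulu"]

def verification_phrase(session_key: bytes, num_words: int = 4) -> str:
    """Map a hash of the session key to a short human-readable phrase.
    Both endpoints derive the same words from the same key, so reading
    them aloud over the call exposes a key mismatch caused by a MITM."""
    digest = hashlib.sha256(session_key).digest()
    return " ".join(WORDS[b % len(WORDS)] for b in digest[:num_words])
```

An attacker would have to forge the audio (and video) of the phrase in real time to pass this check, which is the "difficult but not impossible" bar described above.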
This is false. A video call will not prevent or detect MITM. You may be suggesting that a video call is used to authenticate the key, which is certainly a step in the right direction, but I don't think Signal supports this.
I do think it is a valid concern. Over the years, various sources reported that intelligence agencies mostly use metadata (who's talking to whom, i.e. the social network) in their analysis because message content is harder to parse and understand (and, outside of email traffic, harder to obtain in the first place).
While this is a great way to build trust, there is obviously no way to confirm the App Store version is the same as one built from their public source. In fact, due to the way Apple optimizes apps for each device, this becomes even harder. Furthermore, just because you compile it from source and put it on your phone does not mean that you can reasonably stay aware of or understand all the internal workings that happen inside the app.
I know that developers can post LLVM bitcode to the App Store instead of a binary, which allows Apple to recompile it for architectural changes. I'd be surprised if Apple optimized per device. Creating separate builds with optimizations for different iPhone models would make more sense. Do you have more details on that?
Are you going to read every commit and fully explore the entire app to know your messages and encryption are being handled securely? And keep doing this every time you update? If so, you have more time than I do. :-)
The software on the server doesn't matter, as long as the encryption is solid on the device. That's the whole point: the device handles all encryption/decryption, so the server can't understand any of the data coming and going. The reason they don't store the data is that it would be pointless. The GitHub repos contain the device source code which, for most platforms, can be verified.
No, it does matter. In some cases (think of a dictatorship) you not only want the secret police not to read your messages, you don't want them to know at all who you are talking to (and how often, and when!).
Otherwise you might all go to jail (or worse) if they are after one contact of yours. And then you can try to feel safe in the knowledge that they don't know your encryption password.
Well, if a government goes authoritarian, then it does not matter much what service you use, if you have to assume your phone has spyware on it.
If the main danger is police scanning the phone for compromising material (without police spyware on it), then there are some ways to deal with it technically, by using services that don't leave a trace. Telegram, for example, has a "secret chat" function that won't save the messages, meaning someone scanning your phone later won't find them.
(Which I heard is also a main reason for many people to join Telegram: so they can chat with their affairs and not have their wives read it.)
Then there are simply private tabs in Chrome or Firefox, from which you can use chat services without a trace (if the chat services are not cooperating with the police, or are decentralised by default; in that scenario I think I would use Matrix).
Anyway, do you live in Kashmir?
I know of the conflict mainly from reading Shalimar the Clown by Rushdie. Just curious about your opinion, if you know the book; I heard it was not well received in Kashmir itself?
I think it was very well written, but I don't know how accurate it is.
Yeah. Just last year I had to teach neighbours and such to use a launcher like Evie which lets you hide apps. Many were stopped during random street checks and that saved their Turkey. Heh.
Eight months or so ago I was stopped because it looked like I was "recording a video" on my phone, which I actually was. A slow turn, a double press of the power button, and a Pikachu face that I wasn't. Still, a couple of guys around helped, or I was history.
No, I haven't read Rushdie. There is that whole Satanic Verses thing around him; he isn't liked.
The problem with telegram as with WhatsApp and signal is phone numbers. India has had this network analyzer on isp level for like 6-7 years, called "netra". So all unencrypted traffic goes through it. Same for all encrypted traffic. This is the reason why I stopped using tor, because my traffic would show up uniquely than rest and that gets them suspicious quickly.
There is a lot of text written on the conflict, which actually is more than 500 years old. Kashmir has been under foreign oppressive occupation for over 500 years constantly, and even today is under 3 nations. It's not like the occupation won't affect the people.
I am trying to get people I know onto Matrix because there is no PII, and I'm waiting for Dendrite to come out of beta so that I can set up my own server.
"No, I haven't read Rushdie. He has that whole Satanic Verses controversy around him; he isn't liked."
Well, he did make many people aware of the conflict, which created attention, which results in the Indian police having to give interviews to The Intercept, for example, which helps in some ways. Things would probably be darker without him.
What is your opinion on a political solution?
Do you think independence would work out (if your big neighbours would let you)?
My understanding is that the Kashmiri population itself is divided?
I "think" I am safe. Recently a prominent lawyer was assassinated in broad daylight; that has gotten me scared. Other than that, this place is actually safer for non-combatants than the rest of the subcontinent, and I can say that with authority. The governments, OTOH, are not safe for the people.
From what I have observed, and from the facts, at least on the Indian-occupied side, India spent millions to convince entire generations about a local hero who happened to be pro-India.
Yeah, so it's not like people are not protesting; we just don't see a point in conventional protests. Kashmir has been fighting foreign invaders for over 500 years, so opposition is kind of in the system.
As a Kashmiri, I see no alternative other than complete independence. There is no other way forward, whether that takes 20 years or 500. Doesn't matter.
Personally I think all three of these nations have to let go of Kashmir, not for some altruistic reason but because of CPEC. India and Pakistan "need" to pass through Kashmir to access it. China cannot afford to jeopardize their project because of Indo-Pak squabbles; for them business is king, and the sooner things settle down, the easier.
For us Kashmiris, we could practically live off of the port fees for all the goods passing through our borders, so yeah, I am hopeful.
"Personally I think all three of these nations have to let go of Kashmir, not for some altruistic reason but because of CPEC. India and Pakistan "need" to pass through Kashmir to access it. China cannot afford to jeopardize their project because of Indo-Pak squabbles; for them business is king, and the sooner things settle down, the easier."
I would be more afraid that those big powers next to you don't want to let you go, for exactly that reason. They want to use your land.
But yeah, I hope that they can settle for that: leave you in the middle, a buffer between them.
Good, but currently the land is divided between 3, and there is fighting. Unless the status quo changes, the fighting will continue. It's not like one of the three can force its way onto the others' territory and the other two will stay quiet. Either all can have it like now and keep fighting, or all let go and no one fights.
Perhaps a little pedantic, but I don't think this is technically correct: Stallman and the GPL wouldn't really apply here, since it doesn't seem like the client-side application code is being questioned regarding trust, but rather the intermediary code on Signal's own servers, and the GPL's copyleft isn't triggered by merely running code on a server. In that case, the Affero GPL would be the answer, and that license was first published in 2007, which is something like 18 years after the initial GPL release.
I don't think that actually detracts from the gist of your point, but I just wanted to point it out in case anyone was interested.
What prevents Google from replacing Signal in the Android application store with their own custom, backdoored version? Can we check a hash or something? Does the Signal Foundation do that on a regular basis?
If Google wanted to read your messages and were willing to use malware to do it, there’s little to stop them on Android. Even if Signal checked the apk regularly, there’s no guarantee that the apk served to them is the same one served to everyone else. They could also push an update to the OS that recognizes the Signal apk and applies a patch after downloading but before installing.
That said, Signal does apparently support reproducible builds, so people can check that the apk matches what's on GitHub (though this is more a way to detect malfeasance on Signal's part than Google's).
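The "check a hash" idea from the question above can be sketched in a few lines of shell. This is only an illustration with placeholder files, not Signal's actual verification procedure: a store-served APK and a local reproducible build differ in their signing blocks, so a raw hash comparison only succeeds for byte-identical files, and Signal's reproducible-build docs instead use a diff script that ignores the signature metadata.

```shell
# Illustration only: stand-in files instead of real APKs.
printf 'example apk bytes' > store.apk        # pretend: APK downloaded from the store
printf 'example apk bytes' > local-build.apk  # pretend: APK you built yourself

# Hash both files and compare. Identical bytes -> identical SHA-256 digests.
h_store=$(sha256sum store.apk | cut -d' ' -f1)
h_local=$(sha256sum local-build.apk | cut -d' ' -f1)

if [ "$h_store" = "$h_local" ]; then
  echo "MATCH"      # prints "MATCH" here, since both stand-ins hold the same bytes
else
  echo "MISMATCH"   # with real APKs this branch is likely, due to signing differences
fi
```

The limitation in the comments above still applies: even a successful match only tells you the file *you* fetched equals the file *you* built; it can't rule out Google serving a different APK to someone else's device.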
There's nothing stopping Google from silently pushing a keylogger to your phone and recording every single thing you do. They don't need to hijack Signal or anything else for that. By using your phone you are implicitly trusting Google, the manufacturer and several other parties.
If you trust the source code of the software you're running, you can at least get a sense of what data they're getting in the first place. You know, at least, that they're not getting the content of your communications if you verify safety numbers. You can also prove that they're not getting the contents of the gifs you're grabbing for your conversation, because the client makes a secure connection to the gif service using Signal's servers as a proxy.
As far as promising not to store your metadata, or promising not to deliberately give the gif service information about your account because they hate you, or promising not to store your contacts when you search for other friends with Signal, then yeah you have to just take their word for it. Though, they may over time look for ways to put some of those guarantees on the client side as well with some clever engineering, so you could prove it.
Telegram stores your data on their servers by default. You have the option of removing single messages or conversations, but there's no way of knowing if they really do so. Also, if you remove your account without removing your conversations first, they stay there forever (others can still see the messages).
Which data? We can be somewhat sure that they don't have access to the content of Signal or Telegram secret chats as long as we have verified the identity of our contacts.
After that, what data do you care about? Neither Signal nor Telegram is intended to provide complete anonymity. That is a much harder problem. For Mozilla, that would involve Tor. I don't think that Mozilla really has "servers" in the sense you mean.
Telegram is based in the UAE. The UAE is known for very strict monitoring of its citizens; until recently they didn't even allow FaceTime to work there. I honestly doubt that Telegram doesn't give the UAE government access whenever they need it; it's a monarchy.
If you wish to be more certain, use something open-source. For instance, Matrix has many clients made by different teams, in the open, and several of these are part of e.g. Debian, so you should be able to find at least one you can trust.
There's a lot about Signal in particular that they get right. AFAIK:
(1) All Signal messaging is E2EE; (2) they don't store messages on their servers; (3) the client code is open source, and it seems like a good portion of the server code is open source.
Where I think Signal could go further on being the most secure, useful, and privacy-conscious messaging app/company in the world:
1. Open source ALL of the server code. They have something called Signal-Server (https://github.com/signalapp/Signal-Server) on their Github, but it's unclear if this is the server they use, or simply a server one could theoretically use to run a private Signal server.
2. Open source all server-side services/infrastructure code that doesn't compromise security in some way.
3. Better features. Signal is currently the most secure and privacy-conscious of the messaging apps, but solidly the worst overall user experience. It's not that it's bad, it's just that the other apps are much better. People like gifs and giphy and emojis and a fast-feeling interface. This is important, because it's hard to be a privacy-conscious individual when all your friends want to text on other apps. At least in my social circle, Signal is still the thing that people jump over to when they want to be extra super sure they're not leaving a paper trail, but not the default messaging app they use.
4. Introduce a user-supported business model. This probably makes a lot of people uneasy, and while I appreciate the current grant and donation-based business model (the Wikipedia model), that model comes at a great cost in efficiency. By operating effectively as a non-profit, you are inherently in a less competitive position relative to your competitors (the best product and engineering people are more likely to go to competitors who can pay more), and you're persistently in fund-raising mode (again, see: Wikipedia). There are lots of ways to skin this cat; maybe the easiest is to ask power users to pay, say, $5/mo. Or just give people the option to pay with absolutely zero obligation. Some non-zero cohort would inevitably take them up on it.
Most of these suggestions, of course, especially 1-3, are very hard and come at an enormous cost. Building in public as an open-source business seems to massively slow things down and introduces a huge amount of community-management overhead. That said, I'm sure there are ways to manage or mitigate those costs.
There's no need to trust them. You assume they log everything that your device sends them, as well as the time, IP address, etc. and infer all they can from it. Then you act accordingly. You can apply similar conservative assumptions to your device and the programs it runs, but for practical purposes you may want to relax them somewhat.
"No real market" is an easily disprovable claim. While not worth a fortune individually, in bulk it's worth an imperial butt-load. Attention is what you're able to sell; that's advertising and sales. E.g., do you think those associated with right-wing militias would be more or less receptive to advertisements for prepper gear? How big is that market?