Comments by "XSportSeeker" (@XSpImmaLion) on "Apple service phoning home when viewing images was a bug" video.
-
Interesting technical discussion, but the fundamental error is that the Apple defenders are trying to prove a negative with shallow evidence, and then speculating based on... loyalty to the brand? Fanboyism? Contrarianism? Not even sure anymore.
I'm not an Apple user, nor do I intend to ever own an Apple device (I had an iPad in the past)... but let me position myself as a neutral onlooker for the security analysis here.
The testing done with code analysis and dedicated Internet traffic monitoring tools is indeed more comprehensive than just saying the app is pinging back to the mothership, and there might be some logic in the rebuke there, but let's think about the claims.
The first problem, and the reason I don't care about Apple's claims about security in the first place: the code is proprietary, so you don't know what it actually does.
You can use tools to monitor traffic over a given period for one specific app or function on the phone, but at most this tells you what it's doing during that period in that specific case - not what it can do in the future, what it's doing for everyone, or what it's doing at any point in time you aren't monitoring or analyzing it.
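To make the limitation concrete, here's a toy sketch (everything in it is hypothetical, not anything observed in Apple's code): a client that only phones home under a condition the observer may never trigger during a capture window. A short monitoring session sees nothing and proves nothing about the longer run.

```python
# Toy illustration (all names hypothetical): a client that only "sends"
# something on every Nth interaction. A capture window covering the first
# nine interactions observes zero traffic and misses the behavior entirely.

class HypotheticalClient:
    def __init__(self, send_every=10):
        self.calls = 0
        self.send_every = send_every
        self.sent = []  # stands in for real network traffic

    def view_photo(self, photo_id):
        self.calls += 1
        if self.calls % self.send_every == 0:
            # Only now does anything leave the "device".
            self.sent.append({"photo": photo_id})

def observed_traffic(client, photo_ids):
    """Simulate a capture window: return only what was 'sent' during it."""
    before = len(client.sent)
    for pid in photo_ids:
        client.view_photo(pid)
    return client.sent[before:]

client = HypotheticalClient()
window1 = observed_traffic(client, range(9))      # short window: sees nothing
window2 = observed_traffic(client, range(9, 11))  # later window: catches a send
```

The point isn't that Apple does this; it's that a clean capture window is only evidence about that window.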
Given this, it might eliminate a subset of methods that Apple could use to look into your photos or your info, but it does not eliminate the whole set. Again, the general problem of proving a negative.
There are so many different ways that Apple could take info from you using proprietary, opaque software that is closed to analysis and auditing that this very specific, very contained look into it just doesn't amount to much. The guy who did the analysis, if his findings are accurate, proved that Apple didn't take significant personally identifiable information for that app, for those photos, at that point in time, in that specific case - and only that. He didn't prove Apple is private, secure, or trustworthy, or that it couldn't be using that same mechanism to collect info from other people, or at different times, etc.
This is a general limitation of Internet traffic analysis. What you see is specific to your case at that specific time, and that's that.
So, it's sending an empty message with no personally identifiable information, or any information at all, at that specific moment in time for that specific app. Sure, that's fine. It serves as a limited counter to the argument that Apple is collecting data via that specific process, and as an appropriate rebuke to the alarmist claims.
Assuming it's a bug seems a bit disingenuous... it could be, but I wouldn't jump to conclusions there. That's about as valid as, say, assuming it's evidence that Apple is trying to come up with ways to send data back and just hasn't fully implemented it yet.
In fact, from a security and privacy standpoint, that would probably be the better assumption. It doesn't make much sense for a "bug" to act like that, does it? A bug that interacts with phone data and sends an empty "ping" back to a specific Apple server?
That sounds like a pretty intentional, specific kind of function to me, too deliberate to be dismissed as just a bug, regardless of whether it was removed in the latest update or not.
And then, of course, this gives you no guarantee that it's sending nothing back all of the time, for all cases, and for all phones - it's specific to the case analyzed.
The other thing is that, because of how integrated and complex the Apple OS and apps on an iPhone are, there are multiple processes in constant communication with Apple servers (or other servers) all the time, sending all sorts of data back, including several streams that are pretty much unreadable.
So, what stops Apple from taking the data from photos or other personal stuff, shoving it into a buffer, and sending it at some other time, stuffed into the communications of another Apple app, process, or functionality, perhaps to a third-party server, for the company to then unpack and read later on?
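The buffer-and-piggyback idea above can be sketched in a few lines. This is purely illustrative - every name here is invented, and it describes the general pattern, not anything found in Apple's software: data harvested now, smuggled out later inside an unrelated, expected-looking message.

```python
# Hypothetical sketch of the buffer-and-piggyback pattern: sensitive records
# are queued, then embedded later in the payload of an innocuous-looking
# message. All names are invented for illustration.
import base64
import json

class CovertBuffer:
    def __init__(self):
        self.pending = []

    def stash(self, record):
        # Collected now, sent later - which defeats point-in-time monitoring.
        self.pending.append(record)

    def piggyback(self, innocuous_payload):
        """Embed all buffered data in an otherwise-legitimate message."""
        blob = base64.b64encode(json.dumps(self.pending).encode()).decode()
        self.pending = []
        # To a traffic monitor this field looks like any other opaque token.
        return {**innocuous_payload, "diag_token": blob}

buf = CovertBuffer()
buf.stash({"photo_hash": "abc123"})                  # harvested while viewing
msg = buf.piggyback({"event": "app_update_check"})   # sent hours later
# The receiving server can decode the token and recover the stashed records.
recovered = json.loads(base64.b64decode(msg["diag_token"]))
```

This is exactly why an opaque "token" field in traffic is indistinguishable from a session ID or diagnostic hash without access to the code that produced it.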
This is the fundamental problem of opaque code. There are numerous ways to collect data surreptitiously, and it's an arms race of strategies to avoid detection by common analysis tools.
To be clear, that's not to say code being open source magically solves all of that, or that companies using open source apps and OSs should be trusted by default. As Android and Google have shown, a whole ton of surreptitious spying and data collection can still take place for a long time before independent auditors and analysts detect it, if detection happens at all. It's still an arms race to understand the code and then identify potential ways it may be collecting data and sending it back to the mothership.
It's just that when code is proprietary, you don't have access to understand how the thing works. You might have some access to see what it's generating, but that doesn't give you a complete picture of it.
In any case, it was an interesting discussion. Setting aside who is right and who is wrong, I find it interesting how some people are willing to dive into the whole thing to see what is really happening. Unfortunately this whole thing is pretty out of reach for most users, but it seems it'll be more and more needed over time.
I remember just a while back how much fanboys kept saying that Android cannot be trusted because Google's business model relied on ad revenue, whereas Apple relied on product sales. Every time that argument came up I said that, because of the walled garden and proprietary code, plus capitalism of course, nothing was stopping Apple from one day also getting into the ad revenue business. Fanboys would always say how Apple would never do that, yadda yadda. Well, guess what? That whole argument fell flat.