jjp8182
Platinum Member
It's all about the "in and out"...for any site or script (within a site) to extract information, that information has to be allowed to leave your system...i.e., a site can't have what it can't get...and it all starts with a secure browser that does not allow scripts to run...along with a leak-proof firewall that prevents a site/server from gaining access via ports beyond the typical HTTP/HTTPS server ports...
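As a rough sketch of the "leak-proof firewall" idea above: the policy is default-deny, with only the usual web ports explicitly allowed out. This is a hypothetical illustration in Python (the port set and function name are my own, not from any particular firewall product):

```python
# Hypothetical sketch of a default-deny egress policy: only the
# typical web server ports are allowed out; everything else is dropped.
ALLOWED_OUTBOUND_PORTS = {80, 443}  # plain HTTP and HTTPS

def allow_outbound(port: int) -> bool:
    """Permit a connection only if its destination port is explicitly listed."""
    return port in ALLOWED_OUTBOUND_PORTS

print(allow_outbound(443))   # True  - normal HTTPS traffic
print(allow_outbound(6667))  # False - e.g. IRC, blocked by default
```

A real firewall enforces this at the packet level, of course, but the decision logic is the same: anything not on the allow list never leaves.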
Again...almost all data gleaned from common Internet users is harvested via scripts...by stopping those scripts from running you stop that data from being gathered and returned...if you don't know how to prevent scripts from executing, you're out of luck if you're seeking anonymity...it isn't difficult to do (prevent the execution of scripts), but you must be informed and realize they are running if not prevented...
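To make the point concrete: a script embedded in a page only "runs" if the client chooses to execute it. Here's a minimal sketch (my own toy example, using Python's stdlib `html.parser`) that reads a page's content while simply discarding everything inside `<script>` tags, so the tracking code is never executed:

```python
from html.parser import HTMLParser

class ScriptStripper(HTMLParser):
    """Collects page text while discarding anything inside <script> tags,
    illustrating that scripts only run if the client executes them."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.text_parts = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        if not self.in_script:
            self.text_parts.append(data)

# A toy page containing a (hypothetical) tracking script:
page = '<p>Hello</p><script>trackUser();</script><p>World</p>'
parser = ScriptStripper()
parser.feed(page)
print("".join(p.strip() for p in parser.text_parts))  # prints "HelloWorld"
```

Script-blocking browser extensions do essentially this (plus a lot more): the markup arrives, but the executable parts are never handed to a JavaScript engine.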
LoL...try requesting a URI on what novices commonly call "the dark web"...many of the "dark" destinations won't even open in the common browsers novices use, and the destinations are loaded with scripts that will eat a wannabe's OS for lunch...a majority of those destinations are commonly accessed via CLIs only, and most users are lost before they ever get started...
No disagreement here....thing is, requesting information from a site requires a routing path back to the requesting entity, so even when/if all other undesirable leakage is stopped, the site owner/information provider still knows the information was accessed/distributed - so depending on how much effort they put into it, you could be followed without anyone touching anything you have control over. Then there's also the fun of what all the hardware pieces might be doing as part of their regular operation (which gets really interesting when dealing with safety-critical systems, where even intercore interference on multicore processors can be an issue). From what I've seen, very few people (including the engineers & software developers) ever fully understand the systems they design/build...so I'll definitely not claim to, even for systems I've had to get intimately familiar with - particularly since many of them are frequently changed/"upgraded"....
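The "routing path back to the requester" point can be shown in a few lines: in any TCP exchange the server necessarily learns the client's address, because the reply has to be routed somewhere. This is a self-contained loopback sketch (not any real tracking mechanism, just the underlying fact):

```python
import socket
import threading

# Minimal sketch: the act of connecting hands the server the
# requester's (ip, port), because the response must be routed back.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # OS picks a free port
server.listen(1)
port = server.getsockname()[1]

seen = {}

def serve():
    conn, addr = server.accept()       # addr is the requester's (ip, port)
    seen["client"] = addr
    conn.sendall(b"response")
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.create_connection(("127.0.0.1", port))
client.recv(16)
client.close()
t.join()
server.close()

print("server saw requester at:", seen["client"])
```

Relays like VPNs or Tor only substitute a different return address in that exchange; *some* reachable address always has to be presented, which is exactly why a sufficiently motivated observer can work back along the path.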
...but most of this gets rather academic: most users can block the most common tracking, but stopping absolutely everything? That'd seem rather unlikely....particularly if the communication path remains entirely terrestrial-based (or uses TCP). So yeah, most of it can be stopped, as it's generally "cooperative tracking" (even if the user isn't aware or consenting), and for most people that's probably going to be good enough. I'd expect anything beyond that to start involving things outside the interest of most users (along with those doing the tracking), as the information gained wouldn't necessarily be worth the effort required to obtain it....
When even air-gapped systems can be/have been tampered with, it seems pretty unrealistic to expect that a continually connected device can't/won't be tracked to some degree....if the information gained is worth the effort. ...but again, that's getting into things that (currently) are unlikely to be an issue for >99.9999% of people.... (e.g., anyone who doesn't realistically need to worry about being targeted for capture/elimination by a world government over past actions probably doesn't need to worry about being tracked via most of those methods)
...which is why IMHO it's generally not worth worrying about too much (e.g., not beyond blocking the annoying advertising trackers from entities you don't have direct commercial/business ties with, or regular scam artists/spammers/criminals)
So yeah, secure browsers and firewalls go a long way...a person just needs to decide what's secure enough for them (which should realistically account for the sorts of threats/problems they actually need/want to prevent). :confused3: