This site went live in 2015 and hasn't changed much since then. Participating agencies have their Google Analytics data collected by the GSA, which hosts this page.
It's very useful, and I've referred to this site many times over the years (probably because a couple of sites I've worked on are in the top 50!).
For those wanting other awesome and informative govt sites, take a look at https://www.usaspending.gov/, which makes all government contracts data easily searchable (and bookmarkable).
Mostly off-topic, but the GSA is one of those federal agencies that is really under-appreciated.
When I was a reporter in the days before the commercial internet (yes, my tools were pencil, typewriter, and teletype), the GSA would publish lists of what government reports were coming out each month, helping you find all kinds of incredibly useful data. If you needed help, you just picked up a phone and they would point you in the right direction. I always looked forward to getting their big blue envelopes in the mail at work because I knew I could find something meaningful in there that I could localize for my audience. And the information was always presented in a manner that was both professional, and easy to understand.
The GSA is one of those agencies that is best left alone to do its work in obscurity. Every once in a long while, a politician will stick his nose in there, but usually only to get a name put on a building.
I bet the US government has lots of useful websites that I've never even heard of. But every now and again I randomly stumble across one (like the one in your comment), and it makes me wonder if there is a "discovery problem" here. What other useful govt sites are out there that I don't know about?
The discovery problem is not only known, it's writ large. There is a vast bureaucracy of benefits available to Americans that most Americans don't know about because the creation of the programs rarely includes advertising in the budget.
If you've ever seen the "question mark vest guy" (Matthew Lesko, https://en.wikipedia.org/wiki/Matthew_Lesko), who offers books on the federal programs available and how to take advantage of them... He's been criticized as a charlatan, as misleading people, or as encouraging people to "freeload," but here's the thing: he's really quite sincere. He believes the US government bureaucracy has failed in its duty of public information and education and is full of under-utilized programs that, as a result, become tools for the informed to siphon resources from those the programs were intended to help. He thinks that's unfair. The "Free Money Now" book covers and TV advertisements are employed because his target audience is assumed not to be savvy enough to do the research themselves, so he's trying to reach them with noise and spectacle where quiet black-and-white announcements buried in the back pages of local newspapers have already failed. (Not surprisingly, he started his work in an era when the easiest way to reach that audience was still books, because not knowing about benefits programs you were eligible for was strongly correlated with lack of Internet access.)
It's intentional. In addition to the lack of advertising, making people fill out needless/confusing forms, etc. also filters out people who are legitimately entitled to government benefits. It's a special type of cruelty to dangle a benefit behind so much red tape in front of people who don't have the time or know-how to deal with it!
The US COVID test program seems good at first blush. On the one hand, it was a breath of fresh air that all you have to do is go to one website and fill out one very short form. On the other, the parameters of the program were 4 tests per address. The USPS already knows every address, so why did I even have to bother filling out the form? Why aren't they just sending out the tests?! There are going to be people who want/need the tests but won't get them because of the bureaucracy.
They aren't sending the tests out, in part, because 1/3 of Americans would performatively destroy them on social media. Not to mention that the tests cost money and sending them to people who didn't ask for them isn't good practice.
The government generally has a responsibility to avoid waste, fraud and abuse. If they send something to everyone who qualifies then sooner or later it will be abused or inefficient, and those will be the stories that make the news.
The covid test program is a universal benefit. If someone wants to burn them, that's their prerogative. I don't care. What's good practice is giving people the benefits they're entitled to. I don't understand why sending them to people who didn't explicitly ask for them isn't good practice. If there's an explicit opt out, OK, that's different. Benefits really shouldn't have to be opt-in though. Requiring opt-in costs money and slows down, maybe even prevents, some people receiving their benefit.
That's the problem: when you make it so hard for people who need benefits to get them, when you trap them in a constant web of confusing paperwork and means testing (and don't forget, all this bureaucracy, developing and reviewing paperwork, ain't cheap either), it's easy to find fraud on a technicality, a form filled out incorrectly, even if you're really entitled to something. It's also easy to make someone give up because they don't have the time or access to the resources required to sign up.
In addition, with a lot of overhead, it makes it easier for states to abuse the system [0] as well. I get that this is a big mental block to overcome, but saying the government has to avoid waste and fraud in this way is a bit of a cop out.
Probably not worth it to you and me. For the target audience, the fact that the linked site doesn't have "tangible dimensions" (i.e. I don't know where I am "in" the site) and can't just be read front to back is already daunting.
I have relatives who have Lesko's book and just browse through it in the bathroom. It's comforting to them not only how comprehensive it is, but that in some sense it has a beginning and an end; when they finish the book, they have some kind of confidence that the percentage of programs they're aware of is now large. That might be a false confidence, but the sense it's possible keeps them reading. Hard to do that with a searchable index.
Here's one I stumbled upon recently: the NOAA (National Oceanic and Atmospheric Administration) has an interactive sea level rise map where you can see whether a given location in the US will suffer flooding/inundation from sea level rise based on the value the user selects: https://coast.noaa.gov/slr/#
All else being equal I would expect the Great Lakes to rise. My guess is that the lake levels are so strongly tied to rainfall patterns that it's pointless to predict their future levels right now.
Under "Visitor locations right now," 1% are coming from Graceville. As far as I can tell, that's either a town of 4k in Australia, 2k in Florida, or there are two Gracevilles in Minnesota with a few hundred people. Any idea what's up?
> IP mapping isn’t an exact science and so MaxMind assigns a default address when it can’t identify its true location. That address just happened to be the Arnolds’ property, a remote farm that is located slap-bang in the middle of America.
> More than 600 million IP addresses are associated with their farm and more than 5,000 companies are drawing information from MaxMind’s database.
I'd guess either some ISP/VPN is masking to that location, or a block of IPs incorrectly maps to that spot (perhaps by the library they're using to determine location from IP address).
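The default-location failure mode described above can be sketched in a few lines. This is a toy illustration only: the `KNOWN` table, the CIDR block, and the fallback string are all made up, and real geolocation libraries like MaxMind's work from a large binary database, not a dict.

```python
import ipaddress

# Illustrative-only lookup table; real databases map millions of blocks.
KNOWN = {"203.0.113.0/24": "Graceville, MN"}
DEFAULT = "country-center fallback"  # the "remote farm" problem

def locate(ip: str) -> str:
    """Return a place for an IP, falling back to a single default
    location when no block matches -- which is how unknown traffic
    can all pile up at one unlucky address."""
    addr = ipaddress.ip_address(ip)
    for cidr, place in KNOWN.items():
        if addr in ipaddress.ip_network(cidr):
            return place
    return DEFAULT
```

Any IP the database can't place lands on the same default coordinate, so that one spot appears to generate a disproportionate share of traffic.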
It seems like this data is from Google Analytics, which Firefox not only blocks but also shims, at least when strict privacy protection is enabled. So this doesn't show the correct number of Firefox users, only the users without privacy protection enabled and without any tracking blocker.
Things are only getting worse in this regard as more and more devs don't even test on FF, and FF lags behind new features. Internal business apps are going to be Chrome only and it's only downhill from there.
> They are trying to stop unmonitored exfiltration of data.
This was always a pipe dream (preventing data exfiltration). The best any company can do is prevent accidental exfiltration of data, like when someone attaches a spreadsheet with social security/credit card numbers to an email going out over the Internet. There are tools to detect (and stop) that sort of thing, but they don't work when the data is encrypted.
There currently exists no technology that can stop the human problem of data exfiltration. Here's a quick quiz to see if anyone in your company can exfiltrate whatever TF they want: Can employees play sounds on their desktop (as in, it emits sound)? Then they can exfiltrate any file they have read access to pretty damned quickly. Even huge, multi-gigabyte files!
Current data exfiltration methods can take advantage of a tiny corner of a monitor, the sound output (direct connect to line out or optical is ideal!), power lines (yes, this works! https://www.helpnetsecurity.com/2018/04/13/data-exfiltration...), various USB tricks (even if entire categories of "storage" devices are blocked via software), and many, many more. Most of them are basically undetectable as well and can be executed with JavaScript in any browser that gives the user access to the developer console.
Obviously, if end users have access to PowerShell or Python that's even faster/better at exfiltrating the data.
My favorite one though has got to be the sound output... You can write a simple script that converts bits into inaudible sounds that can be picked up by a cell phone in your pocket! It's not nearly as fast as a direct connection to the line out jack but it is so cool! haha
Second place has to be data exfiltration "by blinking the Num Lock LED".
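The audio trick above boils down to frequency-shift keying: each bit becomes a short tone at one of two near-ultrasonic frequencies that a phone microphone can still pick up. A minimal sketch, with all parameters (frequencies, bit duration, sample rate) chosen arbitrarily for illustration:

```python
import math

RATE = 44100           # samples per second
BIT_MS = 50            # tone duration per bit
F0, F1 = 18500, 19500  # "0" and "1" carrier frequencies, Hz (near-inaudible)

def bits(data: bytes):
    """Yield the bits of `data`, most significant first."""
    for byte in data:
        for i in range(7, -1, -1):
            yield (byte >> i) & 1

def encode(data: bytes):
    """Return raw float samples encoding `data` as two-tone FSK."""
    samples = []
    n = RATE * BIT_MS // 1000  # samples per bit
    for bit in bits(data):
        f = F1 if bit else F0
        samples.extend(math.sin(2 * math.pi * f * t / RATE)
                       for t in range(n))
    return samples
```

At 50 ms per bit this moves only ~20 bytes/second, which is why the direct line-out connection the parent mentions is so much faster; a receiver would band-pass filter around F0/F1 and threshold the energy in each window.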
Aren't they basically being kept alive by Google at this point? Almost 90% of their revenue is from that one deal, and it has felt like they have been more concerned with other initiatives for a while now.
14 years on and they still haven't found a way to make themselves any less dependent on Google's money. With >90% of their revenue derived from their direct anti-privacy competitor, they're entirely on life support.
Rust was their opportunity to escape that; now they've thrown it away and are back where they started.
Mozilla gave all the Rust trademarks away, so the opportunity to build a consultancy around Rust (since it has taken off) has been completely ruled out, even though that might have been a way out of depending on Google for 90% of all their revenue.
Right, but I think the overarching point is that 3% is awfully close to the 1% that is Internet Explorer, which many would consider dead. If the usage decline continues, it just wouldn't take much for people to stop supporting it.
If the company behind it dies then development on it will likely drastically decrease and it will quickly become a less attractive option. There's probably a reason why you chose Firefox over any number of other open-source browsers that don't have much development activity.
I used to love Firefox, but I fell out of love majorly. First there's a long-standing bug (5 years and counting) where newly opened tabs don't have access to localStorage.
Then 96.0 broke a lot of our code somehow. Cross-checked with 95 and everything still worked fine.
I hit and reported a very similar bug over a year ago. Try to add a cookie or localstorage entry in Dev Tools on a page that outputs JSON, such as: https://api.ipify.org/?format=json (Spoiler: it's broken)
The DX for Firefox just isn't that great when compared to Chrome.
Correct me if I'm wrong, but this shows that of the 5.06 billion visits in the last 90 days, 31.3% came from Windows while 1.1% came from GNU/Linux. If we assume that both groups visit government websites equally often, then for every GNU/Linux user there exist (31.3/1.1) = 28.5 Windows users. Scary stuff.
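The back-of-the-envelope math, taking the dashboard's 90-day OS shares at face value and assuming equal visiting habits across both groups:

```python
# 90-day OS share from the dashboard: 31.3% Windows, 1.1% GNU/Linux.
windows_share = 31.3
linux_share = 1.1

# Windows users per Linux user, under the equal-habits assumption.
ratio = windows_share / linux_share
print(round(ratio, 1))  # → 28.5
```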
1.1% is way too high. I bet they didn't filter out all the scrapers that poll gov't websites.
Hmm I wonder if just parsing the HTML still works like it did 8 years ago when I had to scrape the USPS: https://github.com/NavinF/USPS-scraper/blob/master/USPS_scra...
As long as the USPS only allows API requests from browsers (as opposed to the much more common situation where you need to update the status of every tracking number in a database), people still have to scrape their website pretending to be a browser.
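A minimal sketch of that "pretend to be a browser" pattern: fetch the page with a browser-like User-Agent, then pull the status out of the HTML. The URL pattern, the `tb-status` class, and the User-Agent string are all assumptions for illustration; USPS's actual markup differs and changes over time.

```python
import re
import urllib.request

# A browser-like User-Agent; without one, many sites reject the request.
BROWSER_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
              "AppleWebKit/537.36 (KHTML, like Gecko) "
              "Chrome/96.0 Safari/537.36")

def fetch_tracking_page(tracking_number):
    """Fetch a tracking page while masquerading as a browser."""
    url = ("https://tools.usps.com/go/TrackConfirmAction?tLabels="
           + tracking_number)
    req = urllib.request.Request(url, headers={"User-Agent": BROWSER_UA})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8", errors="replace")

def parse_status(html):
    """Extract the status text, assuming markup like
    <p class="tb-status">Delivered</p> (hypothetical)."""
    m = re.search(r'class="tb-status"[^>]*>([^<]+)<', html)
    return m.group(1).strip() if m else None
```

The fragility is the point of the parent comment: any markup change breaks `parse_status`, which is why a server-usable API would be so much better.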
Oh they had an API 8 years ago too. It’s just that they only let you use that API from JavaScript running on your users’ browsers.
The undocumented tiny ratelimits and threat of bans for server-side API users (while no such ratelimits applied to the HTML pages) forced pretty much every app to scrape their HTML server side.
From my README which quotes their old docs: “Note: The United States Postal Service expressly prohibits the use of Web Tools "scripting" without prior approval. Web Tools scripting can be defined as a technique to generate large volumes of Web Tools XML request transactions that are database- or batch-driven under program control, instead of being driven by individual user requests from a web site or a client software package. The USPS reserves the right to suspend server access without notification by any offending party that does not have prior approval for Web Tools scripting. Registered Web Tools customers that believe they have a legitimate requirement for Web Tools scripting should contact the ICCC to request approval.”
For what it's worth, Linux is over-represented in the (fake) User-Agent strings of the bots that attack my web servers. Most probably are indeed on Linux, since they are predominantly scripts running on cloud providers. :)
That site shows 113.1 million visits in the last 90 days, 47.2% from Windows and 1.2% from Linux. (47.2/1.2) = 39.3 Windows users for every Linux user.
I agree that it is skewed in a number of ways. I just wanted to estimate a lower bound.
Thank you for pointing out a better source of data for my use case.
The fact that a lot of Chromebooks are being sold is fairly solid; I would reckon that the questionable claim is that Chromebooks have strong sales traction with retail consumers. I bought myself a Lenovo Duet tablet last fall because I wanted a device that could last multiple days with intermittent streaming use and I missed the feeling of a cramped 10" netbook. I got it on sale for $200 USD and was floored by how nice the experience was on a PC that cheap. But friends and family who saw me with the tablet were shocked that I, an adult who likes computers, owned a Chromebook. I only know one other person IRL with a personal chromebook, and they bought it after being given a school-deployed one in college.
I use one and have had a couple over the years. Decently cheap, and I functionally just want a laptop form factor and UX for a tablet use case: web browsing, some light app usage (often for casting / streaming), and being able to type as needed, all from an actual couch / on the lap / wherever. There used to be no Android app support, so that part was a no-go, and now you have the Linux containers to fall back on if you need to run something else or want a proper terminal.
I have a bulky work laptop and a big desktop PC. The niche left over maps to a nice slim, fanless device for casual usage very well in the Chromebook space. Maybe preaching to the choir here, but my keyboard will need to be pried from my cold, dead hands and the tablet + detachable options all seemed way too delicate.
My anecdote: A lot of people seem to be using Chromebooks to replace tablets (because good, non-Amazon tablets that aren't expensive are rare these days). You can pick up a $250 touchscreen Chromebook that works as a vastly superior web-browsing and video-watching device than a $250 Samsung tablet.
Keep in mind that the majority of all fed gov visits are USPS tracking pages and the like. Chromebooks are mostly used by students, so I would expect this.
Or bots. When I still cared about visitor statistics 10+ years ago, I saw unrealistic amounts of visitors using old versions of Netscape, browsers on Amiga and other niche systems. They were probably all bots in disguise.
I love it when people put the instructions for what to enter into a field instead of actually filling out the field.
I run into it ALL the time and no matter how you write instructions, at least 1/3 of people won't read them but just skim to what they think they need and then copy/paste. Ugh.
>Why are there (personal?) email addresses in there as well? Who sets that as their user agent?
Might be some programming exercise. It's a common network programming exercise to build an HTTP client using bare sockets. But those don't look like student email addresses.
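That classic exercise looks something like this: hand-build an HTTP/1.1 GET request and send it over a raw TCP socket, no HTTP library involved. A minimal sketch (a student version would also need to parse the response headers and handle chunked encoding):

```python
import socket

def build_request(host, path="/"):
    """Hand-assemble a minimal HTTP/1.1 GET request."""
    return ("GET {} HTTP/1.1\r\n"
            "Host: {}\r\n"
            "Connection: close\r\n"
            "\r\n").format(path, host).encode("ascii")

def http_get(host, path="/", port=80):
    """Fetch a URL over a bare socket; returns raw response bytes
    (status line, headers, and body, undifferentiated)."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(build_request(host, path))
        chunks = []
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            chunks.append(chunk)
    return b"".join(chunks)
```

Exercises like this often set a default or throwaway User-Agent, which would explain odd strings showing up in the analytics.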
Some APIs require contact info in the user agent header. For example, all calls to the reddit API are supposed to contain the dev’s reddit /u/username in the user agent.
"Free of charge" pen testing more like. You may have missed the news but since yesterday there aren't even any commercial flights between China and the US anymore, they were all cancelled.
The US consulates and embassy in China are the only reliable source of information regarding air quality and pollution.
Also, India, having a large number of English speakers, probably has a significant percentage of the population that prefers getting COVID-19 info from the CDC.
US gov sites may not represent the entire web, but the platforms that the entire population uses to access government websites do. People don't use a different browser for the State Department and for Facebook. The data presented here is actually representative. If anything, it's conservative: not many teens are looking up tax forms or passport renewal forms (their parents do it for them), and these young users are less likely to use anything other than Chrome and Safari.
I am fairly certain this is a side effect of people sharing this link through messaging apps, thereby inflating Chrome (default browser for Android devices) and Safari (default browser for Apple devices) relative to Firefox.
Ack. I've used PubMed for over a decade (since it was taught as the de facto way to search the science literature in my Canadian university) and it never occurred to me that it was a US federal government website :facepalm:
When it says "now", how "now" do they mean? I've refreshed a few times and the number hasn't changed (171,533). I went to cisa.gov (disabled all my blockers) but the number here is still the same over here.
While I was looking at the data, the ranking of the top visited pages changed without reload, so I assume it is streamed in real-time. But I guess that the real-time data comes from aggregated and somewhat delayed data.
If you open up DevTools and look at the requests, realtime.json contains a "taken_at" property which shows that the data is updated every five minutes.
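Given that `taken_at` field, you can check how stale the "realtime" data is with a few lines. The timestamp format shown here is an assumption; the actual field may use a different ISO 8601 variant.

```python
import json
from datetime import datetime, timezone

# Hypothetical realtime.json payload; field name from DevTools, format assumed.
sample = '{"taken_at": "2022-01-28T12:00:00Z", "data": []}'

def age_seconds(payload, now):
    """Seconds elapsed between `taken_at` in the payload and `now`
    (an aware datetime)."""
    raw = json.loads(payload)["taken_at"]
    # fromisoformat (pre-3.11) doesn't accept a trailing "Z" directly.
    taken = datetime.fromisoformat(raw.replace("Z", "+00:00"))
    return (now - taken).total_seconds()
```

With the five-minute update cadence observed in DevTools, an age much beyond 300 seconds would suggest the dashboard has stopped refreshing.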
In state government, many desktops are set up with the agency's website as the browser homepage with no way to change it. I would think federal government is the same, that alone might skew some of these statistics.
As someone pointed out in another thread, there is something fishy going on with these numbers, with a high percentage of usage coming from Graceville. Who knows if they're spoofing user agents and whatnot.
Perhaps there's an internet exchange in one of the Gracevilles, and it accounts for traffic from nearby cities too?
I noticed that San Jose makes the list, as does LA and Seattle right now, but no San Francisco. I suspect that SF traffic goes through San Jose and thus most of the bay area appears as San Jose in their numbers. Not a big leap to imagine this happens in other places too.
Back in my day, the mantra was "Information wants to be free", and laws limiting freedom on the internet and freedom of information were looked down upon.
As a DARPA project, the early internet was populated almost exclusively by Americans for quite a long time. It grew rapidly, but I suspect the culture you're thinking of was steeped in the fact that almost everybody you talked to was an American.
Americans have a philosophy of the freedom of speech grounded in the principles of the First Amendment. Those principles are not universally shared, and indeed, even amongst the liberal nations the American take on the topic is pretty liberal (contrast the way, say, speech overtly supporting fascism is treated in Germany vs. the relatively new and controversial hate speech laws in the US, or the principles of affirmative defense for defamation in the US not shared by the UK, or the Refused Classification category for videogames in Australia, or the fact that the law against "publication or utterance of blasphemous matter" was repealed in Ireland in 2020).
As the average internet user grows to resemble more the average human than the average American human, regression to the international mean on freedom of information is to be anticipated.
the GDPR doesn't care where the website is as long as it's accessible to EU citizens
It would be most amusing to watch the EU commission attempt to enforce its extraterritorial "bad, bad cookies!" law against the US federal government on its own website.
https://18f.gsa.gov/2015/03/19/how-we-built-analytics-usa-go...
See also: the Congressional Budget Office, another agency best left alone to do its work quietly.
0 - https://www.clickorlando.com/news/2021/07/14/florida-to-pay-...
Lesko makes these programs easier to find, since he actively advertised on TV, but it still probably isn't worth the fee for that convenience.
If you want a fascinating rabbit hole: https://18f.gsa.gov/2014/12/18/a-complete-list-of-gov-domain...
I hope Mozilla doesn't die, but this looks really bad for them.
Firefox is in decline. Nothing has changed.
This is coming from someone who uses Firefox.
What an asinine process that Web Tools approval is.
My pattern-matching experience from real life tells me that this is unlikely...
The following, covering people logging into government services, is a better source for metrics on browser/OS usage.
https://analytics.usa.gov/general-services-administration/
Sign of the times?
NIST? Definitely.
LLNL MPI tutorials? Obviously.
But NSF… never?
https://stackoverflow.com/questions/29916054/change-user-age...
By device type:
Mobile 53.7%
Desktop 44.1%
Tablet 1.9%
By browser:
Chrome 48%
Safari 36.2%
Edge 6.4%
Firefox 2.8%
My personal conclusions:
Tablets remain a niche product, Firefox is dead, and Edge will be dead in a few years if it can't eat more market share, which it won't.
https://en.m.wikipedia.org/wiki/Usage_share_of_web_browsers
But it doesn't mean that the overall fraction of visitors from outside the US is that high.
iPads don't show themselves to be iPads. They read as full computers.
It's an anti-profiling feature Apple added a few years ago.
https://home.dotgov.gov
And you can see the code and development lifecycle on github: https://github.com/18F/analytics.usa.gov
Pretty lightweight for what it is.
Clearly an HN hug!