If time travelers from the future were to visit you, it would be difficult for them to quickly prove their authenticity.
Temporal passwords. At the start of each year, you devise a new password. You commit the password to memory, but you never write it down or divulge it to anyone until Dec. 31, when you submit it to the Temporal Password Registry, which publishes it and promotes its dissemination.
Prior to their visit, time travelers from the future can look up your temporal password in the registry for the year in which they plan to visit you. Their ability to communicate a password that you have not yet shared with anyone provides evidence that they are actually from the future.
I've worked on the same problem and came up with the same solution. My password was "Teapot Dome" from 2016-2018, and it's been "Cottage Cheese" since then. I have just come up with a new password, which I obviously won't share.
This solution doesn't actually work. Maybe the people are mind readers rather than time travelers. Maybe they beat the password out of you and then wiped your memory. Maybe they are time travelers, but it's like Groundhog Day rather than Back to the Future and they guessed ten billion passwords until they got it right. Maybe they have an Infinite Improbability Drive and just got really lucky.
Your test can prove that something funky is going on. It can't prove that the something funky is time travel.
This has a big flaw. If the future visitors copy all the passwords to a USB stick and take it with them, then anyone in the present who gets a copy can pretend to be from the future. Worse still, everyone would believe the phony future visitors without question.
Given the current pandemic, perhaps the concepts of 'party' and 'invite' are wholly alien to the people of the future with access to time machines. And assuming they react the way current archaeology does to things it can't explain ('artifact was used during religious or reproductive practices'), perhaps they simply didn't consider attending.
Maybe you could use a tamper-evident security token along with the password. Could be as simple as a fortune cookie. Keep it safe and crack it open at the end of the year to write the fortune next to your password. In the case of a probable time traveler who tells you your password, challenge them to tell you your fortune and open it afterwards to verify.
> it would be difficult for them to quickly prove their authenticity
Yes, you could tell them something that happens tomorrow or the next day. But if time is of the essence and you can't wait until tomorrow, then the Temporal Password helps you _immediately_ demonstrate your authenticity.
What if we generate a random number every, say, 5 minutes and maintain a log of them along with their timestamps? Then the time traveler could just look up a few entries from immediately after their destination time and convince people by predicting them in advance. Not immediate, but pretty quick.
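A minimal sketch of that beacon idea; the 5-minute interval and the log format are invented for illustration:

```python
import secrets
import time

def beacon_entry():
    """One log entry: the current timestamp and a fresh 128-bit random value."""
    return {"ts": int(time.time()), "value": secrets.token_hex(16)}

def verify_prediction(log, predicted_values):
    """A visitor 'predicts' the next few values in advance; once the beacon
    emits them, anyone can check the prediction against the log."""
    recent = [entry["value"] for entry in log[-len(predicted_values):]]
    return recent == predicted_values

# In a real beacon, an entry would be appended every ~300 seconds.
log = [beacon_entry() for _ in range(3)]
assert verify_prediction(log, [e["value"] for e in log[-2:]])
```

The values must be unpredictable (hence `secrets`, not `random`), or the "traveler" could simply be someone who reverse-engineered the generator.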
Could just bring along a future copy of the bitcoin blockchain, though as a downside that would potentially give away other information about the future (what people become rich, etc) they may not wish to divulge.
I guarantee you I'm going to forget a password in 12 months if I never have to type it in anywhere or use it on any regular basis.
Can I please come up with my password and submit it to the Temporal Password Registry immediately -- and just have it "go public" on December 31 of that year? It'd also be a nice way for me to check back in throughout the year, logged in as me (the only person who should be able to see my password for this year), just to verify what I thought was the password was correct... in the event of needing it.
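A hash commitment would support exactly this: submit a salted hash of the password in January, let the registry publish the hash immediately, then reveal the password and salt on Dec. 31 so anyone can verify the match. A minimal sketch; the salting scheme is an assumption, not an actual registry API:

```python
import hashlib
import secrets

def commit(password: str):
    """Return (commitment, salt). Publish the commitment now; keep the salt."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    return digest, salt

def verify(commitment: str, salt: str, password: str) -> bool:
    """On Dec. 31, reveal (salt, password); anyone can check the commitment."""
    return hashlib.sha256((salt + password).encode()).hexdigest() == commitment

c, s = commit("Cottage Cheese")
assert verify(c, s, "Cottage Cheese")
assert not verify(c, s, "Teapot Dome")
```

The salt prevents a dictionary attack on the published hash, so committing early doesn't weaken the scheme, and you can re-run `commit` against your remembered password anytime to check yourself.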
There could still be the possibility that the person claiming to be the time traveler is just someone who managed to hack into your computer memory.
I mean, if this happened to me, my first thought would be "who is this person and how did I get hacked?", which seems far more likely than being visited by someone from the future. I'd prefer to wait for confirmation, such as being told what will happen the next day.
Wouldn't it be easier for them to just tell you what the Dow Jones Industrial Average will close at on the day of their visit? If that's not convincing enough, they could do it for two or three days in a row.
It wouldn't work, given how vulnerable passwords are nowadays.
I would instead tap into blockchain-style technology. For instance: generate a block for each passing second or minute, tying your UID to a random value. The decryption key is only provided to you one year later; i.e., in 2021 you receive the key that decrypts your 2020 blocks, and decrypting a block yields the exact time it was created. That way, by the time you get the keys, the locks will already be deprecated, unless of course you time traveled.
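One reading of that scheme, sketched minimally: encrypt each period's timestamp under a key that is only published a year later, so only someone who brought next year's key back with them could decrypt a current block. The XOR stream construction below is an illustrative stand-in, not production crypto:

```python
import hashlib
import secrets
import time

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudorandom bytes from key via hash-counter expansion."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    ks = keystream(key, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))

decrypt = encrypt  # XOR stream cipher: the same operation inverts itself

year_key = secrets.token_bytes(16)   # published to everyone one year later
block = str(int(time.time())).encode()
ciphertext = encrypt(year_key, block)
assert decrypt(year_key, ciphertext) == block
```

Anyone decrypting today's block today must have obtained `year_key` before its release, which is the evidence the scheme is after.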
Schools are functionally no different from part-time prisons. You must attend daily under penalty of law.
Many teachers are plain awful.
They rely on students also being taught by their parents. That effectively forces parents to go through school again alongside their children, which perpetuates the vicious cycle. The teacher recalibrates the class to the students who either understood everything the first time or had supplemental education, and the rest languish.
Schools assign homework that is not easy to do when the student hasn’t fully grasped the concept. That burns time they could use to get better.
So... I am implementing Common Core math in software. The first part is an automatic homework solver for math. Once we have solved the student’s homework, we can teach them how to do it with generated problems. Crucially, there will be multiple perspectives and an ontology of topics so the student can backtrack to where they got lost in class weeks ago.
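The "ontology of topics" with backtracking could be as simple as a prerequisite graph walked backwards from the topic the student failed until the deepest unmastered concepts are found. The topic names and dependencies below are hypothetical:

```python
# Hypothetical prerequisite graph: topic -> topics it depends on.
PREREQS = {
    "fraction_addition": ["fractions", "addition"],
    "fractions": ["division"],
    "addition": ["counting"],
    "division": ["multiplication"],
    "multiplication": ["addition"],
    "counting": [],
}

def find_gaps(topic, mastered, prereqs=PREREQS):
    """Return the deepest unmastered prerequisites of `topic`:
    the places where the student actually got lost."""
    if topic in mastered:
        return set()
    deeper = set()
    for p in prereqs.get(topic, []):
        deeper |= find_gaps(p, mastered, prereqs)
    # If every prerequisite is mastered, this topic itself is the gap.
    return deeper or {topic}

mastered = {"counting", "addition", "multiplication"}
assert find_gaps("fraction_addition", mastered) == {"division"}
```

The tutor workflow described later in the thread ("dig down the stack, then build it back up") is exactly this traversal, done by hand.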
After we are good with math, we’ll do the same with English. It will probably not go too deep, but it will let students obtain the missing foundation of knowledge.
Maybe a relevant personal anecdote might help you -
My grandfather used to sit with me for an hour every morning and used to teach me maths.
He would focus on basics first. He would make sure I had the basics drilled in to me. Not just understood them, but mastered them. Then we would move on to the next topic.
It was a bit slow at first. But after a while, once the basics were done, I finished the whole year's math book in 2-3 months.
I have seen this in software engineering too. Once I am good at basics, or once they're drilled in enough, I am faster and quicker.
Drilling basics is basically like having the basics in O(1) look up with very reduced space complexity too. It reduces the amount of overhead your brain utilises. This makes your brain free to think about the actual problem you are solving. Also, I think this is what allows your brain to work in the background, even when you aren't actively thinking about the problem.
Lockdown has really shown up how my 7 year old struggles with his maths work set by school. I've gone back to basics with him and have been drilling him on simple numeracy until he can do it effortlessly using some flash cards I bought and some iPad apps (DoodleMaths, DoodleTables - can't recommend them enough).
Since then he has sailed through all of the new parts we're learning. I really expected it to be much harder than this, but it seems that fully grasping a few basic concepts and having confidence with basic numbers makes all the difference for really understanding the why of all the concepts built on top.
In about 6 weeks of me spending around 30 mins to an hour each weekday he has gone from refusing to look at a maths problem to being confident with it.
Retired teacher here...glad to see fresh thinking on this problem. The mega-publishers that serve school districts are stuck in a 1980s model. A project related to yours is ASSiSTments at Worcester Polytechnic Institute. The market is big, with space for several innovative projects.
Take your child back to first grade and systematically re-test his knowledge. Identify where he got lost, catch up, and then today's assignments will not be as difficult. The child is likely stuck on something that wasn't thoroughly understood in the past.
I did this for a child recently. He was in the special needs program. I looked at his homework and re-tested his math understanding. It turns out he had no solid understanding of place value and had difficulty with adding two digit numbers mentally. No wonder he was struggling. We fixed that by spending a week 30 minutes per day on just that concept, he caught up to the next roadblock, which was fractions, we fixed that, and so on.
That's expensive if you hire someone to do this, but enabling self-study through ability to backtrack would not require as much teaching skill and be more of a supervisory activity for parents.
There are entire books on how to interpret math word problems, which is really all about converting verbal expressions into mathematical symbols. If you haven't already, buy one or check one out at the library. I am building a parser that implements those books as if they were algorithms.
That's the approach I am working on. No NLP, no ML, nothing fancy like that. It will take a while, but I should be able to pull this off. This is a question about hard problems and I think it qualifies. ;)
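A toy illustration of the rules-as-algorithms idea: a handful of regex rewrite rules applied until the phrase stabilizes, with no NLP library involved. The phrase patterns here are invented for the example, not taken from any particular book:

```python
import re

# Rewrite rules mapping verbal phrases to symbolic expressions.
RULES = [
    (r"the sum of (\w+) and (\w+)", r"(\1 + \2)"),
    (r"the product of (\w+) and (\w+)", r"(\1 * \2)"),
    (r"twice (\w+)", r"(2 * \1)"),
    (r"(\w+) less than (\w+)", r"(\2 - \1)"),
]

def translate(phrase: str) -> str:
    """Apply the rewrite rules repeatedly until nothing changes."""
    out = phrase.lower()
    changed = True
    while changed:
        changed = False
        for pattern, repl in RULES:
            new = re.sub(pattern, repl, out)
            if new != out:
                out, changed = new, True
    return out

assert translate("the sum of x and 5") == "(x + 5)"
assert translate("3 less than x") == "(x - 3)"
```

A real implementation would need a proper grammar to handle nesting and ambiguity, but rule ordering already captures some of it: "less than" must reverse its operands, for example.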
I totally agree that missing foundation is a huge, compounding issue, and that it can be solved with “surgical education”. I used to do this as a tutor. You dig deeper down the stack of fundamentals until you find something that’s missing, and then you build the stack back up.
That being said, surgery is hard and can be traumatizing if taken on haphazardly. One of the harder problems, in my experience, is if an adolescent has been behind for years, it takes a long time to get to “normal”, and that can be really depressing for both the tutor and the pupil.
Another huge compounding problem is literacy skills. There are people in high school who can barely read, and it dramatically impacts their ability to catch up in every other subject. You can write a perfect explanation, but it might be totally disorienting to these students.
Education is really hard. I wish you the best of luck, it’s inspirational to read your ideas here!
Hey, this doesn't sound too different from the adaptive diagnostic Maths Pathway (https://mathspathway.com/) has. It starts at Level 1 and retests all mathematical knowledge of the student until it reaches their zone of proximal development (ZPD). It's all within a classroom environment, but it means all the students are learning the maths that suit them, not a one-size-fits all which is the issue you are referring to I believe.
Yeah, I feel there are a few potential PhDs in that statement. Coding the solver and the problem generator is the easy part; figuring out how to explain something, and from multiple perspectives, will be tough. The multiple perspectives are key. I hope I can build a community of people willing to explain and a community of moderators to vet the explanations.
My wife and I have taught our daughter materials sometimes 2 grade levels above where she is. I think every child has the potential for genius. It is really how you present the material.
I taught a few classes last year on how to program in Scratch to grades 1-5. The 4th and 5th graders really loved it and they went off on their own to learn more. It was not a large sample, but I think presenting the right material in the right way to the right age makes a huge difference.
I have been thinking of ways to improve education. I think there is huge potential in online video. If it is produced well, in the sense of "Hollywood" versus, say, Khan Academy, kids will enjoy it a lot more.
I didn't mention parents not being able to teach. I just asked where your assumption for that is coming from.
Just wondering as I haven't heard of this at all in my home country. Homework is usually quite clear, and the basics to get through it are made clear in the classroom. We had separate times for learning maths, and doing exercises.
Have you thought maybe your assumption is based on very very old anecdotal data?
That sounds good and maybe the country you are from does not have this problem (anymore).
The parents I know from my home country are telling me that their kids are often struggling and overwhelmed by the homework assignments. While this is already recognised by the board that created the curriculum, the changes are slow and half-hearted.
In my opinion, however, that is not even the problem, nor is it the problem that they are trying to solve. We usually forget that teaching is not "transfer" of knowledge. Each person actually recreates the knowledge in their own head and tries to fit it in with the rest of the world they know.
I believe that the best solution to education is to be able to personalize the new content in a way that naturally extends the student's knowledge. As already stated this may mean backtracking a little to be able to put the new knowledge onto a good foundation.
Normally, teachers cannot do that for each student individually so getting a program that could do that customization would be a great win.
I would be delighted if my product turned out to be unnecessary and I am not being sarcastic.
Science homework should take little more time than it takes to copy it from an A student. It should not take hours, which it often does. I am looking through the lens of disadvantaged students.
I learned basic algebra from something like this in book form. Kind of like a "choose your own adventure", but it would pose problems, and either skip ahead on the correct answer, or lead to clarifying text for common mistakes.
(no longer remember the series title, but do still remember the penny finally dropping on "x is an unknown")
I certainly agree that so much about school is problematic and could be improved, but ...
> Schools are functionally no different from part-time prisons.
A prison by definition cannot be part-time ; ). Plus, it's a totally unfair comparison. Schools are intended for everyone. Prisons are intended for a specific subset of people deemed to have broken a law.
I’m working on a new way to talk online, with the goal of killing cancel culture, increasing understanding, and basically calming down current radicalization. Picture Reddit, but with the ability to anonymously share ideas with other people in your social circles.
My theory is that most reasonable people stay off social media, so places like Twitter end up filled with unreasonable narcissists. At the same time, discussing politics on a semi-anonymous forum like Reddit is pointless; who cares if someone on the Internet is wrong? But maybe there's a better way of communicating, something new, that lets you talk with people you actually know.
This is a great idea. I would love to use a service like this.
One of the downsides of anonymity that many people raise (fallaciously) as an argument against it is that too many of the anonymous people are just trolls, therefore it is a negative quality. Having it be people that are known to the person brings about an assumption of character that can solve this issue, so long as the user associates with people that they tolerate (which is a mostly fair assumption, considering how private groups and similar functions operate on the established platforms).
One problem that I could see is that someone could be found out simply by virtue of being the only person that would say something like them. I guess this could be alleviated by filtering in public posts from others, but this could cause other problems while not solving what it's supposed to.
The other issue for this is the Gab problem, which cannot be easily solved: that if you're using this service, you're a bad person who needs to be blocked/fired/etc. Unfortunately the mobs you're working against work to destroy with little rhyme and no reason, and they work for free (often literally unemployed). This problem is hard to get around, but I think could be mitigated with marketing it as a platform first and a solution second.
Nonetheless, I wish you the best. This is a very interesting concept and could really do a lot of good in the world.
Thank you. I think it’s a really important problem and honestly wish it was already built so I could just use it.
The Gab problem is core. Reddit bans a bunch of assholes so they go off and find alternative platforms. Free speech is great as a principle, but in practice nobody else wants to hang out with these people.
My idea is that a new model of interaction could get around the problem. I’d want this to be a link people would feel comfortable sharing to their LinkedIn network. Using the site doesn’t mean you were banned from other platforms, just that you think this way of communication is better.
It's a chamber that you decide vs one the group decides.
If you want to only surround yourself with people that are exactly like you that's up to you. If you want to be an intelligent thinker and surround yourself with believable truthful people who expand your worldview that's also possible.
It puts the power back in your hands instead of the site operator or group consensus.
One of the reasons people use social media is to get a feel for the general climate of opinion, even if we disagree with it. We want to know what "people are saying" and understand the rapidly-changing boundaries of what is socially acceptable. That's why I follow people on Twitter and Facebook that I don't trust at all. I would not want to use a decentralized trust system to consign myself to an echo chamber, even if it is full of believable, truthful people.
But the system TimJRobinson describes is flexible: you don't have to simply filter out the less trustworthy posts. You could simply flag them somehow as low-trust.
Right now the major social sites are designed to amplify the voices of users that produce content that drives engagement, even though the most engaging content tends to be offensive or inaccurate. That's why the people I least trust on Facebook always show up on the top of my feed: I sometimes engage with them by telling them I think their posts are inaccurate.
So I can see your system being used not just as a filter for what the users sees in their feed, but as a feedback mechanism for people's posts.
I think a more responsible social site would optimize for positive outcomes, and not just engagement: employing algorithms and techniques to optimize for accuracy, quality, and civility. I think decentralized moderation could be one of those techniques.
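A minimal sketch of that flag-rather-than-filter idea: each reader keeps their own trust scores, and posts are ranked and labelled per reader instead of being removed globally. The names, threshold, and scoring are all assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    engagement: int  # likes/replies, the signal sites tend to over-optimize for

def rank_feed(posts, trust, default_trust=0.5, flag_below=0.3):
    """Order posts by the reader's trust in the author, then engagement.
    Low-trust posts stay visible but are flagged, not hidden."""
    def score(p):
        return (trust.get(p.author, default_trust), p.engagement)
    ranked = sorted(posts, key=score, reverse=True)
    return [(p, "low-trust" if trust.get(p.author, default_trust) < flag_below else "ok")
            for p in ranked]

my_trust = {"alice": 0.9, "mallory": 0.1}
feed = [Post("mallory", "outrage bait", 500),
        Post("alice", "thoughtful take", 20)]
ranked = rank_feed(feed, my_trust)
assert ranked[0][0].author == "alice"
assert ranked[1][1] == "low-trust"
```

Because trust is reader-specific, the same post can rank first for one user and be flagged for another; no central operator decides what disappears.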
The problem we seem to be seeing just now is that lots of echo chambers have formed without input from "the other side of the argument" on Facebook, Twitter, or whatever social media you choose.
What you are suggesting is rather than group level echo chambers which will validate maybe 80% of your views and go against maybe 20%, you are now going to have a fully customized echo chamber that echos your views 100%. I think that will make things worse.
Personally I try to get views from both sides, but I think I am in a minority for doing this.
> too many of the anonymous people are just trolls,
I have a different take on this: trolling becomes the optimal farming method when the reward mechanism isn't working. When being a squeaky wheel is the best way to get the oil, people try to be the worst-shaped wheels. The fittest survive.
I really like this. At the start of this podcast they talk about the dark forest theory of the Internet. It's that when you go into a dark forest at night you don't see many animals, because they know that if they move or make noise they become prey. Likewise, many intelligent people stay silent on Twitter and only speak up in private circles, for similar reasons.
Thanks for the thoughtful and interesting responses, looking forward to listening to the podcast. I loved the Cixin trilogy, but hadn’t heard of the dark forest theory applied to online conversations before. I’m not sure I agree with such a strong suggestion, maybe more that the people that are most verbal in a group are rarely the people with the most interesting thoughts. It is sadly becoming increasingly true though that anything you say is now a permanent liability.
I have a theory that, individually, people fundamentally disagree less than they think they do. And in real face to face meetings, they are willing to disagree about a topic without killing each other. They talk around points of disagreement. Eg: me, when I'm talking to my grandmother.
The problem is that online people tend to polarize into factions during complex discussions. The more heated the discussion, the more polarized they become. Eventually it becomes impossible for either side to be self critical, or to cede a point to the other side no matter how true it may be.
"B" says "the sea is made of saltwater". A moderately reasonable "A" agrees. Mistake! The "A" group accuse them of being pro-B or anti-A. Suddenly, the idea that the sea is saltwater becomes a "B dogwhistle". A cunning trick! At this point, the "A" group adds 'saltwater sea' to their list of unacceptable opinions, and the policing of everyday language and opinion is in place. That's when the notion of defending free speech gets questioned. How can you police language if people defend free speech?! It won't do! So now free speech is a dogwhistle too. Add it to the list! There is no possibility for constructive dialogue, no matter how sensible, kind and cool-headed it tries to be. Disagreement with dogma or talking to the opposition are offences. New rules: "Don't follow an A on Twitter". Guilt by association. Next up: let's use the abuse/reporting system to ban individuals. Eventually, online systems that were supposedly designed for communication and conversation have become tools for suppression and virtue monitoring.
While anonymity itself will help people to own up to an opinion that they might be otherwise afraid to voice, unless there are no usernames at all it will be possible to track an individual across the system and figure out "which side they're on". Then you'll be able to figure out if their comments are acceptable or hatespeech without having to properly consider them.
Don't get me wrong, I'd love to see this happen, but I think it requires a lot of work. Deduplication of points made, automatic detection of logical fallacies via natural language processing, personal dictionaries with auto-translation of terms (eg: an acceptable word to you might be a slur to someone else; if the word has a genuine meaning, auto-translate it to a non-slur). That would avoid having people point and scream and claim moral victory because of a word infraction, instead of hashing things out properly.
Another theory is that people disagree more than they think they do, but their actual actions are not as disagreeable as their opinions. So, two people will often hold opinions that the other considers abhorrent but will be able to get along in practice (as long as they don't learn of each others' opinions) because their actual actions will usually be acceptable to each other.
> I don't get involved because these platforms simply encourage the worst in us.
In-person, I've found I can meet and talk with folks who believe and say most anything. In fact, I enjoy it. As long as we're trying to get along, we can have powerful and dynamic conversations. I can learn things and we can challenge one another.
Online, there's none of this "trying to get along". Some person types something wrong. Other people pile on. Pointless argument ensues.
People agree on most things! That's the crazy thing about it. But if we only interact online through text, we're raising an entire generation that doesn't grok that fact.
In your opinion, what is it that distinguishes your grandma from people in online discussions? I've been thinking about it a lot lately - that the facelessness of it makes it more difficult to employ empathy? That there are too many people online and connections with them are too ephemeral to form emotions necessary for kind discussions? That the anonymity makes some people lose inhibitions? What else?
I suppose you have to get along with grandma. You have a shared family group. Imagine talking to your mom and saying "I've cancelled grandma because she thinks that Napoleon was no better than Genghis Khan, so no more family meet-ups. If you don't cut ties with her, you're blocked and cancelled too."
There's also the fact that you've known grandma your whole life, and you think that she's 90% adorable, 10% a product of her generation. So you're willing to change the subject, or maybe just roll your eyes, when it comes to Napoleon.
Pretty sure we don’t disagree on truth in a vacuum type information, but what axiom is most relevant to start from in a given moment in time.
I blame old social habits around work for forcing us to continue to huddle into “flocks”, aka companies, to collectively funnel our output upward for validation.
Our biology then has to constantly contend with validation-seeking. We minimize allowable results to narrow what counts as valid output. It’s economical, they say; back in the day, they would have said this narrow lane of validity is most pious and likely to get you the reward of Heaven.
“They”, of course, are elected officials whom we can watch flout the very social rules they tell us to teach each other.
Given the literal mess we’re making it certainly does not appear to have “economy” in mind.
Thanks. Thinking about that problem too, and trying to find a way to give people both. “The personal is political” and one hypothesis is that at least a starting point for content you’d find more interesting is content that people you know find more interesting.
I've come to feel that a crucial feature missing from online discourse is the influence of third parties to the conversation.
If you have a conversation in meatspace, sufficiently nearby third parties send signals, because they cannot help but hear you (setting aside headphones or the like). Without entering the conversation, they can communicate by means like a change in stance, a muffled laugh, a roll of the eyes. If those signals are noticed, they provide feedback on the words exchanged and moderate the conversation when it grows heated or extreme. Social media doesn't well allow for these signals: the input interface doesn't allow for them without conscious effort to translate.
In the absence of those signals, any outrageous speaker on the internet can mistake the silence of our lossy input interface for silent approval. If they were wrong, someone would say something, surely? But no, there's just too much effort required to tell every damned fool they're an idiot. Even when one person speaks up, there's no apparent audience to serve as jury, and the fool can go on believing themself in the right.
In short: I think "audience feedback" is a necessity.
In effect: I think the concept of a "timeline", as a presentation of user-generated content, is socially broken. Every posting must be weighed, none should be allowed silence.
Up/downvotes on Reddit/HN/etc. are, I think, supposed to be this. Even if Reddit users stuck to the guidance that the downvote button is to hide disruption rather than to disagree, saying nutty stuff "loudly" would count as disruption.
The flaw in HN/reddit votes, Twitter faves, FB likes, etc is that not all persons who see a submission will use them. Every silent observer contributes to that illusion of assent that helps entrench deviant behavior.
Timelines are great tools for skimming content, but while we can aggregate content easily enough ("scan the room"), we have no tech solution for returning the social feedback that meatspace society relies on ("the room fell silent"). You could force that feedback by turning the timeline into merely a historical lookup, and delivering new posts to the user individually and not letting them return to the rest of the app without interacting with it in some way. Also, block unregistered users entirely, unless they can be locked into the same use pattern by some means that escapes me at the moment.
(If we split off into sci-fi dystopia land, measure the user's emotional state as they scan each post and accumulate those scores back to the author. This is terrible, please no one ever implement it.)
I think that even if it did attract those who would be banned anywhere else, it would not necessarily mean it is not working - depends how you define "working". I see the problem with cancel culture in suppressing legitimate but unpopular opinions and polarizing society by promoting echo-chamberization. I would love to have a place where opinions would be discussed and voted down not because people do not like them but because they can find arguments against them in rigorous debate. But how to set up the system in such a way that it would distinguish and push to the most visible place the best reasoned opinions and not the ones most liked, that is the hard problem.
Opinions like what? The only opinions people get banned for are ones which seem openly racist, homophobic, etc. Nobody gets banned for having or encouraging alternative opinions about, like, which Java framework is the best. Not even for more controversial arguments like whether God exists (at least as far as I’m aware.) Therefore the only ‘rational debate’ that will take place is between what most people would call racists, homophobes, etc. Nobody else is going to want to post in or even look at such an environment, so it rapidly turns into a completely worthless bigoted echo chamber where everyone’s patting themselves on the back for having such enlightened alternative opinions.
"There are two genders" would be one - a position that I guess the majority of the public also believe. That's grounds for banning in plenty of places. We aren't allowed to discuss it on many platforms. Of course, I will be labelled transphobic for that opinion.
> Nobody else is going to want to post in or even look at such an environment, so it rapidly turns into a completely worthless bigoted echo chamber where everyone’s patting themselves on the back for having such enlightened alternative opinions.
Pure speculation on your part. How does it end up any more of an echo chamber than somewhere where certain opinions are removed?
So, let’s assume that everyone who got banned for saying there are two genders (and presumably strongly believes it, because they cared enough to be banned for it) is attracted to this platform. People who disagree won’t be attracted to it; they can already post their opinion on, say, Twitter without being banned, and they hold the opposite opinion anyway. Now Twitter is an echo chamber of one side of the debate, and this new platform is an echo chamber of the other side. No debate can happen on Twitter; debate can IN THEORY happen on new platform, but won’t, because of the huge imbalance in participants (imagine an in person debate where one side is allowed to bring a hundred people who will all argue with the single guy on the other side). What have we gained by that? There’s still no debate happening. The banned people might as well have started blogs instead.
The difference being that you can actually post / see both sides of the debate on one platform. And that isn't the one with censorship policies. You seem to be in agreement that the lack of moderation actually gets us closer to the form of debate that we are after.
I don’t really agree. Like I said, I don’t think you will ever get both sides of an argument there; one side will force the other out eventually. I’d rather have a proliferation of communities which are ‘censored’. Sure, have a forum for racists where non-racists get banned, whatever (although I won’t shed any tears if it gets closed down). If I want to see what racists think, I can go check it out. But I don’t want to be part of a space where I’ll regularly encounter racist stuff, and they don’t want to be part of a space where they’ll regularly encounter anti-racist stuff. Mashing me and the racists together in an anonymous arena where anything goes obviously won’t result in us all nicely debating and getting to know each other. It is not a step in the right direction.
It’s not exactly censorship when you can start your own blog or platform with the exact opposite form of ‘censorship’, is it? Give me a break - someone saying “you can’t post things I think are racist on my website” is not a violation of your human rights, any more than the host of a party saying “you need to leave after what you just said” is dire political censorship akin to burning books.
Bigots of all stripes like to use the goodwill of others and their belief in freedom to say horrible, bigoted things and demand everyone listen to them. But we don’t actually have to. That’s not what freedom of speech means.
You're not going to "calm down radicalization" with phrases like "killing cancel culture" and by dismissing viewpoints you don't agree with on Twitter (or the platform as a whole) as "unreasonable narcissists".
“Killing” was a poorly chosen term. I think cancel culture is very dangerous in the way that it discourages reasonable discussion by picking on people whose opinions are deemed to be out of line and punishing them with mob justice. Maybe this interaction illustrated my point though. If I was talking to you in a pub and you said the same thing I’d respond in the same way. But if I’d posted about my great idea on twitter and you responded like this I can picture my ego urging me to say something more defensive and inflammatory.
I don’t mean perspectives I disagree with are narcissistic; I was honestly making an observation based on my limited understanding of psychology. Some personality types like to hear themselves talk more than others, and I think it’s clear that they are more attracted to Twitter.
For me, it's not about the word choice of "killing"; it's the seeming implication that "current radicalization" is caused by lack of "understanding" and then leads to "cancel culture" as the true problem. Your mission statement sounds kind of like "I want to win the argument with my current set of beliefs" rather than "I hope through honest and open minded discussion we can discover our own blind spots and reach common ground to move forward" or whatever.
Look at the supportive and critical comments you received; it looks split down the line based on political leaning. It may be worth considering that if you want to avoid creating just another echo chamber.
It’s definitely been interesting hearing some feedback. There’s a lot of passion about the topic.
The problem I’m interested in has nothing to do with winning arguments; it’s more that so many people are quiet, not saying anything, because they aren’t interested in the current discussion options and the huge negative downsides that come with them.
I will reflect on all the feedback, especially the criticism. I don’t agree that I can divide it by political leaning though, and I certainly hope it’s not true. Definitely agree that the last thing the world needs is another echo chamber.
To add to this train of thought - the phrase 'cancel culture' has become politicised, hence the perceived split. I agree that we're seeing a wave of puritanism and witch-hunt behaviour online, but much of it is springing from admirable causes like climate change and minority rights. I think the risk of using the phrase is that it discounts the cause.
So that brings up a sticking point in creating a utopic online space, language itself means different things to different people. It's worth thinking about in your project - how do you use a light touch but prevent people from filling the space with newspeak or in-group language?
No one mentioned opposing viewpoints except you, and only to downplay real-world harassment as a mere difference of opinion. The radicalization occurring on Twitter is encouraging 'adults' to gang up on others (often children) and try to get their lives permanently ruined over ACTUAL differences of opinion, or for vastly disproportionate acts. So yes, we should use the word 'kill' when describing a force that destroys reasonable, well-meaning, and good people's lives every single day.
The "unreasonable" part is the disproportionate and permanent effect of internet hatred in response to comparatively non-permanent acts (acts that are still often harmful, but more often than not nowhere near to the same extent). The "narcissist" part is the need to pile on for social points, by people who themselves have the online support to keep the basic needs in life that they want to deprive others of. So yes, these people meet the definition on both counts.
It doesn't mean that those who are bombarded don't deserve to be reprimanded, just that they probably don't deserve to be bombarded with threats and harassment. You can express your opinion on Twitter without personal attacks and threats of harassment and violence. That part should not be a fringe opinion.
I'd like to maybe add onto that with a personal anecdote regarding twitter:
I've never really used it, though I created an account about a decade ago. The only people I had followed when I first created the account were a few random celebrities and public figures. I pretty much never logged in despite many emails from Twitter reminding me of my account. A few months ago I opened it up to look, and I was really shocked and appalled at what I saw. My reaction was quite visceral. The best way I can describe the content is just "hate": not the general term hate-speech that people throw around, but simply people tweeting hateful things and exhibiting what appeared to me to be their hatred of something or someone.
Sure in between all of that there were a few wholesome tweets of course. But for the most part, it was just hate. I didn't really stick around too much other than to maybe add some additional public personas I do follow (I guess in support of them). The whole experience left me with the impression that Twitter and perhaps social-media entirely, are really bad/toxic/divisive to our society, at least in their current form.
I think Twitter is especially bad in comparison to other sites because it incentivizes people to post their real names, photos of themselves, etc. Even worse for this issue is the ability to retweet, and the trending page of Twitter. Instagram for comparison doesn't have this problem to the same extent because it's about sharing photos and not about following every new trend. This design gives those that live their lives as online reactionaries a 24/7 outrage factory. The trending page is a constant line of pitchforks and threads to bring them to. Regardless of the benefits of the service I think it's hard to deny how the site's design contributes to the problem, at least to some degree.
> You can express your opinion on Twitter without personal attacks and threats of harassment and violence. This should not be a fringe opinion.
I can. I expect that most people can as well. I don't however agree that any opinion should be expressed without consequence. If someone says "we should kill the Jews and reinstate the 3rd Reich" or "the place of blacks is subjugated below whites", that reveals how they see others, and making sure that their employer is acting based on complete information seems perfectly reasonable.
> The "narcissist" part is the need to do so for beneficial social points among those that do
Why are you ascribing nefarious motives to actions? Why is it clear that these people are acting in bad faith?
I totally agree with the extreme example of nazism or something like that, or most less extreme examples of prejudicial behavior, especially when that person is managing or just in any way talking to others. The problem is that it isn't so cut and dry. Too many of these people either aren't actually doing anything wrong (Some truck driver with his hand out the window) or have done something that is comparatively minimal to what they received (Justine Sacco, for instance). It's not that these people shouldn't be reprimanded, just that it shouldn't be the default for it to happen in this public, permanent, and very often dangerous way.
The narcissism is real, if you're not closing your eyes to it. It's especially obvious when these people doing it have done what they're complaining about themselves. It gets back to the main idea, that you don't have to harass people to get your point across. Even the most egregious reaction, trying to take someone's employment (and most often the ability to support themselves and their dependents) can be done in a way that isn't public and permanent.
Can you give an example of incorrect permanent punishment? Sacco found a job almost immediately. The truck driver may have a more difficult time, but not because this event will follow him, but because the labor market at the moment is shitty. (And to be clear here I'm not saying what happened to him was right or just).
> The problem is that it isn't so cut and dry.
Sure but now we're in a very fuzzy area. We're no longer saying that public shaming is always wrong, but that there are situations where, in your judgement, the scope is misapplied. Those are two very different situations, especially if you're willing to acknowledge that your perception of the severity of some action may indeed be different than the actual effect of that action.
> The narcissism is real, if you're not closing your eyes to it.
I still don't buy this. People not being self aware is narcissism, but not in the way that you seem to be meaning, which is more like that the activism is performative and not genuine.
Not to say that there aren't people who are performative. Lots of social media activism is, but in many ways so is stuff like signing petitions, and that doesn't get a bad rap.
So that story shows the potential for long term damage (his store hasn't closed yet), but I agree that if it does, it would be the best example I've yet seen.
However even if we assume the worst outcome in that situation, the negative impacts of cancel culture are tame compared to a lot of other systems. If we're calling for an end to cancel culture due to the one case of permanent damage, why aren't we calling for an end of the US justice system which, on a daily basis, causes far more permanent and far more cases of damage?
And this is sort of whataboutism, but lots of the recent concern about cancel culture, at least that I've seen, is from mostly upper class, mostly non-black and latino, mostly well educated people. Their concern has been that they'll be cancelled if they don't support recent protests enough or in the right way.
So we have two systems of justice, one that unjustly kills innocent people on the daily, and one that might end up closing down a single restaurant whose owner was innocent. Why are we focusing our energy on dismantling the second system over the first?
There are a few reasons I’m particularly concerned about cancel culture. One is that there is a mechanism for me to change laws I disagree with. There’s lots of things I hate about the justice system that I believe are being worked on. But fundamentally, I accept I was born into a particular society, and I’ve implicitly agreed that I need to agree to certain rules.
Cancel culture is mob justice. There’s no mechanism to change it, and it’s totally irrational. In the example, blaming one person for the actions of a family member goes totally against the philosophies I believe in.
Finally, I don’t think we can trivialize the impact of tossing the idea of free speech out the window. Human history is full of particularly nasty examples of what can happen if everybody feels forced to obey a mob.
> One is that there is a mechanism for me to change laws I disagree with.
There are also mechanisms to address culture you disagree with (and you're exercising them!). The question was not why do you find cancel culture distasteful, nor was it even why do you personally find cancel culture potentially worse than unjust policing (which for now let's just agree to disagree on), but why it is that you are prioritizing the push back against cancellation over the pushback against unjust policing.
At this current moment, it is, I think, clear which unjust system causes more harm. It is the criminal justice institution. That cancel culture could grow worse is feasible, but it has not yet. People aren't routinely killed at the hands of twitter complaints.
> Finally, I don’t think we can trivialize the impact of tossing the idea of free speech out the window.
Here we disagree on premise: cancel culture is the result of people who previously did not feel empowered to speak freely taking advantage of a system that raises their voices more prominently. That it is extrajudicial is a failure of the institutional justice system, which continues to systemically fail underserved communities (women, minorities, poor people). Sharing controversial opinions has never been without risk, that's why they're "controversial". There have been privileged sects of society for whom the risk of holding controversial, and even outright despicable, opinions was low. I don't think more equality in that regard is a negative thing.
> In the example, blaming one person for the actions of a family member goes totally against the philosophies I believe in.
Do you believe the situation would be different if the employer had been unrelated to the girl who posted bad things? While the exact numbers might be different, I don't see the overall picture being that impacted by him being her father. And "punishing" an employer for the actions of an employee, while still perhaps fraught in some cases, is much less worrisome to me than targeting family.
I’m rooting for you. Here’s a thought about a possible trade off between radicalization and engagement. When social media platforms optimize for growth, it makes sense for them to make it as easy as possible for users to share/retweet. It lowers “amplification friction” and allows messages to go viral. The most successful platforms have very low amplification friction, which suggests that low friction is an important ingredient.
What we are learning is that making it trivially easy to amplify anyone’s message enables cancel culture and (I believe) leads to radicalization.
If this is correct, then increasing amplification friction on your platform will lead to less radicalization, at the cost of lowering engagement. My guess is that for this to be successful requires a careful balance of where you land on the higher/lower friction spectrum. Too much friction leads to low engagement which leads to failure. Too little friction leads to uncontrolled amplification which leads to radicalization. So a balance is needed.
Either that, or a totally new idea is needed that turns existing platforms on their head.
Thanks. I think another key part is that social networks get huge. They get thousands of employees, billions in investment, etc. This makes them fragile, and forces them down certain paths. If I stay small then I’m not handicapped by forced expectation of user engagement or growth.
I agree with you about the danger of low friction amplification of messages. My hope is that if users can amplify messages but they get no social credit for it, it will dampen down this behavior.
The general issue with localized anonymity is it becomes a bulletin board rather than a discussion. In time this draws in drug dealers, prostitution, horny teenagers, and various sexual deviants (see also: whisper, craigslist and backpage forums). Gotta find a way to keep engagement up with the reduced volume of users.
4chan is one idea about how people can talk online with total strangers. The conversations my friends and I have at the pub are very different from 4chan, but also very different from Facebook. Recording everything you say so it can be played back years later in a job interview changes things, and I would say not for the better. But our conversations also don’t sound at all like 4chan threads, where it seems like “ironic” idiots talking to idiot idiots.
This was a problem that I briefly worked on in undergrad. Turns out authorship attribution is an active subfield in computational linguistics. So you’re not just trying to obscure identity from human readers, but from the neural nets looking through all your public writing as well.
For the interested, some paper keywords would be adversarial stylometry and authorship attribution.
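To make the attribution threat concrete, here is a minimal toy sketch of the classic approach (not from any specific paper): build a character n-gram profile of each author's known writing and attribute an unknown text to the author with the most similar profile. Real systems use far richer features and learned models; the author names and texts below are invented for illustration.

```python
from collections import Counter
import math

def char_ngrams(text, n=3):
    """Character n-gram counts, a standard stylometric feature."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def attribute(unknown, candidates):
    """Return the candidate author whose known writing is most
    similar to the unknown text under the n-gram profile."""
    profile = char_ngrams(unknown)
    return max(candidates, key=lambda name: cosine(profile, char_ngrams(candidates[name])))

# Toy corpus: two hypothetical "authors" with distinct habits.
corpus = {
    "alice": "I reckon the weather shall turn; I reckon we ought to go.",
    "bob": "lol yeah thats gonna be wild, cant wait tbh",
}
print(attribute("i reckon it shall rain, we ought to stay", corpus))
```

Adversarial stylometry is then the inverse problem: rewriting text so that this kind of profile stops matching you.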
Or multiple translations. This is a great idea that I’ve considered but want to leave for V2. V1 is maybe disappointingly simpler, but at least it would mean that your boss 10 years from now isn’t reading through your post history.
I don't know if there's a solution but in my mind the problem is 'likes' or 'retweets'. Most networks create an incentive to angle only for those at all costs (how else does one become an "influencer"?)--whether it be saying something ridiculous or not. This is one thing that snapchat never did that I appreciate. It's really unhealthy for people because it's essentially like a social credit score (spread across many posts) that anyone new will look at. If you can build a social network that avoids that I think people would be much happier. Also there are lot of people that still want to share content but not participate in that disingenuous ecosystem (or who don't have time) so building a platform for them might be useful.
I love this idea. I have been working on something similar and would love to discuss this with you some more.
Anonymity is important but tricky. Many researchers believe that anonymity can be key to countering groupthink, where people say what they think is socially acceptable, and not what they actually believe, for fear of denunciation and retaliation (some great books on this subject: Elizabeth Noelle-Neumann "the Spiral of Silence" and Timur Kuran "Preference Falsification.") Anonymous polling, for example, is probably critical to the functioning of a democracy.
In social media I think anonymity is more tricky. One idea I have is that posts always have an author, but likes and other indicators of support and popularity are anonymous.
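To sketch what "attributed posts, anonymous likes" could look like as a data model (this is my own illustrative construction, not the commenter's design), a post can carry its author's name while storing likes only as one-way tokens: enough to prevent double-liking, without recording who liked what. All names below are hypothetical.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str                # posts are always attributed
    text: str
    # Anonymous support: we keep opaque tokens, not user ids.
    _like_tokens: set = field(default_factory=set)

    def like(self, user_id: str, salt: bytes = b"per-post-salt"):
        # One-way token dedupes repeat likes without storing the liker.
        # (A per-post random salt would also resist brute-forcing
        # the token back to a user id.)
        token = hashlib.sha256(salt + user_id.encode()).hexdigest()
        self._like_tokens.add(token)

    @property
    def likes(self) -> int:
        return len(self._like_tokens)

p = Post("dave", "hello")
p.like("erin")
p.like("erin")   # duplicate, ignored
p.like("frank")
print(p.author, p.likes)
```

The design point is that popularity signals remain visible while the social graph of who supported whom stays out of the database.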
I liked that Slashdot had different upvotes for informative/insightful/funny, and funny doesn't add to karma.
The forum needs to strongly encourage every comment to have their opinions substantiated, and show rigorous reasoning and inference explicitly where it occurs (and where an opinion is not directly based on evidence but is a reasonable conclusion from evidence). It's not an impossible task, since we have a similar example of a forum with high standards in Stack Overflow and its network.
This means that the platform is primarily one for discussion and developing opinions grounded in evidence and reality, not one for free expression (which reddit primarily is).
I think you're confusing radicalism with extremism or aggressiveness.
* Radicalism - from the Latin "radix" - means focusing on (what you consider to be) the root of an issue, striving for fundamental change of the way things are.
Also - I couldn't agree more about "who cares if someone on the Internet is wrong" - but I think the conclusion is that you care when it's in physical real life. On the other hand, talking to people you don't know is how you get to know new people, so that part might need some more nuance.
My current theory is that you could make Reddit much better if you made splitting and merging subreddits much more fluid, e.g. users could vote to branch a subreddit and would be automatically moved to the new branch.
I have a fully working web app, mobile app, DB and REST API from a project I worked on when lockdown started that I lost interest in scaling, but I’ll make the repos public and edit this comment later with the links if they can help you out with this.
> Picture Reddit, but with the ability to anonymously share ideas with other people in your social circles.
> something new, that lets you talk with people you actually know.
Anonymous Facebook? The problem with that is this:
> so places like Twitter end up filled with unreasonable narcissists
That's also a problem among the people you know and have added on Facebook, many of them are unreasonable narcissists and they don't mind showing that with their name and picture on the web. I feel like giving them anonymity can only make things worse. Just think of your racist uncle, your Nazi grandparent, your homophobic friend, but now you won't know which of them is posting the bullshit.
You have chosen a very difficult problem to solve, I can only tell you why I think your current idea won't work and wish you luck.
You’re not wrong when you say it’s hard. I have been struggling with the problems you’ve brought up for awhile.
Re:anonymous fb, I’m not going for exactly that but let’s imagine I was and consider the problem for a second of the homophobic/racist/etc Uncle. The simplest solution for not getting that content is to remove them from your list of contacts. But I also have a theory that this is where there could be a global improvement in communication. I believe some people are trolls not because they’re anonymous but because they’re not. Their identity is basically a troll - think the guy who wore a MAGA hat to a recent BLM protest. If that person was allowed to, maybe they would want to change their opinion.
> The simplest solution for not getting that content is to remove them from your list of contacts.
People already have the chance of doing that on Facebook (not anonymous) or Twitter (relatively anonymous), there's a reason they don't do it. It's probably something to do with human nature or psychology, this is likely an area that requires more study (or for which I don't have enough knowledge about).
> I believe some people are trolls not because they’re anonymous but because they’re not. Their identity is basically a troll
I agree with this, but considering your example:
> think the guy who wore a MAGA hat to a recent BLM protest.
I wouldn't call those people "trolls"; they are closer to the definition of an antagonist. They are against something beyond reason and logic because it's part of their identity. That's why they love their symbols (the swastika, the Confederate flag): symbols don't require logic or thinking, they only require faith. This brings us to the next issue:
> If that person was allowed to, maybe they would want to change their opinion.
They are unable to listen to opinions or try to change theirs. There's a quote that applies to them:
> You cannot reason people out of something they were not reasoned into.
Online discussions generally don't work because they are not about two people sharing opinions, they are usually about people trying to impose their opposing faiths.
That’s the interesting tech challenge: keeping things anonymous, even if a DB dump leaks, while still allowing moderation and letting users block specific people. I’m still digging into it but believe I have a simple solution. The key is that not everything is anonymous: you know who you’re associated with, you just don’t know the association between a specific user and the content they authored.
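One possible construction for this property (my own sketch, not necessarily the commenter's scheme): derive a per-context pseudonym with an HMAC keyed by a secret held outside the main database. Within one thread a user's handle is stable, so blocking and moderation work; across threads the handles are unlinkable; and a DB dump without the key reveals no user-to-content mapping. The secret name and truncation length are arbitrary choices here.

```python
import hmac
import hashlib

# Hypothetical: kept in a KMS/HSM, never in the content database.
SERVER_SECRET = b"rotate-me-regularly"

def pseudonym(user_id: str, context_id: str) -> str:
    """Per-context pseudonym: stable within one thread (so blocks and
    bans stick), but unlinkable across threads without the secret."""
    msg = f"{user_id}:{context_id}".encode()
    return hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()[:12]

# Same user, same thread -> same handle (blockable)...
print(pseudonym("carol", "thread-1") == pseudonym("carol", "thread-1"))
# ...different thread -> different handle (unlinkable).
print(pseudonym("carol", "thread-1") != pseudonym("carol", "thread-2"))
```

A real deployment would also need to think about key rotation and about the server operator, who can still recompute the mapping; this only protects against leaks of the database alone.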
I love writing. I used to freelance and string. I wrote a lot of speculative sci-fi that didn't go anywhere. I was an early blogger. I was an early HN'er as well. So I'm creating content constantly no matter what the current tech.
I've been watching what works. Random thoughts: every conversation should have a moderator who is responsible for the subject and tone of the conversation. The start of every conversation should include what the goal is: analysis, venting, compassion, meme, etc. Conversation participants should be rated over time based on their ability to adjust their tone based on what the moderator requests. Each person votes up and down various moderators depending on the type of material they present and how well they manage their conversations. Over time the ranking system should match up moderators and participants, but not subject areas (important). There should be some escalating trust level based on this, the ability for people to make mistakes, be forgiven, delete/apologize for bad-day remarks, etc. I think with the right system this would happen organically, though. Not sure.
Note that a popular/well-used system will by necessity not be a hugely-successful commercial one. You might make it work for a bit, but these are separate and conflicting goals.
Personally, I'm dialing back FB and moving all of my content over to locals. I want to own whatever I write and I want the option of putting things I create behind a paywall easily. I also don't want to ever worry about having a bad day and getting my life destroyed by the mob.
There are some similarities. The interesting thing about circles was the recognition that you are many people, and you don’t want to necessarily have political arguments with your soccer friends. So many people seem to have given up on the promise of the internet to connect people. We’ve tried a few different modes of communicating, I still have hope the problem can be cracked.
Existing mediums for note-taking (Evernote, Notion, Roam Research) are not sufficient for doing knowledge work over long periods of time. The functions these incumbents serve are primarily as “stores of knowledge” that we save because it’s interesting in the moment but never read through or “scratchpads” that we use once and never get rid of, which end up cluttering our information space like a junk drawer full of shopping lists and knick-knacks.
Serious thought involves more than just collections and associations: mastery requires repetition, creativity requires serendipitous discovery, and productive output requires flow states. It’s also a matter of acknowledging the fact that “units of knowledge” do not exist on their own: all knowledge is embedded in context (or “deeply intertwingled”, in the words of Ted Nelson), and without context, metaphor, and nuance, we cannot form meaningful connections. By baking these attributes into the medium itself, it’s possible to build an information space that’s simple to explore, can surface information when you need it, can augment the mind’s natural ability to form connections, and can get out of the way the rest of the time.
Couldn't you just say "I'm working on this specific thing for this specific purpose that works this specific way" ? Sigh.
Sorry for the negativity, but this post is the perfect example of the wishy-washy rhetoric that people interested in this domain always use.
For examples of what I'm talking about, browse this tedious litany (roam-whitepaper) and see if you manage to get past the first paragraph. BTW, the price for the "Believer" plan of Roam is $500. Roam itself is advertised with the hashtag "#roamcult". Pretty strong BS signals coming from this one.
You know, the trouble is, you're absolutely right. I _could_ have just said "I'm working on a note-taking app", but nothing about that seems particularly weird or hard.
To address the poetic waxing more directly, it's important to remember that we still don't have very good language to talk about the "primitives of knowledge work", which is why I phrased my statement the way I did. We know it almost certainly doesn't involve the concept of "notes" (at least in the way we think about them right now), and calling them simply "ideas" is altogether too vague.
As far as the #roamcult is concerned, we are overwhelmingly in agreement :)
Like most note-taking app developers, you don't seem to be concerned with the fundamental questions of what notes are and why we take them. In a certain sense everything we create physically or digitally can be considered a note.
Many write-only "scratchpad" notes are useful because they help the mind focus and organize information. They can often be safely archived or discarded.
I very much like the idea of the hierarchy and the content just being the file system.
I’m less excited about the connection to Zettelkasten because that smells of fad.
If there were an app that would make useful sense out of a few tens of gigabytes of stuff kept in a plain folder hierarchy, I’d be a potential customer. In a way Dropbox tries to do this, and it’s better than nothing (for me) but I still wish for some kind of magical portal into my messy attic.
I started working on the app way before Zettelkasten became a thing. My goal is to not force a specific workflow. If you would like to use it as Zettelkasten, it'll work. If you prefer a different system, you can have the app adapt to your needs. It's all just files & folders underneath, only the representation varies.
I'm curious about your folder hierarchy. What kind of files do you keep there to get to tens of gigabytes and what type of knowledge would you like to extract/what type of connections to form?
I'm currently at about 100 MB for roughly 1,400 individual notes (text and images).
Text, images, videos up to and including the occasional rare film, lots of PDFs (usually small), sometimes a PowerPoint or an Excel spreadsheet, used to also do source code and builds and stuff before GitHub... and if I had faith in them being more useful, as opposed to just backed up, I'd probably dump a whole bunch more in my semi-organized DropBox folder.
So for my use-case the Organizer App would have to be able to deal with lots and lots of data of various types, occasionally including multi-gig files like the rough cut of a film, and also know when I "mv this ~/Organizer/that/" -- a tall order, I know. But with storage so cheap these days, I still dream of that.
Re: semi-organized DropBox folder: this came up here recently and might be of help to organize your files: https://johnnydecimal.com
The app I'm working on is more focused on capturing knowledge as structured notes. It sounds like a good full-text search could already be of help in your case, maybe combined with slowly categorizing your content with a system like Johnny.Decimal or something similar.
You've very eloquently put into words my disappointing experiences with that. I hope that someone who understands the problem so well that they can state it so unambiguously will be able to deliver a novel solution.
All of these work more like scratchpads and, well, note books. That's fine if I want to put information down for later retrieval and even for low key exploration of past thought. But that's just the digitization of pre-21st century information management. What is now sought after is the next, previously impossible step.
What I envision is more of a knowledge space navigator that lives from connections and links (Wikipedia!) but allows a personal state of information and annotation/augmentation on top of it.
I want something that digitally represents my state of thought while augmenting it with a clean version of the world's knowledge and related information, if that makes sense.
Imagine writing a crypto algorithm - a browser with 30 tabs open and a note app plus my IDE feel like a clumsy and ineffective way to truly put my mind and thoughts into the context of work. It simply feels like doing digitally what people did in the 1960s with pen and paper, encyclopedias and books on their desk.
However, the internet and our hardware today feel like they allow us to add at least one more dimension. For many, it's a hunch and a revolutionary product waiting to happen.
I worked with someone in 2013 who I'm still convinced solved this problem. But he was an absolute terror to work with. A petty authoritarian who would look over every website I'd go to and have screen monitoring and key logging software.
I kid you not. I left after 6 weeks. I snuck out the work with me (I wasn't "allowed" to bring it home.) If you're interested I'd be more than happy to share the code.
I think it was really revolutionary.
Essentially it used grammatical structures to arrange text in a navigable 2d space which he needed because he was a highly visual and spatial learner.
But what it allowed for is a nonlinear and a nonsequential arrangement of ideas using Wikipedia text, not just some simple mindmap stuff.
I used Wikipedia as an example in a prototype engine I made.
Articles flowed into each other through a continuous navigable space. You could interact and engage to go cognitively deeper and expand a new path, as opposed to working through a series of documents. Wikipedia became one continuous thing that you could endlessly navigate through in a 2d space.
The content wasn't large blocks of text but broken up using a separate visual language so that there'd only be a few words then a relation to another group and so on. This kept the concepts spatially relative and made the distinction of pages disappear.
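If it helps picture it, here's a toy sketch (in Python, and emphatically not his code, just my guess at the mechanics) of what "a few words, then a relation to another group" might mean: break text into short word groups and give each an (x, y) position instead of one linear column.

```python
# A toy sketch of the idea as I understand it (not the original system):
# break text into small phrase groups and assign each a 2D position,
# so related groups sit near each other instead of in one long column.

def layout_phrases(text, words_per_group=4, groups_per_row=3, spacing=(200, 80)):
    """Split text into small word groups and give each an (x, y) position."""
    words = text.split()
    groups = [
        " ".join(words[i:i + words_per_group])
        for i in range(0, len(words), words_per_group)
    ]
    dx, dy = spacing
    placed = []
    for idx, group in enumerate(groups):
        x = (idx % groups_per_row) * dx
        y = (idx // groups_per_row) * dy
        placed.append({"text": group, "x": x, "y": y})
    return placed

nodes = layout_phrases(
    "Articles flowed into each other through a continuous navigable space"
)
for node in nodes:
    print(node)
```

The real system used grammatical structure rather than fixed-size groups, but the output shape (small labeled nodes in a navigable plane) is the same idea.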
It did it all automatically. Really amazing stuff. I also worked with Ted Nelson; this guy's methods were better. No question.
He developed the techniques over about 20 years manually and had transferred textbooks to rolls of butcher paper that he kept in cabinets. He totally didn't understand the value of his process as something as transformative as Vannevar Bush's As We May Think.
Instead he wanted to make it a proprietary format with proprietary content under a private publishing company for childhood education. He wanted a kludgy editor to make new content with and then a kludgy viewer for single topic things. He wanted to dictate the interface, keystrokes...
Because once again he's an authoritarian pedant. Bah, he didn't see what he made.
Ideas need to be controlled at the right level of abstraction and liberated at the others. That's what Linus knows that RMS doesn't. That's what TBL knows that Nelson doesn't. That's what Jobs knew but Apple doesn't.
I wanted to run with the idea but yeah, 7 years ago and I've done nothing. I got everything still.
I should stop everything and finish it. It's really something radically different. I think it will change at least the way I personally learn things.
The campaign to convince others, yeah well, no guarantees there.
Replacing HTML/JS/CSS with a language called ALFI. It is stupidly simple in its design but still very powerful. Similarly to HTML, you use it to create widgets, place them, and define their behavior. It is human-readable like HTML but line-based instead of markup-based. Instead of nesting it uses references. This allows it to be streamed.
A big difference is that the language itself doesn't allow styling (like CSS), the downside being you get less flexibility but the upside being it will render correctly on any display with any resolution.
For this I have also written a new type of web browser called NAVI which takes ALFI code and produces (somewhat) beautiful widgets and renders them using OpenGL.
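To give a feel for what "line-based with references instead of nesting" can mean, here's a rough illustrative sketch. This is not ALFI's actual syntax, just a made-up format in the same spirit, where each line declares a widget and an `in=` attribute links it to a parent by reference, so the document can be parsed (and streamed) one line at a time:

```python
# Hypothetical sketch, NOT real ALFI syntax: a line-based widget format
# where nesting is replaced by parent references. Each line stands alone,
# so the parser can consume a stream one line at a time. (Attribute
# values here can't contain spaces -- it's only a sketch.)

SAMPLE = """\
window main
text greeting in=main content=Hello
button ok in=main label=OK
"""

def parse(source):
    """Parse one widget per line; 'in=<id>' links it to a parent by reference."""
    widgets = {}
    for line in source.strip().splitlines():
        kind, wid, *attrs = line.split()
        widget = {"kind": kind, "parent": None}
        for attr in attrs:
            key, _, value = attr.partition("=")
            if key == "in":
                widget["parent"] = value
            else:
                widget[key] = value
        widgets[wid] = widget
    return widgets

tree = parse(SAMPLE)
print(tree["ok"])   # {'kind': 'button', 'parent': 'main', 'label': 'OK'}
```

Because there's no nesting, a renderer like NAVI could start laying out widgets before the document has finished arriving.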
>A big difference is that the language itself doesn't allow styling (like CSS), the downside being you get less flexibility but the upside being it will render correctly on any display with any resolution.
HTML without styling will render correctly on any display with any resolution. The facts of the history of the web tell us that people want custom styling, though, and businesses want it even more, because marketing says so. Your widgets need styling for each device they're rendered on, in which case you're back to the exact original problem as HTML and CSS. All you've done is move the problem to someplace else.
Frankly, I don't see why this isn't a markdown extension, since that seems much better suited to solving your base problem and is WAY more readable than the mess you have currently (which only seems readable to someone versed in high-level programming, either functional or OO).
I have moved the problem of styling to the browsers, like NAVI in my case. The browser is the only part that understands the limitations of the device and knows how to best render each widget, so it makes sense for it to handle all the styling instead of having styling intertwined with the data itself like HTML does. CSS doesn't solve the problem of mixed data and styling either, it just hides it better.
The lack of styling, I do understand, can make it feel as if your site lacks personality when all sites look kinda the same, so I see your point here. I might later allow some minor theming, like say allowing you to select a color scheme, with the understanding that this might be ignored by the user. I will have to think about this more. What you will never get to decide are things like margins and paddings and so on.
I don't think you can compare Markdown with ALFI like that exactly. Markdown will generate HTML in the end, and it is the resulting HTML that you need to compare with, because that is what the browser understands. Also, Markdown only solves the trivial cases, and that is why it can be kept simple and readable. But how do you, for instance, create a three-column layout in Markdown with an image in the middle column? I don't know, maybe there are extensions for that these days too.
It would be interesting to have a Markdown to ALFI generator, but I suspect that when you are used to reading ALFI code you might find it to be a bit overkill because even though I am a bit biased of course I do think ALFI is pretty readable.
Also, there are no intentional OOP or functional aspects; I did not quite understand what you wanted to convey there.
> The facts of the history of the web tell us that people want custom styling, though, and businesses want it even more, because marketing says so.
Wait, what? Are we on different webs? The facts of the history of the web tell us that some of the most popular services for publishing are Facebook, Twitter, LinkedIn, Medium, etc - places that allow for very limited custom styling.
I think you misunderstand what OP is trying to do, and are criticizing them for not instead making a thing you already know.
I think it's very, very obvious that "custom styling" here is referring to creator styling, not user styling, so I'm not sure that you're in a position to be criticising the person you're responding to for "misunderstanding".
If I read you right, you’re saying that I’m the one misunderstanding OP? It’s honestly not clear to me what the difference between “user” and “creator” would be in the context of this discussion. Could you elaborate?
Since a “creator” is also a user of the web, I guess you mean “user” as in someone who only consumes content? I’m confused by that since nothing in the discussion seems to be about user stylesheets.
I...am a bit confused as to whether you're actually reading the same conversation.
The project creator explicitly said that their creation cannot be styled, at all. It renders the exact same "standard" way on all devices. The retort was that the vast majority of people (with a clear callout to companies that would obviously like their own branding) do not want a web where every site looks the exact same, which is why CSS exists in the first place. You seem to have read/decided to turn this into a discussion about end-user customisation of sites (and, frankly, a thinly veiled rant about Facebook and Medium), when that first of all has nothing directly to do with what was being discussed and second of all would also be out of scope for this project because it had styling itself out of scope.
> The original proposal seems to be this: You publish a document. The platform takes care of presenting it.
And with a language that explicitly does not allow styling, how exactly is "the platform" that takes care of presenting it going to render anything but a single, default style for all content without...reinventing styling?
> One criticism was: No, people want to control the styling of what they publish.
No, one criticism was rather obviously that people don't want to go on the web and see the exact same thing everywhere they navigate to, which is what you get when styling is not possible. However, you seem to be looking at the entire conversation through some strange lens.
> Again, assuming you’re talking about readers when you say “end-user”, I never even mentioned them.
The creators of a web service/platform wanting to be able to brand their creation and the users of that service simply going with their chosen brand's aesthetics when publishing content are two concepts that can simultaneously exist - in fact, can even be linked.
I am not sure how it has to be explained that people being okay with publishing content on Facebook, LinkedIn or Medium without much custom styling is the furthest thing from an indicator that people want Facebook, LinkedIn, Medium and every other website to look exactly the same.
I’m starting to feel silly for continuing this thread. I will just conclude with my best understanding of how we are talking past each other.
I think I understand that you are imagining a middleman to be “the platform” even in the context of NAVI/ALFI. I understood NAVI itself to be this platform; much like the Facebook app allows you to publish and browse Facebook content with very little variation in the styling of different content, so NAVI might allow you to browse and perhaps create ALFI content with little variation in styling. You are comparing all the content within Facebook and others to the content on the rest of the web, while I’m talking about how content within a platform doesn’t need to be visually distinct for the platform to be appealing to publishers and readers. You’re thinking of the web as the “platform”, Facebook etc as the “creators” on the platform, and you are grouping people who publish and read on Facebook as “end-users”. I’m thinking of Facebook as the platform, people who publish things on Facebook as creators, and people who read the things published as the end-users.
Sorry in advance if I’ve misrepresented what you’re saying, but this is the best I can do in explaining why we’re unable to understand one another.
Facebook, Twitter, LinkedIn, and Medium all have custom styling specific to that platform. The person you're arguing with has interpreted me exactly correctly, you're the one misinterpreting. Please don't engage in pointless flame wars based on poor reading comprehension.
The adjacent thread was not about how to interpret your comment.
It’s completely beside the point that these platforms have custom styling with regard to the web in general. The point is that people are publishing enormous amounts of content on these platforms despite not having the ability to control how that content is styled. Ergo, lack of styling is not a deal breaker for people to publish stuff on a platform.
Just FYI, something similar called the Gemini project was released recently: https://gemini.circumlunar.space/. Not saying either one is better or worse, but it might either be an avenue for collaboration or at the very least get you validation that your idea is one that people want!
What JS does is allow you to dynamically update the contents of your website, like changing an image when you hover over it, for example. This is something you can actually do with ALFI by itself, without having to depend on a second language like JS. I'm glad you asked this question.
Yes but I haven't looked into how much work it would be. I think this might be essential in order to reach a wider audience because I think people want to be able to use the same browser to surf both ALFI and regular websites. Would be awesome for this to work in Firefox and Chrome.
The first scientific experiment was conducted by 5th century BC Pythagoreans. They wanted to show that the basis for musical consonance was math. From that, they inferred that harmony in math accounted for the harmony of the cosmos. This integration of math+physics was very forward thinking.
But, if we fast forward to the present, we still don't have a complete scientific explanation for the basis of consonance and dissonance. Really! To make my own contribution, I've been running psychophysical experiments to investigate why consonant chords that are mathematically slightly dissonant actually sound much better than chords with perfect mathematical consonance. I've been gathering data with sounds but also with haptic vibrations and with visual flicker frequencies. This multisensory approach is fun because it produces visible rhythmic entrainment in the brain, as seen with EEG. My goal is to contribute to a general theory of neural resonance and harmony in human experience.
Why does this matter? Happiness is great, but I'd argue that what we really want is personal and global harmony. Note that harmony isn't sameness, it is unity in variety -- the resolution of conflict and dissonance into an integrated wholeness. We want inner harmony with our selves, harmony in our relationships with others, harmony in society, harmony with technology and harmony with nature. Happiness is individualistic but harmony involves the pleasure of virtue. I hypothesize that harmony can help set a better objective function for the future of humanity.
Harmony was also the objective function for the first deep learning neural network, Paul Smolensky's Harmonium.
Finally, harmony is also a central theme in classical philosophy. The concept had a massive influence in the Italian Renaissance and in the English Scientific Revolution.
I recently put together a reader for understanding Plato's views on Harmony. Comments are welcome:
I worry from this brief description that you may be ignoring cultural aspects to our perception of consonance and dissonance.
Also, harmony itself is a distinctly western concept whose musical role expanded dramatically with the advent of polyphony. Many (most!) musical cultures around the world don't give harmony much, if any role, and place higher importance on melodic structure.
Also, musical harmony and the other kinds of harmony you mention seem to me to be related only by the language used for this particular metaphor. There seems to me to be no likelihood of there being any interesting relationship between musical consonance and "harmony with technology and harmony with nature".
I agree that the central scientific question is whether harmony is a metaphor or mechanism, e.g., for psychological constructs like Cognitive Dissonance. My guess is that harmony is so pervasive that it ceases to have original meaning -- like in the manner that every atom is a harmonic oscillator or how brainwaves are based on an "octave" structure (e.g., Beta is double Alpha, Gamma is double Beta, etc). However, this question of metaphor or mechanism should be resolvable with science, eh?
However, I disagree that harmony is primarily a western phenomenon. It is a central feature of Confucianism and Daoism. It also plays a major role in Native American philosophy.
"Small integer ratios" apply to huge numbers of natural phenomena, because "small integers" and "ratios" are properties of the world we live in. Pointing out that "harmony is another example of small-integer ratios" seems fairly devoid of content to me, especially when the primary user of musical harmony has long abandoned those pure integer ratios to allow modulation and other desired compositional techniques.
Octaves are not universal across cultures. Byzantine music has no octave equivalence, for example. There was even a paper cited here on HN recently showing lack of octave awareness in different cultures (the paper may have had some flaws, but was interesting).
The fact that integer relations are mathematical relations that bear so heavily on physical phenomena is precisely the point. That's what the ancients got so damned excited about. You may not find meaning in it, but it certainly drove the development of science and philosophy from Plato to Galileo to Kepler to Descartes to Newton, etc.
Now, in recognizing that integer relationships are, in fact, an imperfect description of a pervasive phenomenon -- that's why the ongoing investigation is so interesting and challenging. Don't we expect what is pervasive in physics to apply to psychology, culture and economics? I'd argue that the mathematical "imperfections" of modern music point to better models of what universal harmony really is.
Overall, the point is that we haven't yet solved harmony. Not even remotely. And I'd love a reference on why you think Byzantine music doesn't have octaves when the Greek music it grew out of most certainly did.
1) "Do not mistake your models for reality" (Lord Kelvin, sometime in the late 1800s) ... many of the things that we describe with "small integers and ratios" are not in fact small integers, and the ratios are rooted in our observations and thinking. The natural world is more fractal than integral, but the notion of "oh! that thing is a lot like two times that other thing" was a notion more accessible to natural philosophers and early philosophers.
2) No, I see absolutely no reason to "expect what is pervasive in physics to apply to psychology, culture and economics".
3) I don't have a citation for you on the Byzantine stuff, but have been discussing it a lot recently with a musician who grew up on it and continues to perform it, and he was explaining to me how they have no notion of octave equivalency and that because of their tuning and scale systems, when you go up or down the number of steps that "should" correspond to an octave, you end up somewhere other than 2*freq or freq/2. Remember, this is in part why equal tempered tuning was developed: if you stick to "pure" just intonation (precise integer ratios), you can't construct intervals that fit nicely into the octave (e.g. pick a note, go up some number of intervals. Pick the ending note, go down the same intervals, you don't end up back where you started). This stuff is all covered in basic music theory. My understanding from this Byzantine musician is that their musical tradition basically just said "we don't care", and went with a tuning/scale system where you don't just (as in our western system) go up or down N (12 in our case) and end up an octave from where you started.
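The closure failure being described is easy to see numerically: stack twelve pure 3:2 fifths and you overshoot seven octaves by the Pythagorean comma, which is exactly the mismatch equal temperament papers over.

```python
# Twelve pure 3:2 fifths "should" land seven octaves up, but they
# overshoot by the Pythagorean comma -- the closure failure above.

fifths = (3 / 2) ** 12        # stack twelve just fifths
octaves = 2 ** 7              # seven octaves
comma = fifths / octaves      # ratio 531441/524288, about 1.0136

print(comma)                  # roughly 23.5 cents sharp of closing

# In 12-tone equal temperament each fifth is flattened to 2**(7/12),
# so twelve of them close the circle (up to floating point):
et_fifths = (2 ** (7 / 12)) ** 12
print(et_fifths)              # ~128.0, i.e. exactly seven octaves
```

Whether you "close" the circle by tempering, or shrug and let the steps land somewhere else (as the Byzantine tradition apparently does), the underlying arithmetic conflict is the same.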
* Your friend seems to be discussing the Pythagorean pure interval tuning system where, indeed, going up is different from going down --
* In my own empirical work, I've found that pure intervals are not preferred compared to slightly dissonant intervals (in 3 tone sawtooth chords). You can try it yourself -- perfect consonance sounds much worse! Understanding this conflict with mathematical intervals is part of trying to "solve" harmony
* I agree we should remain disinterested in our models, such that we are driven to improve them and not espouse them as reality.
* Yet, I think it is a mistake if we aren't inspired by simple models -- or at least take their hypotheses seriously enough to test empirically
* For instance, does cognitive dissonance involve actual dissonance of some kind? The brain is incredibly rhythmic and even has octaves in the coupling between brainwave bands. It would seem natural to test these theories that Plato laid out thousands of years ago -- for instance, that musical rhythm entrains neural rhythms through resonance effects. Maybe that sounds silly, but I'd argue that we are foolish to avoid gathering empirical evidence for these ideas!
* Similarly, harmony had a major effect on astronomy. It actually still does, at the level of the cosmic microwave background radiation, where the presence of perfect harmonic peaks in the signal was the conclusive proof that the universe is "flat"
* Using the concept of sympathetic resonance has been extremely generative in psychology (see Adam Smith's first book) as has harmony in economics. Expecting what is pervasive in physics to apply to these domains doesn't mean it should apply in exactly the same manner -- just that one should expect analogous effects -- at least to the extent that one is looking for explanatory theories to test! We are so far from understanding these domains that, if we don't at least consider these natural and ancient theories (because they seem silly), we are doing ourselves a disservice. Let us be inspired by the past and test, test, test!
I got a little lost when you tried to compare musical harmony to societal harmony.
With regards to musical harmony, is it possible that it's more or less random? I know multiple cultures have different definitions of musical harmony. I suspect the evolution of hearing also contains random elements. Similar to language, it's not so much about an inherent universality, just a universality we can all learn and agree on.
The human ear has an intimate relationship with the octave 2:1 and its ratios, so it's very hard to believe the global convergence of appreciation to be random. More dramatically, visualizing the harmonious ratios on objects such as Chladni plates (a field called Cymatics) reveals that there is something deeper to consonance and harmony than meets the number line.
"visualizing the harmonious ratios on objects such as Chladni plates (a field called Cymatics) reveals that there is something deeper to consonance and harmony than meets the number line"
Ok I'll bite. What does it reveal? There's nothing inherently meaningful here. We know that 'dissonant' sounds (those that create interference patterns) create wavelets that are smaller and with less contrast than the more 'coherent' patterns from ratios that are closer to whole numbers.
It means we find consonance pleasurable and see a distinct "signal" in it. At least, that's the information-theoretical way of looking at it.
When dealing with cosmology one often seeks to make a big deal out of a simple concept like a duality, a cycle, or a ratio. These are concepts recurring through the world, and looking for them in more places sometimes reveals knowledge.
This comparison is delved into in the sections in Ernest McClain's The Pythagorean Plato  which analyze Plato's Republic. McClain summarizes his somewhat remarkable claims thus:
> From a musician's perspective, Plato's Republic embodies a treatise on equal temperament. Temperament is a fundamental musical problem arising from the incommensurability of musical thirds, fifths, and octaves. The marriage allegory dramatizes the discrepancy between musical fifths and thirds as a genetic problem between children fathered by 3 and those fathered by 5. The tyrant's allegory dramatizes the discrepancy between fifths and octaves as that between powers of 3 and powers of 2. The myth of Er closes the Republic with the description of how the celestial harmony sung by the Sirens is actually tempered by the Fates, Lachesis, Clotho and Atropos, who must interfere with planetary orbits defined by integers in order to keep them perfectly coordinated. In Plato's ideal city, which the planets model, justice does not mean giving each man (men being symbolized by integers) “exactly what he is owed,” but rather moderating such demands in the interests of “what is best for the city” (412e). By the 16th century A.D., the new triadic style and the concomitant development of fretted and keyboard instruments transformed Plato's theoretical problems into pressing practical ones for musicians and instrument makers. With the adoption of equal temperament about the time of Bach we made into fact what for Plato had been merely theory. Musically the Republic was exactly two thousand years ahead of time.
It would be easier to dismiss McClain's thesis as Bible Code crackpottery if he didn't have such interesting things to say about the seemingly arbitrary numbers appearing in Plato's writing.
For example, on 5040
> “Our songs have turned into laws!” Plato exclaims in one of his relentless puns in the dialogue Laws, this time on nomoi meaning both laws and traditional melodies for the recitation of the epics (799d). [...] The absolute population limit of 5,040 “landholders” will be analyzed as the tonal “index” of a tuning system “fathered” by four primes, 2, 3, 5, and 7; the number 5040 = 2^4 × 3^2 × 5 × 7 defines a tuning system like that of Plato's friend Archytas, who is the earliest theorist credited with using 7 as a tone generator. Since 5,040 is also factorial seven (7! = 1 × 2 × 3 × 4 × 5 × 6 × 7)—i.e., just 7 times larger than factorial six (6! = 720) which defines the calendar octave, or Poseidon and his ten sons (cf. fig. 6)—we have a clue as to the identity of Plato's 37 guardians, 18 from the “parent city” and 19 “new arrivals,” for new arrivals among Plato's products are those generated by 7. [...]
Or, again, when McClain points out the relation between the number 729 - in Plato: the "distance" between the happiness of a king and that of a tyrant - and the simplest expression of the ratio of the Pythagorean comma: 531441::524288 (531441 is the square of 729)
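Whatever one makes of the interpretation, the arithmetic McClain leans on checks out, and it's quick to verify:

```python
# Checking the numbers quoted from McClain: 5040's factorization,
# its relation to factorials, and 729's link to the Pythagorean comma.

from math import factorial

assert 5040 == 2**4 * 3**2 * 5 * 7        # "fathered" by the primes 2, 3, 5, 7
assert 5040 == factorial(7)               # factorial seven
assert factorial(7) == 7 * factorial(6)   # just 7 times the "calendar" 720

assert 729 == 3**6
assert 729**2 == 531441 == 3**12          # numerator of the Pythagorean comma
assert 524288 == 2**19                    # denominator of the comma
print("all of McClain's arithmetic holds")
```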
Looked at this way, Plato's conclusion in the republic, that a society is just when each member of it deviates from their own interests just sufficiently to optimize for the interests of the society as a whole, is intended to map onto a truism of music theory - that an instrument, say a modern piano, is optimally in tune when each of its individual notes deviates from its own true value by the small amount required to keep the instrument as a whole sounding good.
This idea is the basis of various tuning systems including the predominant modern system known as "equal temperament."
> why consonant chords that are mathematically slightly dissonant actually sound much better than chords with perfect mathematical consonance.
Probably because we're used to hearing music in equal temperament, so we associate it with "correct" harmony, whereas something like just intonation sounds a little weird and off (but I ultimately find myself preferring it? Wtf?)
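For what it's worth, the "little weird and off" amounts are concrete. In cents (1200 per octave), 12-tone equal temperament deviates from the just ratios like this:

```python
# How far 12-tone equal temperament deviates from just intonation,
# in cents (1200 cents per octave). The thirds are the audibly off ones.

from math import log2

just = {"fifth": 3/2, "fourth": 4/3, "major third": 5/4, "minor third": 6/5}
et_semitones = {"fifth": 7, "fourth": 5, "major third": 4, "minor third": 3}

for name, ratio in just.items():
    et_ratio = 2 ** (et_semitones[name] / 12)
    cents_off = 1200 * log2(et_ratio / ratio)
    print(f"{name:12s} ET is {cents_off:+6.2f} cents vs just")
```

The fifths and fourths come out under 2 cents off, while the thirds are off by roughly 14-16 cents, which is why just-intonation thirds are the ones that sound noticeably different.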
You may be interested in de Waal, Peacemaking Among Primates.
An under-appreciated aspect to our educational system is that school desks are shared between two children, so people grow up learning how to interact relatively harmoniously with at least one other small child.
Absent a vaccine, Covid may alter that to the one-child-per-desk model, so I may discover in another 20-30 years how important shared desks were or weren't.
I am building a database for human movement. Right now each exercise or pose or movement is indexed by its name - Downward dog, squat, handstand, etc. But this gets confusing when multiple names apply to the same movement. The true identifier is the motion of your limbs in space. I want to encode that motion/position so that if you try to upload L-sit handstand and also L-sit, it tells you that you are trying to upload a duplicate (except for the arms). Furthermore, hopefully each movement can be uniquely encoded into an ID that could be used in a web URL. If you did this you could also compute the similarity between two movements to indicate that one is a progression step to train the other.
I don't have a computer science background, so encoding and compression are new to me, but I'm a good hacker and I can quickly get things like OpenPose working. I'm trying to complete this in the next 3 months. Wish me luck.
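In case it helps, here's one possible sketch of the encoding idea. Everything here is a placeholder (the joint list, the 5-degree quantization, the hash length), not a real schema: represent a pose as a fixed-order vector of joint angles, quantize and hash it for a stable URL-safe ID, and compare poses with a distance measure for progressions.

```python
# A rough sketch (all names/joints are placeholders, not a real schema):
# encode a pose as a fixed-order vector of joint angles, derive a stable
# URL-safe ID by quantizing and hashing, and compare poses by similarity.

import hashlib
import math

JOINTS = ["shoulder", "elbow", "hip", "knee"]  # fixed order matters

def pose_id(angles, step=5):
    """Quantize angles to `step` degrees so tiny differences share an ID."""
    quantized = tuple(round(angles[j] / step) * step for j in JOINTS)
    digest = hashlib.sha256(repr(quantized).encode()).hexdigest()
    return digest[:12]          # short, usable in a URL

def similarity(a, b):
    """Cosine similarity between two joint-angle vectors (1.0 = identical)."""
    va = [a[j] for j in JOINTS]
    vb = [b[j] for j in JOINTS]
    dot = sum(x * y for x, y in zip(va, vb))
    norm = math.sqrt(sum(x * x for x in va)) * math.sqrt(sum(x * x for x in vb))
    return dot / norm

l_sit = {"shoulder": 90, "elbow": 180, "hip": 90, "knee": 180}
l_sit_again = {"shoulder": 91, "elbow": 179, "hip": 92, "knee": 180}

print(pose_id(l_sit) == pose_id(l_sit_again))   # True: near-duplicates collide
print(round(similarity(l_sit, l_sit_again), 4)) # close to 1.0
```

The quantize-then-hash trick is what makes the duplicate detection work: two uploads that differ by a couple of degrees round to the same vector and therefore the same ID, while genuinely different movements get different IDs and a lower similarity score.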
Yes, I am considering using exrx for the base training set actually. They have an API where you can get movements labeled (which muscles are contracting) and license video footage of many exercises. Maybe I can somehow integrate what I'm building with their site.
I like the idea, but I'm curious what sort of things you are hoping to accomplish.
For instance is human deformity important? Do you want to take into account mild to severe birth defects or dismemberment? How about things like severe burn scars or other medical conditions which can limit maneuverability?
Limitation of movement is a huge aspect of what I want to do with it. Everyone's body is different; our hips can rotate different amounts in different orientations. So if you submit footage of yourself doing a yoga flow, for instance, the database could understand the end ranges of motion - seeing that through all motion, the angle never exceeded 38 degrees, for instance.
Then a year later you could submit new footage of a flow. The software would upload all movements into the database and recognize that that angle had increased. You now have increased movement.
Now let's say someone is missing an arm, for instance. When their body is in a pike position, the database should still recognize the pike and index it against other pikes (both that user's and the global range of pikes). So the database is going for similarity, not exactness. Relating similar movements while keeping them distinct is, I believe, one of the hardest parts of this challenge.
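One way "similarity, not exactness" could work for incomplete bodies (joint names made up for illustration): compare only over the joints both recordings actually report.

```python
# Sketch of "similarity, not exactness" for incomplete bodies: compare
# only the joints present in both recordings (joint names are made up).

def masked_distance(a, b):
    """Mean absolute angle difference over the joints both poses report."""
    shared = set(a) & set(b)
    if not shared:
        return None
    return sum(abs(a[j] - b[j]) for j in shared) / len(shared)

pike = {"hip": 60, "knee": 180, "left_arm": 170, "right_arm": 170}
pike_one_arm = {"hip": 62, "knee": 178, "left_arm": 168}   # no right arm

print(masked_distance(pike, pike_one_arm))  # 2.0 -- still indexes as a pike
```

A missing right arm simply drops out of the comparison, so the one-armed pike still lands near the other pikes instead of being penalized for data it can't have.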
I believe a large part of health and fitness is understanding our own bodies and the movements we can perform. Whether it's a total of 25 possible movements or 500 possible movements, exploring and training those movements is always cathartic.
I am building this for a yoga/bodyweight training workout tracking app that I have made, but was having trouble indexing all of the motions for. Though I think if it works properly, the applications could be greater than the app.
Is there a way to get updates with your project's progress?
Human movement notation is also a hard problem for dance and choreography that needs solving. My partner has been talking about it for years, and there would definitely be interest from the dance world.
I don't have a GitHub repo or anything going yet. I can update this comment when I do. My email is in my profile if you want to stay in touch. In fact, I would really appreciate it. Dance and choreography would certainly be a big aspect.
I was not. Thanks for the link. Often I've found these datasets (this reminds me of CMU's motion capture dataset, for instance) are more for everyday movement - things like sitting down in a chair or calling for a taxi. Though their charter seems to imply looking for all possible human movement.
There's an unreleased version, which focuses more on how to portray characters, rather than just what they are. For example, instead of saying "energetic", they'll be pacing about a bit.
I might just pivot it into a story/plot tracker for writers, and use it to fill out the blanks rather than generating full characters from scratch. Where the community can add in their own templates and tropes. An author can decide that they have a character who is stoic, cynical, and sarcastic, and the tool will generate a background story, how to portray the character, what conflicts they get into with other characters.
I've thought about how systems like this could be used to greatly enhance the world of RPG's. Being able to generate a full back story and personality for every NPC would be fantastic, but only a start.
Then there could be an idea of an 'atom' of information that uses those models to spread via NPCs.
There is just so much I know that could be done with this, but I'm guessing hasn't been because it takes more computation than I realize. Or maybe there just hasn't been the effort put into it. I'm hoping it's the latter.
Video game graphics seem to be providing diminishing returns for the game experience compared to depth of storyline/world building. Gamers will not care as much about how a game looks when it is very engaging due to the other elements. But it seems like those other elements are constantly undervalued by game makers.
I think it's more that content is hard. My random generator linked above already pulls from all the top tropes from TV Tropes, but it still seems repetitive. There's an awfully high proportion of stories starring a deadpan snarker or a stubbornly optimistic heroine, but these roles and personalities in a vacuum are not enough to make them unique.
Latest experiments show that details help a lot. Quality went up a lot when I moved from "they" to "him/her". And it went up even more when I had several different types of text describing the same feature. So maybe the missing link isn't just generating better skeletons, but rather different ways of saying the same thing. After all, there are a million fantasy worlds out there. All romances are the same archetypes. But some stand out more than others.
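A toy version of the "several different types of text describing the same feature" idea: map each trait to multiple phrasings and pick among them, so repeated characters read less repetitive even when the underlying skeleton is the same.

```python
# Toy version of "different ways of saying the same thing": each trait
# maps to several phrasings, so characters with the same traits still
# read differently. Templates and names are invented for illustration.

import random

PHRASINGS = {
    "stoic": [
        "{name} rarely lets anything show on {pos} face.",
        "Grief, joy, fear: {name} meets them all with the same stillness.",
        "Nobody at the table can ever tell what {name} is thinking.",
    ],
    "sarcastic": [
        '"Oh, wonderful," {name} says, meaning the opposite.',
        "{name}'s compliments usually need a second listen.",
    ],
}

def describe(name, pronoun_pos, traits, rng=random):
    """Render each trait with a randomly chosen phrasing."""
    lines = []
    for trait in traits:
        template = rng.choice(PHRASINGS[trait])
        lines.append(template.format(name=name, pos=pronoun_pos))
    return " ".join(lines)

print(describe("Mara", "her", ["stoic", "sarcastic"]))
```

The point isn't the templates themselves but the ratio: a handful of phrasings per trait multiplies quickly across a cast, which matches the observation that surface variety helps more than adding new skeleton types.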
I’m sure you’ve probably come across it but it is worth mentioning - TVTropes (https://tvtropes.org) is a veritable smorgasbord of story tropes and character archetypes. Their catalog of examples is pulled from TV, movies, literature and history.
I donate to DF, lol. While there are plenty of epic moments in there, I think it takes a while to build up, and it comes in between a lot of mundane moments as well. And I think a lot of stories are built with a romance element in mind and there hasn't been a lot of procedurally generated games that do that well.
Writing a computer program is often the best way to convince yourself that you really understand a problem. And an already-written computer program is often a good way to document the behavior of a complex system because you can play with it to see what happens in all the edge cases.
So I'm writing curricula that use computer programs as the primary teaching tool. One is for computer science, where the idea is that anyone who can read some Python can pick up all the important ideas from a formal CS education without sitting through a year or more of preliminaries. Over time I'm planning to add smaller sections on more advanced topics.
The other curriculum is theoretical physics. There's already a good book that does classical mechanics in Scheme. I've hired some postdocs to learn Scheme and code lessons in general relativity, statistical mechanics, and so on. I do the lessons, solve the problems, and then we talk about what worked and what didn't. I work on this about ten hours a week. After a couple of years I should have knowledge roughly equivalent to an ABD physics grad student, plus teaching material that can take anyone else to the same level from modest beginnings.
I'm looking for collaborators on this project so don't be a stranger. Twitter/email is in my profile
I was just ranting on HN about the need for this two weeks ago. It seems like the inevitable end game for teaching the hard sciences (and possibly other fields). One inspiration (besides the book you referenced) is the first few posts of "An Intuitive Explanation of Quantum Mechanics".
Here is a great example where he describes QM as essentially a computational process and does everything just shy of writing down the code. That specific page is from the series here.
And from reading this I realized how much easier many concepts would be to grasp if you could just read the objective source code of the description and not have to try to interpret messy English or imprecise notations.
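In that spirit, here's the kind of thing I mean (my own toy example, not from any of the books mentioned): Newton's second law as executable notation, via a leapfrog integrator, where energy conservation becomes something you can check rather than take on faith.

```python
def leapfrog(x, v, force, dt, steps):
    """Velocity-Verlet/leapfrog integration of Newton's second law for a
    unit-mass particle: the executable version of x'' = F(x)."""
    a = force(x)
    for _ in range(steps):
        v += 0.5 * dt * a   # half-kick
        x += dt * v         # drift
        a = force(x)
        v += 0.5 * dt * a   # half-kick
    return x, v

# Harmonic oscillator, F = -x, starting at rest at x = 1.
x, v = leapfrog(1.0, 0.0, lambda x: -x, dt=0.01, steps=10_000)
energy = 0.5 * v * v + 0.5 * x * x  # should stay close to 0.5, the initial energy
```

There's no ambiguity to argue about: the definition of the dynamics is the code, and the claim about energy is a runnable assertion.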
I'm building a "catalog" of architectures that you could use to create a complete cloud architecture on your AWS, GCP or Azure account in less than one minute.
So, for example, you could create a docker-based architecture with CI/CD, auto-scaling, zero downtime deployment, SSL, load-balancing, high availability and MongoDB in less than one minute in your own AWS account.
It's like Terraform with the user-friendliness of Heroku.
It's very hard because every provider has different APIs and concepts, so you have to start from scratch for each.
I love working on it because cloud computing can have so much impact in organizations like biotech startups or NGOs.
I've thought about this in the past. There's a modern way to build architecture, and cloud providers are in general only providing the building blocks and requiring you to put them together. It's unscalable to expect every company to hire someone smart enough to construct a good architecture (let alone the time) but at the same time, people who have been working on infrastructure long enough know that there's really only a handful of useful architectures that solve 90% of problems. I thought of it as CTO-as-a-service or CTO-in-a-box.
Service meshes like istio.io start to solve a portion of this.
Almost certainly there is endless complexity but I bet you can come up with something useful. Good luck!
I do use Terraform. Terraform is very good to create the building blocks of your architecture but not so much for the user-friendliness part.
Let's take an example: You have an architecture with a CI/CD. Great. You add your CodePipeline and CodeBuild resources to your plan. Perfect.
For the CI part, you want your build to start on every commit on every non-master branch. Bad news: CodePipeline doesn't have support for multi-branch. So you will need to find a way to clone the default pipeline for each new branch.
Repeat this for all the user-friendly features (GitHub check runs, env vars management, deployment monitoring/rollback...) for the three cloud providers for all the different architectures and you start to feel the difficulty (I can testify ;)).
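For what it's worth, the branch-cloning workaround can mostly be expressed as a pure transform over the pipeline definition. This is only a sketch under assumptions: the dict shape matches what boto3's `get_pipeline` returns, and the source action uses the CodeCommit-style `BranchName` configuration key (GitHub v1 sources use `Branch` instead).

```python
import copy

def clone_pipeline_for_branch(pipeline: dict, branch: str) -> dict:
    """Return a copy of a CodePipeline definition retargeted at a branch.
    Assumes the shape of boto3's get_pipeline()['pipeline'] and a
    CodeCommit-style source action using the 'BranchName' config key."""
    p = copy.deepcopy(pipeline)
    p["name"] = f"{pipeline['name']}-{branch}"
    for stage in p["stages"]:
        for action in stage["actions"]:
            cfg = action.get("configuration", {})
            if "BranchName" in cfg:
                cfg["BranchName"] = branch
    return p

# Hypothetical wiring, untested against a real account:
#   cp = boto3.client("codepipeline")
#   base = cp.get_pipeline(name="default")["pipeline"]
#   cp.create_pipeline(pipeline=clone_pipeline_for_branch(base, "feature-x"))
```

You'd still need lifecycle management (create on branch push, delete on merge), which is where most of the real glue code lives.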
I've often found myself wanting something like this: a bootstrap to spin something relatively complex up quickly and then start customizing it. I think it would be nice if there was a way to export to the format native to whatever cloud you're targeting, e.g. CloudFormation. This is something I've found missing in tools like Serverless Framework.
Any thoughts about maintenance? My personal journey with learning AWS is/was very painful, but now that I understand everything that I need at a high level, it is quite a bit more manageable. I can't imagine having to make minor tweaks to my infra if I had no idea what it was doing.
Improving the performance of fountain codes and applying them securely to peer 2 peer file sharing.
A fountain code is an almost magical algorithm that can split a file of size n up into a (practically) infinite stream of blocks of size b, such that collecting any n/b blocks out of the stream can reconstruct the file.
Applied to p2p file sharing it effectively can eliminate rare pieces as well as the need to communicate which pieces people have. Related topic here is homomorphic hashing.
Unless I find something better before September, my master's thesis will be on this topic.
For fast Reed-Solomon I'm aware of this: https://github.com/Bulat-Ziganshin/FastECC. It's kind of amazing how far Reed-Solomon has come, thanks to the fast Fourier transform. At very large problem sizes it does slow down more and more, though. If cleverly applied (e.g. layering erasure codes) you can make it go very far indeed. I don't like how hardware support is essentially required to make the larger field sizes needed for bigger instances fast.
Wirehair is very interesting, and I hope to study it well and describe it more formally as part of my master's thesis. I'm not aware of any academic analysis of it. I did look into it enough to diagnose that it suffers from the same O(n sqrt n) issue that RaptorQ does, again for very large instances. The issue lies in having to do Gaussian elimination on a submatrix (to solve the inactivated columns) near the end of decoding, and this submatrix can be on the order of O(sqrt n).
I'm interested in 'very large instances' because ideally I'd be able to create an efficient fountain code with block sizes on the order of the size of a UDP packet, disk page or QR code, which has some very interesting applications.
I've been mulling over an idea that is essentially a combination of personal ID, secure digital authentication and online communications all baked into one.
There's an EU directive (eIDAS) instructing how citizens should be able to identify themselves online. In my country, you can use eIDAS to authenticate to basically any governmental agency portal, but you can't get any eIDAS-enabled auth method as a citizen. The current way of authenticating is via bank accounts or a paid extra mobile service that requires a non-prepaid mobile contract.
This is a relatively huge issue. First off, the Finnish government pays the banks for each auth a user performs when they, for example, want to log into their medical records. It's a few million euros a year just for verifying users.
There are also obviously issues with whom the banks serve; there have been cases of them not taking foreigners or people with bad credit as customers, making it impossible for those people to authenticate themselves.
The current EU directives also indirectly require that the banks provide customers the possibility to authenticate without needing a bank account (which costs money), but to my knowledge this still isn't possible. I pay around 20 euros a month just for the luxury of having an account; not everyone can afford that on top of other bills.
Auth services are not accessible for impaired users.
It's also basically impossible to manage who has essentially the power of attorney and over which matters, for how long etc. Either you have to give them your login info (good luck resetting your SSN) or try to use the services over the phone and somehow convince the other side that you have permission to manage things for another person.
There's no way of authenticating who is using your accounts online and actually verifying the users.
Basically, my idea is combining biometrics, PGP, and having the government run the identity management itself. This would have the added benefit of enabling hashed throwaway addresses and info for use online, while providing a free and accessible way of authenticating strongly online.
It occurred to me the USA might do something similar in the future and let the banks authenticate and verify identities.
(The $1200 CARES Act stimulus payments were automatically wired to those who previously authorized the IRS to post their tax refunds to their banks.)
> actually verify the users
Maybe you can harness existing Public Notaries instead of using online banking? The USA has over four million Public Notaries who can "witness" and verify identities. For example, a user can pay for a Public Notary to come to his house. The Public Notary reviews the user's government-provided identification and issues them an official E-ID and an encryption USB key like the Google Titan Security Key. The Public Notary can record this transaction in a government database so that there is a trail of who received the Titan key and who provided it.
We don't have public notaries as such, and it would still 1. be a system that places trust in humans (which is easily exploitable) and 2. not be free for the end users.
I mean, I think it's some ten cents per auth through a bank; if you had to invite a notary or visit them every time you want to auth, it'd definitely cost more than that.
I was thinking of a combination of biometric ID, a physical card with NFC or USB, and a PIN or password. Biometric info is hard to spoof, but not entirely impossible, which is why just stealing the ID card or biometric info shouldn't be enough: you'd need some type of password. Once the user provides all three, you'll know that that person physically carries the aforementioned identification and is who they claim to be. These would be used to encrypt and decrypt hashes, meaning that other individuals can also use the hashes to make sure they're contacting, or being contacted by, the correct person.
We'd also need to implement a way to manage permissions for other users to manage our own data. If you're for example physically incapacitated and want your caretaker to be able to access some services, you could add their hashed identity as an allowed entity and decide over which services and features they can see and/or edit.
Unrelated but speaking of throwaway addresses, it would be cool to be able to create a throwaway postal address (which is then translated by the postal service), so online shops don't get your personal address information.
Yeah exactly, this was one of the use cases I wanted to deal with.
Several brick and mortar retailers here require your address and personal info even when buying and picking up physically at their store and several have had their databases hacked and leaked.
Why do I need to give those when they're not shipping anything to me and I pay in cash?
We have an agency called Maistraatti, which is our nationwide registry office; it has all of my postal information, family relations, etc., basically anything related to me as part of our society. Why can't I just provide online and physical retailers some ID that the registry can then translate into my actual info when it's actually needed, for example for shipping or if they want to check my credit? They could just save that ID for that purchase and temporarily check the necessary info through an API.
Hashed info would be one solution, the retailers would only get the hashes I provide and the registry office could then match those hashes to my info. In essence, I could basically create single use throwaway information for each retailer if I'd wanted to and they would be none the wiser.
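The per-retailer throwaway identity could be as simple as an HMAC keyed by something only the citizen (or the registry acting for them) holds. A minimal sketch of the idea, not a full design; the function name and key handling are assumptions of mine:

```python
import hashlib
import hmac

def retailer_pseudonym(citizen_key: bytes, retailer_id: str) -> str:
    """Derive a stable, opaque per-retailer token. Only a party holding
    citizen_key can map the token back to a person, and tokens given to
    two different retailers can't be linked to each other."""
    return hmac.new(citizen_key, retailer_id.encode(), hashlib.sha256).hexdigest()
```

The registry would keep the key (or a token-to-citizen table) so it can expand a token into a real address only when a shipment or credit check actually requires it.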
That would be a nice service that the postal service could charge for. Virtual po boxes that could be created or re-routed on demand. You would just have a one line address, and when the address is digitized it would be converted to the current address.
PO boxes cost money, and I still need to provide my name, SSN, email, phone, etc., even if I'm not ordering online.
If PO boxes were free, it'd solve one part of which I take issue with, but it costs like 4 euros per pickup. If your income is low, the 4 euros on top of the some 20 euros for banking and another 20 for a cellular plan will quickly add up.
So you're thinking like a virtual mailing address as a service. You receive and forward people's mail. Seems interesting. Also kind of high risk for the service provider. People will use something like this to buy guns and drugs and other stuff on the black markets. But I guess they do that anyway. You would have to be prepared to deal with a lot of subpoenas to unmask the real mailing addresses. Could be a useful service though. Be sure to charge a lot for it.
My idea would have to be implemented on a national level. I take issue with the socio-economic injustices in the current identity and personal-data management solutions, as they're neither technically accessible nor free while still being simply a must-have in order to do anything in Finland.
Isn't this essentially a URL shortener for post? The post operator generates an ID for your address, sender uses it to post stuff, the post office maps it back to the address. One additional challenge I see is that if the fees vary by distance, the sender would still get some sort of an idea as to how far away you are, but that is probably acceptable.
In my European country we can use the personal identity card (with a USB-to-ID-card reader, I believe) to log in to governmental resources, although people mostly use SmartID because it's on the phone and free, unlike the SIM card authentication, which is a bit more cumbersome.
It's interesting that Finland took that approach. In Portugal the government just created its own ID provider (https://www.autenticacao.gov.pt/), which lets you login with your ID card (which is a PGP smartcard) or a two-factor PIN + mobile phone token.
The relationship is actually opposite: banks will let you login on their sites using the government's ID provider. It's not mandatory, though.
I work in healthcare in the US, and using banks to perform auth is a fascinating concept. I also don't see the US ever adopting it due to nuances in American concepts of privacy. We don't mind sharing literally everything with a single entity, but once you get more than 1 entity involved, everyone freaks out. Using banks for auth would also eliminate the wide array of third party auth services, like Auth0. Eliminating the middle-man is very un-American.
You should take a look at SingPass, which is the Singapore government’s version of this. Most people with a valid Singaporean ID card can register for it, and we use it for all kinds of stuff - signing in to government websites, opening bank accounts, checking in for covid contact tracing, etc.
I'm working on a fully open-source physical vapor deposition (PVD) system capable of producing thin-film solar cells (e.g. CIGS, CdTe) at efficiencies in the range of 15-20%. The system is designed to produce one 10-watt cell every 45 minutes. I could go into further depth here, but for obvious reasons it would take me a while to explain everything.
What we're basically doing is using thermal evaporation to lay down thin films of metal on top of each other in a high-vacuum environment. We're then patterning the cell with a fiber laser to produce the cell traces and patterns needed for the target cell size.
Here's an example of a system that does what I've just described, in the context of creating CdTe cells. Note that it doesn't use a laser to create the cell divisions but rather uses a conductive ink and sandblasting.
I thoroughly analyzed their design, and I have copied some aspects of it while avoiding many of the flaws that significantly limited its practicality and efficiency.
Here's a quick summary of the design changes and flaws that I found.
For one, the design specified in the paper can barely reach high vacuum which is a requirement for producing cells of reasonable efficiency.
One major improvement I'm working on is a system that allows for automatically changing the deposition powder inside the chamber, instead of having a separate chamber for each layer's deposition powder.
The advantage of this approach is that it miniaturizes the chamber significantly, making it closer to something that could fit in the back of your car than something that needs a dedicated room or floor.
I've also looked carefully at how I'm going to achieve high and maybe even ultra-high vacuum, and in that regard I think I've made some significant strides.
My design achieves high vacuum in three stages: the first is a simple venturi pump, and the second is a sorption pump that has been redesigned based on an old paper I found here(). The last stage uses something called a non-evaporable getter pump.
Experienced vacuum engineers might initially be baffled by the choice of pumps I am using, as they are normally considered too slow for the type of operation the chamber is being put through.
However, the speed downsides of these pumps can be mitigated by three measures: (1) building a chamber that can go through bakeout (which removes contaminants and reduces pump-down time); (2) designing a chamber with metal-to-metal seals and a low leak rate; and (3) the obvious principle of making the chamber volume and surface as small as possible while making the pumps large.
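On the last point (small chamber, big pumps), the textbook pump-down relation makes the trade-off explicit. Assuming a constant effective pump speed S and ignoring outgassing, the time to go from pressure p_0 to p_1 in a chamber of volume V is

```latex
t = \frac{V}{S}\,\ln\frac{p_0}{p_1}
```

so halving the chamber volume or doubling the pump speed halves the pump-down time. This holds only until outgassing from the walls dominates, which is exactly what bakeout and low-leak seals address.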
I've barely scratched the surface here, but I think this should give you a rough idea of what I'm doing. I really don't think this stuff is as hard as it's made out to be.
Here are some resources that have really helped out so far.
- Building Scientific Apparatus: a book that should give you a broad overview of the things you need to know
- Vacuum Sealing Techniques by Alexander Roth: extremely exhaustive on valves and the general construction of the chamber. The book is really old, but everything still stands, and it's honestly better than most of the stuff I've found online.
This sounds cool! Building Scientific Apparatus is a truly excellent resource.
Two questions: (1) Could you get away with an inert atmosphere? I'm not familiar with the pros and cons with respect to PVD.
(2) It sounds like your vacuum setup will have a long cycle time from vent to pumpdown to operation. A load lock with a (turbomolecular?) pump adds quite a bit of expense. What's your approach to achieving high throughput?
You could; I've seen a couple of papers attempt that approach with rather poor results, something like 8-10%. Though I'd say the easiest approach to producing thin-film cells is basically electroplating, which achieves similar efficiency (https://onlinelibrary.wiley.com/doi/abs/10.1002/pip.417).
The cycle time is of course highly dependent on the final design of the chamber. There's no reason that sorption and getter pumps combined with a venturi pre-stage can't perform to a degree that meets the design requirements.
However, their performance is highly dependent on two things: the ability to reach bakeout quickly and the use of metal-to-metal seals instead of O-rings.
The actually difficult and expensive part of high-vacuum engineering is figuring out how to engineer valves that can both withstand bakeout temperatures and make tight, leak-free seals.
In this regard I plan to use what essentially amounts to a plate valve with something called a "powdered seal".
This valve meets the requirements of the design in every aspect, with its only downside being that it is slow to switch between open and closed. Though this downside will not reduce the overall throughput of the system as it is designed.
Really interesting; I don't know much about CdTe cell production. Just curious: do you have a small startup working on this, or is it mostly theoretical at this point (or are you doing this as part of your day job at a larger company)? I did a bunch of thin/thicker films in grad school and postdoc times and now work on thicker porous films.
Specifically, I'm working with a local pro-refugee organization in a densely immigrant populated region in Spain. There's a complex chain of steps that you have to go through in order to acquire citizenship. Only people with access to good lawyers are able to deal with all the bureaucracy of the process, without mentioning other problems (missing obscure expiry dates that reset your process, language-related problems, local government workers not actually knowing/willing-fully ignoring migrants' rights...).
There's a good network of volunteer lawyers working on this issue, but it's not scalable. I'm working on a platform that would allow migrants to resolve their own situation by crowdsourcing the knowledge of lawyers on a case-by-case basis and offering a simple interface in their language to track open processes & discover the ones they need to go through and how.
As an abstraction for this, I've been thinking on how we could improve citizen/government communication. A small use case / example for this could be refugee camps. My previous experience here is that they are small, disconnected communities with a top-down type of organisation towards the camp organisers. It shouldn't be hard to provide real-time tools for connecting both, potentially leading to things like asking for their needs, managing their legal situation, or even allowing for voting & self-governing.
The most important processes require physical presence. The ones that are digitalized aren't a good solution either, as these people might not speak Spanish very well, or they don't have the required digital literacy to access & go through a government website (which is a problem for locals as well). The solution right now is to offer personalized support from volunteer organizations on an individual basis.
COVID has affected quite a lot. Most of the processes have stopped as the government shut down the in-person offices, and now they are slowly reopening... and the situation was already crowded before the pandemic. On the positive side, extradition orders have been temporarily paused.
How can we document "human society" in 1000 pages or less? I've been casually researching for a while, and will eventually write a guide on going from zero to the moon.
Step one is survival, basics of hunting, first aid and farming sort of stuff. That volume would end around homesteading and self-sufficient living.
Volume two would be establishing society in larger groups than a family unit. Things like job specialization (N roles for N people instead of 1/N of each role for each person), establishing trade (currency, weights & measures, supply chain), government (mostly what not to do, what to protect, how to adjudicate disagreement), public works ("roads are a good idea") and their ilk. Also medicine beyond first aid and basic care.
Volume three would be advanced STEM topics, getting from a functioning society to... more. Not even the sky's the limit. It should include blueprints for things we take for granted like refrigeration, telecommunication and birth control. It will include all the basics of physics, chemistry and biology required for smart people to fill in the gaps and launch a human to the moon and back.
I want to super-nerd-out about this, and publish it on Tyvek or something exotic so it'll last through decades of wear and tear (and water-logging and more), and include a ruler on the spine and its own weight documented for reference.
I believe this was the goal of some French Enlightenment philosophers like Denis Diderot and others who worked on the Encyclopédie. Definitely a worthwhile project. One problem you will have to find a solution to is how to adequately explain things that require tacit-knowledge (e.g. it's not enough just to know chemistry, you also have to know how to do it, which requires physical practice, not just book learning).
Cool idea - few questions: How many volumes are there? Would be cool to see the full list.
The first two volumes are very goal/purpose driven and so fit well together. The third feels very grab-bag and likely should be split up more.
You'll also likely want to cover schools of thought, the scientific method, and whatnot.
An alternate volume 3a might be on mass production of food (enabling greater human capital). Large scale agriculture and the chemistry and botany that enables it. Some amount of animal farming is likely required.
3b could cover the communication and transportation networks necessary to distribute that food. Both the engineering, infrastructure and tech behind it.
At some point you can shift to covering manufacturing and mass production - which enables all the small products needed for so many fields: the handles, washers, lenses, ...
Then final shift to digital and everything that rolls out from there.
Three volumes planned (Survive, Thrive, Expand), but given the overall scope that seems like it's gonna be difficult. The third is very grab-bag because it's "everything else" - there's no end to what could go in there, it's just going to stop at some point if it's going to be published. Prioritizing human-care and sustainability is definitely a good target, and in keeping with the theme. Probably scientific method, schools of thought, etc. would be chapters or single pages - there's a lot to deal with overall.
Anywhere I can, as long as I can confirm the info is useful and correct.
Survival is sourcing primarily from stuff like the Boy Scout Handbook, the SAS Survival Guide, a few other online resources and a farmsteading book from the 60s-70s that I can't recall right now. That will be the v0.1 draft, and I'd like to practice some of the techniques as well to provide hands-on feedback on what's easier or harder.
Volume 2 is a vague outline, and probably needs to be combined with vol 3 and fleshed out before splitting in two (or more) pieces to research and write.
Dentistry: can we replace dental x-rays with infrared? Can we build optical panographs just using the reflections from a dental mirror? How can we monitor patients' oral health over the long term more often than just an annual visit to the dentist, and does that improve oral health outcomes?
Machine learning: what happens when we replace Euclidean metrics with p-adic ones? Distance is fundamental to so many algorithms (least squares regression; nearest neighbours; anything involving gradient descent). How do those algorithms behave over completely foreign metric spaces?
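For anyone curious what a p-adic metric even looks like in code, here's a minimal sketch over the integers (function names are mine):

```python
def padic_valuation(n, p):
    """v_p(n): the largest k such that p**k divides n (infinite for n == 0)."""
    if n == 0:
        return float("inf")
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def padic_distance(a, b, p):
    """d_p(a, b) = p ** -v_p(a - b): two numbers are close when their
    difference is divisible by a high power of p."""
    if a == b:
        return 0.0
    return float(p) ** -padic_valuation(a - b, p)

# Under the 2-adic metric, 1 and 17 are close (distance 1/16) while
# 1 and 2 are far (distance 1) - the opposite of Euclidean intuition,
# so nearest-neighbour structure changes completely.
```

Even this toy version shows why the question is interesting: the metric is an ultrametric (every triangle is isosceles), so concepts like "between" and "gradient direction" stop behaving the way Euclidean algorithms assume.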
Do you have a link where I could read more? Because it sounds like you're describing a thing that's been around for like a decade, that I've seen in use in (my region). But I may just be comparing apples and oranges under a too-vague description.
Today's web is a collection of applications that largely provide a frontend for browsing data. The applications and data they contain are silos: there is no easy way to separate the data from functions and compute across datasets. Every application must (re)invent its own UI for querying and displaying data.
But if the web is actually a collection of datasets, why don't we have a web browser for consuming and interacting with arbitrary structured datasets?
We can model most popular sites (HN, Instagram, Twitter, Amazon etc) as a collection of hyperlinked JSON records. Let users adjust how these records are displayed. Provide a universal way to query and navigate any dataset and invoke associated functions (eg the upvote function for an HN post).
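A toy model of what such a dataset browser might operate on (all record shapes, URIs, and names here are made up for illustration): hyperlinked records plus named actions the browser can invoke uniformly.

```python
# Records are plain JSON-ish dicts; links are URIs into other records;
# actions are functions registered under a name the browser can invoke.
records = {
    "hn:post/1": {"title": "Example post", "points": 10,
                  "links": {"comments": "hn:comments/1"}},
    "hn:comments/1": {"items": ["first!", "second"], "links": {}},
}

def follow(record_uri: str, link: str) -> dict:
    """The one navigation primitive: resolve a named link to its record."""
    return records[records[record_uri]["links"][link]]

def upvote(record_uri: str) -> int:
    """An action attached to a record; the browser knows only its name."""
    records[record_uri]["points"] += 1
    return records[record_uri]["points"]

actions = {"upvote": upvote}
```

The point is that `follow` and the `actions` table are site-agnostic: the same browser could navigate HN, Twitter, or Amazon if each exposed its records and actions in this shape.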
Full separation of data and functions instead of application silos is necessary to achieve general AI compute in the future.
Example: can you email Mark a summary of the top 5 most popular HN articles 3 days before our meeting?
Not quite. The semantic web focuses on normalizing all data into a single ontology, which is great for computers but does nothing to standardize the user interface side of accessing data. Needless to say, it also hasn't gotten far.
That's a good point, and I completely agree that it hasn't gotten far.
I do think that if it were to be embraced as a more common standard on websites, building a more standard UI on top should be _relatively_ simple, as each website's underlying data would _hopefully_ be structured the same.
I (well, the nonprofit I lead) am trying to solve for the general problem of municipal public policy but, as you might expect, it's a series of discrete, linked problems. Some of these problems have thousands of people focused on a solution; in other areas, it's virtually greenfield.
The space doesn't lack for good research and policy recommendations, but it has historically lacked (on the right and left), non-screedy, nonpartisan voices that can be trusted when policymakers look for solutions. We're attempting to fill that space.
WHAT WE WANT:
Ultimately? We want every major American city to work better for the people who live and work there. It'll look different from community to community and our job isn't about applying a cookie-cutter approach. Instead, we want to get a wider range of ready-to-implement tools in front of the decision makers, and educate engaged citizens that solutions exist.
If you can use machine translation, countries with more than two major political parties often have "centre-" parties that tend to argue for centrist platforms on pragmatic grounds.
(Incidentally, I am just at the point in Simone Weil's reports from Germany where, in the early 1930s, the communist unions and Nazi unions are ganging up against the middle-of-the-road, pragmatic-reform-not-ideological-revolution social democrat unions. Very depressing.)
The foundation is flowcharts, with support for individual layers distinguishing levels of abstraction, and scenarios for exploring use-cases. From there:
- Live data. We look at metrics on dashboards but it doesn't put into perspective how they relate to each other. Imagine seeing on your flowchart of servers, that one worker has an anomalous CPU reading, and you can click into that to see the individual readings of the running services on it. (rudimentary version: https://app.terrastruct.com/diagrams/1404897320)
- Automatic generation and sync of diagrams. Having access to sources like AWS account and version control to create and keep in sync diagrams of your infra, db schemas, UML classes, etc.
- Collaborative editing, seamless integrations with written documentation, linking directly to code where appropriate, version control, etc.
So much of software can be better understood visually. Still early on, if you're interested in learning more, https://terrastruct.com. And would love to chat (email in profile) with anyone with ideas.
Every time I share this, someone shares some tool I haven't heard of, and I've researched and tried a lot. It lines up with my experience working at software companies where every 3 weeks or so there's a thread asking for diagramming tool recommendation, and every time it's dozens of mixed responses of "I've used X but caveat A,B,C".
This has a lot of potential; I usually enjoy flowcharting things on draw.io. One thing though: the export to PPT is a big feature, because most people will present through that, but it loses the navigation between frames.
Also I suggest to not only market to Software engineers but people who work with data like drop-shippers, digital marketing agency, etc.
It loses the on-demand navigation, but my reasoning for deciding to support PPT is that when you're presenting, you'd want to present it sequentially anyway, instead of jumping around. The PDF versions are linked nicely, as an alternative.
I actually started this marketing it as "diagramming tool for systems". Talk about trying to be as generic as possible. I want to focus on one segment first, but totally, a side effect is that flowcharts enriched with data acting as a dashboard would be really beneficial to non-technical people to understand e.g. analytics funnel.
Fair enough, I missed the scenarios when I first tried it, since I just clicked the link to download the PPT. It makes sense now that I've used it side by side with what I created.
Interactive flow charts should be the main selling point; how they're used is up to the user.
Also, on the pricing model: restrictions on frames and layers will chase away free individual users, but they are the ones who play the main role in making something mainstream. I would recommend removing the usage restrictions and adding collaboration restrictions instead. A good example of this is how Google Sheets works.
Good luck, hoping to see it being used everywhere someday.
I've been journaling my dreams for years and I'm working on an app that makes it easier to (visually) map them out & find patterns: https://oneironotes.com/
I like the idea of accessing other (inner) dimensions during sleep, like an explorer (an "oneironaut"). The problems to overcome are related to capturing and recollecting experiences that only take place in the mind. You asked about the weird stuff...
Hey, I also did a lot of work on dreams - I have also journaled my dreams for years and had therapy at the same time. I got super into them, read a bunch about them, even did a psychology masters to spend time researching them. I couldn't find theory that matched with my experience so I wrote this paper on them:
"A Suggestion for a New Interpretation of Dreams: Dreaming Is the Inverse of Anxious Mind-Wandering."
Hey Joshua, thanks for your kind comment! Very interesting thesis in your paper.
Do you think that besides as a framework for anxiety diagnosis, this could suggest lucid dreaming as part of the therapeutic treatment? Consciously inducing lucidity and choosing confrontation of the dream scenario rather than defaulting to avoidant behavior?
1. Set your intention as you are falling asleep by repeating “tonight I’m going to remember my dreams”.
2. When you wake up, don’t move! Stay in bed for 5-10 minutes and try to remember. Once you remember one thing try to ask yourself what happened before or after.
3. Dream journal.
I practiced the above back in high school. I went from occasionally remembering a dream to remembering about 4-5 a night, some small snippets and others longer. It was really incredible, and I plan to try it again.
- The #1 way to remember more dreams is to be interested in them! The only personality trait correlated with high dream recall is "openness to experience" (https://en.wikipedia.org/wiki/Openness_to_experience#Dream_r...). If you want to remember dreams, first consider WHY you want to remember them. Home in on that curiosity and strengthen it.
- Second best tip I can think of is: more sleep! REM periods get longer in the later cycles of your sleep, meaning a higher chance of dreams.
- Also, if you really really want to recall dreams (and perhaps induce lucid ones) and you don't mind being tired the day after, try interrupting your sleep at ~90 min intervals (the average length of a sleep cycle) with a (silent) alarm clock - or raise a baby :)
- Finally, quit smoking weed if applicable, as it suppresses REM sleep ;-)
The KDE window manager allows you to set window-specific hints based on the title or X resource. Once upon a time I wrote a tool that managed the current state in case of a logoff (a bad graphics driver kept causing Xorg to restart).
Blocking sites was done with squid time-based ACLs. Now I'm wondering how productive I could be combining the two.
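For anyone wanting to replicate the squid side, a time-based ACL looks roughly like this (the site list is just an example):

```
# squid.conf sketch: block distracting sites on weekdays, 9am-5pm
# day codes: S M T W H F A (H = Thursday, A = Saturday)
acl workhours time MTWHF 09:00-17:00
acl distractions dstdomain .reddit.com .news.ycombinator.com
http_access deny distractions workhours
```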
I'm trying to replace SQL by building a language that compiles to SQL. It has first-class functions, nicer syntax, better type-system, introspection, and other things you would expect from a modern language. But in the end, you still get SQL's performance, and the ability to use it with dozens of database engines.
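As a toy illustration of the compile-to-SQL idea (this is not the project's actual syntax or API, just a hypothetical sketch): a small list of pipeline operations can be lowered to a SQL string.

```python
# Hypothetical sketch of lowering a tiny pipeline expression to SQL.
# A real compile-to-SQL language would build a typed AST; this just
# shows the general shape of the translation.
def compile_query(table, ops):
    where, cols = [], "*"
    for op, arg in ops:
        if op == "filter":
            where.append(arg)          # accumulate WHERE predicates
        elif op == "select":
            cols = ", ".join(arg)      # project specific columns
    sql = f"SELECT {cols} FROM {table}"
    if where:
        sql += " WHERE " + " AND ".join(where)
    return sql

compile_query("employees", [("filter", "age > 30"), ("select", ["name", "salary"])])
# -> "SELECT name, salary FROM employees WHERE age > 30"
```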
Yeah, I read it the same way, but ORMs like EF let you leverage an existing language to do basically the same thing as this does: abstraction, data types, etc. I am unsure as to the value of something like that unless it is included with a client application that handles the conversion natively.
If this can allow me to write complex SQL (lots of JOINs, Multiple CTEs building on top of each other, Analytic functions etc), then I'm definitely interested. I've created an SQL generation engine in the past and can definitely appreciate how hard the problem is.
Can a programming language based on Willard's self-verifying theories self-interpret in the limited sense established by Brown and Palsberg?
Which is to say: what is the relation between a sub-Peano logical system that can prove itself consistent and is complete, thus bypassing the Second Incompleteness Theorem, and self-referential decision problems posed in the programming language that corresponds to that logical system?
My suspicion is that such a language could provide very strong guarantees on the auditability of self-modifying code.
Trying to find a solution to the maximum-entropy probability distribution Q(x,y,z) constrained to reproduce the marginal distributions P(x,y), P(y,z), and P(z,x) from some other distribution P(x,y,z).
It is known that Q takes the form Q=a(x,y)b(y,z)c(z,x) for some functions a,b,c to be determined by solving the system of equations:
P(x,y) = sum_z Q(x,y,z)
P(y,z) = sum_x Q(x,y,z)
P(z,x) = sum_y Q(x,y,z)
It's not clear there exists a general closed-form solution. Iterative algorithms are known. This type of problem comes up in a number of interesting contexts. For instance, testing for non-trivial multi-variable interactions in dynamical systems such as neural networks or spin networks, performing joins on probabilistic databases, constructing reduced models of probability distributions, and in some cooperative game theory problems.
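The best-known of those iterative algorithms is iterative proportional fitting (IPF, also known as Sinkhorn-style scaling): cyclically rescale Q to match each pairwise marginal in turn. A minimal sketch, assuming a strictly positive P on a finite grid:

```python
import numpy as np

def max_entropy_fit(P, iters=2000):
    """Iterative proportional fitting: find the maximum-entropy Q(x,y,z)
    whose three pairwise marginals match those of P(x,y,z).
    Assumes P is a strictly positive, normalized array of shape (X, Y, Z)."""
    Pxy = P.sum(axis=2)   # target P(x,y)
    Pyz = P.sum(axis=0)   # target P(y,z)
    Pxz = P.sum(axis=1)   # target P(z,x), stored with axes (x, z)
    Q = np.full(P.shape, 1.0 / P.size)  # start from the uniform distribution
    for _ in range(iters):
        # Each step rescales Q to match one pairwise marginal exactly;
        # cycling through all three converges to the product form
        # Q = a(x,y) b(y,z) c(z,x), since every update multiplies Q by a
        # function of only two variables.
        Q *= (Pxy / Q.sum(axis=2))[:, :, None]
        Q *= (Pyz / Q.sum(axis=0))[None, :, :]
        Q *= (Pxz / Q.sum(axis=1))[:, None, :]
    return Q
```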
Oh, this is neat; it's like adding an additional dimension to the Wasserstein distance / optimal transport problem. Well, in the sense that you are using marginals as constraints. Kantorovich won a Nobel Prize for this kind of stuff, so it's definitely hard.
> to appear as if they understand and control their shops [emphasis added]
Ideally, the manager would provide coordination and clerical support, and insulation from the rest of the bureaucracy (ie actually managing) without necessarily needing to understand or control the details. But if that's how they appear, upper management will classify them as overpaid secretaries, and gut their authority (and, on a selfish note, salaries). So it's important that they appear to understand and control their nominal subordinates, even if they're actually following sound advice of the form
> It doesn't make sense to hire smart people and tell them what to do; we hire smart people so they can tell us what to do.
(It may still be an impossible task, but it's a different task from having them actually understand and control things; even technical managers rarely accomplish that.)
An illusion that only works for half the audience is not very reliable. I see opportunity to [by email] assign reading materials to them and possibly offer to tutor them weekly. When you own the problem it is yours to solve.
Spot on. I know for a fact that it's a need to appear so because the issue was brought on by a change in upper management. The same middle managers had previously just smiled warmly and told me I was doing important work, never asked to see (let alone influence) architecture diagrams.
And I'd like to say I'm surprised and encouraged by the number of people empathizing with this problem; at work it is a lonely predicament. It somewhat dashes my hopes of escaping it by jumping to another firm, but at least I know I'm not the only one.
My current side project is https://feedsub.com. Right now it's not great: I started by building a simple tool for getting regular updates from RSS feeds, but longer-term I want to turn this into a system which can absorb all the data streams you're interested in (news, stocks, weather, social, communities) and give you dials (filters, curation, signals, etc.) to surface a healthy amount wherever you want (SMS, email, web, RSS, chatbots, etc.).
The crux of the problem is the endless scrolling feeds we're sucked into 24/7, which is why I based my MVP on email.
My current solution is trivial on a technical level. Honestly, my biggest problem is thinking about the problem on a non-technical level, balancing this with working life and branding, since my software and vision are very far apart right now.
(TBH, this isn't nearly as hard a problem as some of the others here - but I enjoy the ideas/feedback I get from communities like HN)
This is not completely related, but it reminded me of when I used Sup (a console email client that took a lot of inspiration from Gmail - labels, fast FTS, etc) and the author had an inspiration to create a client/server version, which could handle "Email, RSS feeds, notes, jabber and IRC logs" and more.
Unfortunately, I don't think he even finished splitting Sup into the server+client (called Heliotrope and Turnsole), let alone get them to handle other stuff beyond email. Felt like a lost opportunity.
For your project, seems like the direct competition is IFTTT and similar. What do you think differentiates your project from those? Could be useful to push more in that direction.
I reckon you could pull my project off with IFTTT. The big differentiator would ideally be the fact that this would be a product geared towards content consumption and that lifecycle (ingest, filter, consume), rather than just general purpose stuff that IFTTT does. I have a bunch of ideas in this regard, but it's very exploratory and not a short-term thing.
Building a relational language (https://github.com/Tablam/TablaM). I've allowed myself to get derailed figuring out how to provide a nicer "linq/iterator" protocol that works both inside Rust and in the language itself (so that how I write things in Rust is close to what a user could write in the language).
The regular iterator protocol, as it stands in Rust today, makes it hard to do things like JOIN, GROUP BY, and other fancy operations, because you need to decompose the computation into a partial state machine. That is hard even for a developer, and impossible to ask of a regular data user. You also need to duplicate all of that for async (with streams) and other abstractions...
(It's a germ of an idea but I hope to work on this!)
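To illustrate why relational operators fight the element-by-element iterator model: a hash join has to materialize one entire input before it can emit anything, so it can't be a simple streaming adapter. A hypothetical sketch (Python purely for brevity; the same structure applies in Rust):

```python
from collections import defaultdict

def hash_join(left, right, key_l, key_r):
    # Build phase: the entire right side must be buffered into an index
    # before a single output element can be produced -- this is the partial
    # state a plain pull-based iterator adapter cannot express cleanly.
    index = defaultdict(list)
    for r in right:
        index[key_r(r)].append(r)
    # Probe phase: only now can the left side be streamed lazily.
    for l in left:
        for r in index.get(key_l(l), []):
            yield (l, r)
```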
Public schools in India don't do justice to students. Private schools in India charge a bomb, but most of the money ends up in the hands of the "owners" and not enough reaches teachers. (For reference, an average primary school teacher earns less than what an Uber driver earns.)
The solution:
A network of "not-for-profit" schools where the fee structure is reasonable (it can't be free), but the profits are shared amongst the people who make the schools run. Think "community banks" but for schools. I can't solve the problem for everyone, but I hope to set a good example by attracting the cream of teachers. It's time the teachers got their due.
Any such initiative in India will have a hard time because of factors like:
- Often, a kid's school has nothing to do with the quality of education there but with the stature of the school. Parents in India use this to flex among their peers.
- Non-involvement of parents. In India, the parents who actively take part in their kids' schooling are a minority. For most, it's more like "paid the school fees", plus making sure that the kid is doing homework; that's all.
- Regulation. Once you try to open a school, you will learn that there is no way to get your school registered without paying hefty bribes to the education department, unless you have bureaucrats (IAS or IPS) or politicians in the family.
Education is a problem not only in India, but across the world: in Africa, in remote villages, possibly on Mars if we set up a Mars colony. It is not only about financial resources, but also simply the non-availability of teaching resources.
The second part: why is education only relevant for children? Why not adults?
This is where edtech comes into play: online learning and other things. However, it is only a small component of the entire solution. Other components are personalization, social interaction, assessment, and the credibility of online learning.
Also, it is very easy to monetize online attention. How to build a financial model true to the goal of education is also going to be very critical.
Good news: all the components are technically available or can be built. Bad news: it requires culture change.
I was thinking about something very similar.
Like, the teachers will get the lion's share of the fee.
This will instantly motivate teachers to better themselves, and move the school away from the for-profit model.
But there are some real problems to solve:
1. How does such a school get started? Who will provide the initial capital?
2. Even if such a school is started, how do you stop it from becoming for-profit again?
3. Is there a healthy balance between the 'for-owners' and 'for-teachers' models?
1. Who pays for it?
There are philanthropic trusts like the Azim Premji trust which focus on this.
2. I think that self interest is a good motivator to keep the system in place :-).
3. I honestly don't know the balance yet, but I know for sure that the current one just won't cut it.
The resort-finding part seems representable as a table where worst-to-best overlap (including the dynamic weather forecast part) is displayed as a color spectrum, sort of like a many-circled Venn diagram. Send an email to firetool at protonmail dot com if you're interested and have the info you've described (minus the weather API) tabulated. I'd be happy to cook up a demo form and grid.
Such a crazy difference from the number of resorts in Japan. In Colorado there are <10 resorts within 3-4 hours, and your pass only works at a couple of them. Our problem is optimizing the time you leave to reduce traffic time and maximize skiing time.
I nearly fell into the river at Kagura on New Year's day! It was new snow that hadn't settled yet so every step I took the snow beneath me crumbled. I was on a ledge and trying very hard to climb up, luckily a few Chinese passed by and pulled me up. Then it was a long 2 hour hike back to the lifts.
Given the earth's population trajectory and the reduction in fertile arable unpolluted lands, there is a coming crisis in the distribution of food. How do we distribute food to high density Asian urban populations efficiently, minimizing needless motor vehicle trips, packaging and spoilage, when convenience purchasing is on the rise and average household sizes are shrinking? Our answer is a network of robotic service locations with automated stock-keeping and a shared, wholly owned logistics network plus personalized direct from fresh ingredient preparation.
Bounds for the best possible designs for optical devices: well-studied [0, 1, 2, 3, 4], yet really hard.
More specifically, whenever you give a designer a design spec, it is always worth asking, how good is the best possible design for this spec? And, of course, can the designer actually achieve it, or something close to it? This is the question here.
In this scenario, the design spec is the optimization problem (what you want to optimize), the designer then gets to choose how to best approach this problem. In this case, you want to give a number that states, independent of how this problem is solved, what is the best any designer (no matter how smart or sophisticated, how much computational power they have, etc) can hope to do. In many cases giving such a number is actually possible! (See below references.)
Fully offline, fully searchable copies of personal data (email, tweets, calendar, etc.), English Wikipedia, IMDb, OpenStreetMap (tiles, routing, points of interest), geocoding. Fully offline and state of the art speech recognition. Fully offline voice assistant with almost complete coverage of the most common usage.
Representing musical and audio (sample) time in ways that maximises reversible conversion between the two of them. Musical time is typically spoken of (in western culture) in terms of "bars" (or "measures") and "beats". The relationship to audio (sample) time is defined by a "tempo map" which defines the number of beats per minute and the number of beats per bar.
The mapping between the two is monotonic, non-linear, and can be stationary. If the tempo map is allowed to contain ramps (accelerando and ritardando in music speak), there are implicitly exponential sections in the function that maps between them.
Using floating point arithmetic leads to errors that have immediate effects. A musical time that should be considered to be at sample N is instead considered to be at sample N-1 or N+1.
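For the constant-tempo (non-ramped) segments, one way to sidestep those N-1/N+1 sample errors is exact rational arithmetic, so the beat-to-sample conversion round-trips losslessly. A sketch using Python's Fraction (the sample rate and function names are illustrative only, and this deliberately ignores the exponential ramp sections):

```python
from fractions import Fraction

# Exact rational arithmetic: converting beats -> samples -> beats
# round-trips with no drift, unlike IEEE floating point.
def beats_to_samples(beats, bpm, sample_rate=48000):
    # seconds per beat = 60 / bpm; samples = beats * (60 / bpm) * rate
    return Fraction(beats) * 60 * sample_rate / Fraction(bpm)

def samples_to_beats(samples, bpm, sample_rate=48000):
    return Fraction(samples) * Fraction(bpm) / (60 * sample_rate)
```

An actual sample position would then be taken as, e.g., the floor of the exact value, with the fraction carried forward rather than discarded.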
Designing a combat robot for the UK Antweight division, which is only 150g max weight. (or 175g for some groups).
Despite this tight weight budget, I intend to build something rather interesting, but it is causing me to spend a lot of time in Fusion designing the parts along with slicing and reslicing 3D printed parts to shave partial grams of components to save a bit of weight.
I am partly obsessed with, and researching / writing about, how you could make carbon-free heating cheaper than fossil-fuel-based heating. If you can make geothermal systems cheaper than a natural gas furnace, then homeowners would have the same economic incentives as drivers, where operating an EV is far cheaper and cleaner than an ICEV.
A lot of people still use gas for hot-water heating in the northern USA. Electric water heaters often draw too much power for gas units to simply be replaced with them. Some sort of battery-buffered heater might help here.
Private file access between computers. Even in 2020 it appears that building a connection between machines (macOS, Linux and Windows) requires a miracle if you don't like to share everything with cloud providers.
I recently used magic-wormhole  for this. It was a 1-line command that gave me a keyphrase that I told to someone else. They typed the same command on their end, put in the keyphrase, and received my file.
No IPs, no forwarding ports or files. It was a magical experience and Just Worked.
Everyone seems to ignore the only critically significant hard problem: how do we mitigate the negative impacts that humans have had on the planet? Right now it looks like we are careening towards a quick extinction of the human race and most other species, and we are not doing anything about it. True, the magnitude of the problem is daunting. True, it may be too late, as we are past the tipping point. True, it is easier to ignore the risks than it is to confront them.
Maybe the real hard problem is getting people to pay attention to what is happening in their world and to forecast what the impact of their actions will be in the next few decades.
Solutions to the stated problems are, from a technical and policy standpoint, very simple. Yes, we know how to solve all of this: it's called market regulation, and we know it works really well.
But it's politically hard.
What we need is an economically pragmatic green political movement that will not keep shooting itself in the foot with left-wing ideological purity competitions.
The profit-seeking motive is only a problem when you let it off its leash (because you somehow convinced yourself that perfect market competition is the stable state of a market, when in fact all natural forces point toward domination: monopolies).
So, green parties... drop the bullshit feel-good stories and let's just make saving the earth profitable.
I think you are begging the question. Market regulation, a more economically pragmatic green political movement, and a fettered profit-seeking motive won't respond quickly enough to save the day. We have a crisis here and not much of anything that makes a difference. For example, when huge fires burn, dumping significant CO2 into the atmosphere, it would be in everyone's best interest to put them out. But we (the world collectively) have not done that. Likewise, despite regular warnings about the dangers of pandemics, the global medical system was designed under the assumption that "pandemics" would be small and could be localized. That turns out to have been the wrong choice. And in this case, every serious Global Trends study pointed out that the risk of a pandemic was significant and that a pandemic could be costly.
Why do you presume it won't respond quickly enough?
I think that if the EU made any carbon fuel created from atmospheric CO2 free of any and all tax, we would see it on the market in a few years at most, with plants cropping up very quickly. When that industry matures in a few years, just force anyone polluting with CO2 to buy and sequester this carbon fuel.
Besides, that's only half of the solution. The other half is pragmatic policy. If the green movement were pragmatic, it would have championed "the EU builds 200 nuclear plants in <10 years". All the problems with nuclear right now (who will build it? it's expensive, takes too long, etc.) go out the window when someone like the EU decides to build 200 of them. Wind and solar are nice and all, but they can't do in 10 years what a few dozen, let alone a few hundred, nuclear plants can.
Solutions are there. But solutions are not the point. Getting elected is the point. And that's where ideology and election efficiency (it's much easier and cheaper to campaign on emotional social issues and virtue signaling than it is on suboptimal, ideologically impure, hard, painful, expensive solutions) reign supreme.
And regarding the pandemic thing, the question you should ask is: "wrong choice for whom?" The general population? Or the people in power? The biggest problem in our society is that we've fallen for the lie that "in a democracy the general population rules and the system inherently works in their best interests". Democracy has nothing to do with taking care of the people. It can be used for that... WHEN the general population realizes that it first has to BECOME a power player in the game of politics.
Simple enough: there is not enough time. Even if we were to stop dumping CO2 into the atmosphere today, the global temperature change would exceed acceptable limits. If we continue with business as usual, substantial parts of the earth will become uninhabitable. Nothing on a global scale can be done in twenty or thirty years. Collaborative global problem solving does not seem to work.
I agree. In fact, you'd think conservative forces would recognize that the greatest threat to the status quo is climate change, and that they'd act in favor of saving the world if only because it would ensure their dominance.
Looking at the current state of things, I think you are guilty of wishful thinking. Survival of the fittest depends upon reproductive success over multiple generations. The time needed to adapt is not commensurate with the time scale leading to extinction.
It's only wishful thinking if you assume that humans are expected to survive. Will that happen? No idea. But even if humans survive, evolutionary pressure will eventually push them towards being a different species.
I want to give everyone a digital identity. In some countries (including mine) basically everyone has an e-ID which we use to sign in to things like government services, banks, payment providers and much more. This is absolutely essential to everyday life and many startups are built around it.
Unfortunately, many countries don't have useful e-IDs and the ones that do are limited to that one country. I want to create a single digital identity which works for everyone, for all applications, across borders. The basic features are:
- App based with no special hardware necessary.
- Privacy friendly with the user always fully aware of what data they are revealing.
- Simple to integrate for developers. It's a standard SSO flow over OAuth/OIDC.
I'm currently calling it Pass: https://getpass.app. If anyone wants to have a chat about digital identities you can reach me at fabian (at) flapplabs.se
Does this mean that a user can use their identity on two separate sites, and those two sites can't collude to build a shared profile of the user, without the user's permission?
Does the user have to choose a specific server to be involved in all their identity interactions? If the server stops working, does the user lose their identity?
Also, is it possible to create an account without a phone (or rather without a SIM, since those are often tied to real identities)? Does your proposed system assume that people can't register multiple identities (using multiple phones) if they wanted to?
> Does this mean that a user can use their identity on two separate sites, and those two sites can't collude to build a shared profile of the user, without the user's permission?
That's precisely what it means. User IDs will be unique for each site and I'm hoping to anonymize email addresses as well, similar to what Apple has done for "Sign in with Apple". Some companies might be required by law to collect some PII but in that case their needs will be vetted before.
> Does the user have to choose a specific server to be involved in all their identity interactions? If the server stops working, does the user lose their identity?
I'm currently building this as a centralized product so no, there is only a single server maintained by us. I'm mostly concerned with building a great product but the prospect of decentralized, verified identities is also very interesting. I'd love to see what that could look like!
> Also, is it possible to create an account without a phone (or rather without a SIM, since those are often tied to real identities)? Does your proposed system assume that people can't register multiple identities (using multiple phones) if they wanted to?
The current product is in the form of an app so you will need a phone but you won't need a phone number (or SIM). An email address is currently required though.
My current system assumes one identity per person, but it's fully possible to have multiple devices which act as that identity. This might change depending on regulation, though, and is not set in stone.
If you have any more questions I'd be happy to answer them!
Thank you for those excellent answers. I do have a couple more questions if you are interested:
> [companies'] needs will be vetted before.
Is the plan that a single entity offering this centralized product will control not just which users are allowed to have identities, but which companies are allowed to access users' IDs? Presumably there is a somewhat costly process to vetting companies and their requirements, so would companies pay a fixed amount to cover this vetting process, or pay more based on the level of personal information they hoped to receive from users?
> the prospect of decentralized, verified identities is also very interesting.
What type of verification do you imagine being necessary or available for user identities?
> The current product is in the form of an app so you will need a phone but you won't need a phone number (or SIM).
Are there any technologies specific to phones that mean this couldn't run as a web app instead?
> My current system assumes one identity per person but it's fully possible to have multiple devices which acts as that identity.
So if you can install multiple copies of the app on your (Android) phone, you could have multiple identities on the same device?
> Is the plan that a single entity offering this centralized product will control not just which users are allowed to have identities, but which companies are allowed to access users' IDs? Presumably there is a somewhat costly process to vetting companies and their requirements, so would companies pay a fixed amount to cover this vetting process, or pay more based on the level of personal information they hoped to receive from users?
That's the plan, yes. The current pricing structure is to let companies pay a monthly price per active user. They would not be able to pay more to get access to more data. As this is early stages, I'm not sure what the vetting process will look like yet. It's mostly there to ensure that the data companies request is actually needed for their core business and will not be used for tracking. For example, a company can only request the legal name of a user if the law requires them to know it. This might be true for a bank but not for a dating app.
> What type of verification do you imagine being necessary or available for user identities?
The verification we will be performing is at the level required by some laws, for example Know Your Customer (KYC) and Anti-Money Laundering (AML) laws. Our goal is to make Pass suitable for fintech companies which have quite stringent requirements. I can also see lighter forms of verification being good enough for other applications, like the Web of Trust model used by PGP.
> Are there any technologies specific to phones that mean this couldn't run as a web app instead?
Yes. Many modern phones have a built-in Hardware Security Module (HSM) which can be used to store and use asymmetric keys securely. Browser storage can't offer the same level of security currently but there have been some interesting developments which might change this, for example WebAuthn.
> So if you can install multiple copies of the app on your (Android) phone, you could have multiple identities on the same device?
I can't really answer this right now as I'm not sure which way we'll go. It will depend on what regulations require and what we can achieve in terms of verification.
eIDAS is a great initiative but unfortunately I think it's going to take a while before it becomes useful outside government. As you said, many countries don't have well-established e-IDs, and that is what I'm trying to remedy. My main target is not government.
Smart cards don't work very well today, when the average person uses most services through their phone. With the HSMs in modern phones, though, the security of an e-ID app will get very close to that of a smart card, which is really exciting!
How do you plan to ensure that your digital identity system can be used for illegal/criminal purposes, such as aiding Jews and other undesirables in evading lawfully authorized detention and relocation (or by said undesirables for said evasion)?
(I don't mean to single you out specifically; this is something that any digital identity system needs to deal with. And most of them enthusiastically don't.)
Why have you chosen to express this point in such a trollish way? We've already received complaints about it.
It sounds like you're trying to say something about the danger of such systems being abused by the state in holocaust-like situations, but the wording is so odd that it's unclear what your real intent is.
FWIW, I didn't read that comment as trollish at all. It's quite awkwardly written, and it could do with scare quotes around "undesirables" to make it clear that it refers to the view of authorities in their scenario rather than the view of the comment author, but which part of it is trollish?
My understanding of the comment was expressing a concern that some kind of universal identity system would undoubtedly become an agent of the state, which is fine most of the time but in various situations (including holocaust-like ones) means that it could be used as an instrument of oppression, since it would likely be unable to be used by those who wanted to hide some aspect of their identity for safety reasons.
This seems like a legitimate concern for something that has a goal of becoming a global universal identity system. I would hope that something like that would look more like cash - usable by anyone, whether the state likes it or not, with fundamental privacy aspects - rather than like VISA with some privacy bolted on.
I actually specifically omitted the scare quotes since, in such a situation, from the perspective of the identity system creator, the victims would in fact be undesirables.
It's hard to give a reader an example of someone who they actually believe is an Evil Mutant(TM) but who clearly (to that same reader) is an innocent victim, and I didn't bother to try in favor of "well, obviously a non-negligible portion of 1930s Germans agreed with the Nazis".
> Why have you chosen to express this point in such a trollish way?
I am legitimately confused by this question, since it does not appear trollish to me.
I chose to express it that way to emphasize that (some) abuses of a digital identity system would present as lawful actions by authorized law enforcement personnel, and (apparently too tacitly) that "holocaust-like situations" are not a qualitatively different operating regime that the system can assume it is not in during 'normal' operation.
I was a bit worried that it would be perceived as overly confrontational (hence the second paragraph attempting to disclaim that), but I assume that's not what you mean by "trollish".
Problem: A lot of professional jobs involve spending a shitload of time in front of the computer doing nothing but googling or researching on websites, taking notes (or transforming that knowledge into another medium or file), and repeating it all over again the next day until they come up with a conclusion.
Idea: A web browser for self-automation that learns the semantics and sentiment of content on the web, while trying to respect references and articles’ sources to correct the ground truth.
Solution: Still far from it, but I am implementing a peer-to-peer web browser that can share its information (or states) with trusted peers. Trying to implement a recordable, editable, and repeatable GUI for everything, which is a lot harder than it sounds.
Not tried yet, but I’d love to make a JS front-end library to “bring your own storage”, so that if I provide you with an app online (think SPA), it can then save the data wherever the user wants (S3, Dropbox, local disk using an extension, etc.)
Another similar idea is a very simple REST protocol so that you can save to a server, and then make it easy to self-host.
I like the idea of people building apps as web pages without needing to worry about the server, with the user owning their data but having the convenience of a cloud-like solution where you just visit the site, log in, and work.
I'm almost finished with something that can do this, and my hope is that it can be a "browser dissident".
The storage can be shared through BitTorrent, with applications and data (files and a key-value database) already laid out on top of a userspace filesystem layer (the key-value store goes through a SQLite btree).
All the data blocks can be shared through BitTorrent, and everything, including the UI apps and the service application, is shared the same way.
When the dev application is launched, there's a middleman that works like a server instance (the developers have total control over what is handled via RPC and the RPC API layout). It launches RPC services, which the embedded application consumes by opening a URL served/routed through the central host process (which centrally manages applications, processes, IPC, RPC, data, etc.)
I've made this starting from a Chrome codebase, so it's multi-process, with a web engine embedded (used only by the UI application) that you can control much better through your application API, a.k.a. the UI process.
The first language that has access to this is Swift, so devs will deploy, in their shared storages, code/binaries for the RPC handlers (the app instance or service process) and the UI applications, which get full access through an API to the web layer/renderer.
It was hard to get to where I am right now, but I'm almost finished, and it already looks pretty cool.
I think that with a little more work, it can serve to create not just p2p applications, but also distributed ones, with pools of servers working through RPC with help from the p2p layer.
I’ve thought about this concept a bunch and I think there are real commercial opportunities there.
Namely I don’t want to pay Dropbox a markup over S3 and develop a dependency on them. What I prefer is to purchase the software from Dropbox and self host/store it wherever I want. Same goes for email, office suite, photos, etc.
It feels a bit like hard work to use. They seem to have support for Dropbox but just as an afterthought and not the main thing. The main thing means running your own server, but it doesn't look easy to set up.
I want to think of an approach that is really easy to use so it becomes popular for that reason.
Let’s say you have multiple people who want to share their location with each other, but to different degrees of precision. Some people might be willing to share their street address, while others only want to share the city that they live in. Some only want to say “Northern New Jersey”, or “Southern France”. Some might want to just share their timezone.
How do you store this information in a database, in a geographically meaningful way? How do you represent it on a map?
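One way to store this (my own suggestion, not something from the question) is a variable-length geohash: each character of the hash subdivides the cell further, so a coarse location is simply a prefix of a finer one, and each person stores only as many characters as they are willing to reveal. A minimal sketch:

```python
# Standard geohash base32 alphabet; the encoding interleaves one bit of
# longitude with one bit of latitude, 5 bits per output character.
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat: float, lon: float, precision: int) -> str:
    """Encode a lat/lon pair; more characters = finer precision."""
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    code, ch, bit_count, even = [], 0, 0, True
    while len(code) < precision:
        rng, val = (lon_range, lon) if even else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2
        if val >= mid:
            ch = (ch << 1) | 1
            rng[0] = mid
        else:
            ch <<= 1
            rng[1] = mid
        even = not even
        bit_count += 1
        if bit_count == 5:            # emit one base32 character per 5 bits
            code.append(BASE32[ch])
            ch, bit_count = 0, 0
    return "".join(code)

# ~3 chars is a ~150 km cell ("Northern New Jersey"-ish),
# ~6 chars is roughly a 1 km cell, ~8 chars is street level.
city_only = geohash(40.7128, -74.0060, 3)   # "dr5"
street = geohash(40.7128, -74.0060, 8)      # starts with "dr5"
```

Because coarse cells are prefixes of fine ones, "who is near this area" becomes a plain prefix query in a database (e.g. `WHERE hash LIKE 'dr5%'`), and rendering on a map is just drawing the bounding box of each person's cell.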
I’m working on a formula that factors ML model run-time cost (speed / memory-usage) into its performance evaluation. I generated a very simple scoring function from economic first principles. Now, I’m trying to test performance on cumbersome transformer models published by Google / FB / Microsoft etc in order to prove that many of these models are not “state-of-the-art” if run-time cost is taken into account.
Working on building a self-hosted app that would allow you to save, organise and search your knowledge in one place.
It would contain information like notes and bookmarks (it would download the links' contents) and in general provide a programmable, open-source interface to preserve the info you'll find useful, and even sync with external APIs to save your online presence locally (think reddit posts, HN links, etc...)
Wow, that is quite similar! Mine is a bit simpler but insists on providing flexibility with different APIs, so you can sync your online presence locally while also having an easily programmable interface.
I’m trying to figure out how to ship DNA effectively. It’s really inefficient right now, so I’m genetically engineering bacterial spores to make it a lot better. Not sure if people will adopt, but at least it’ll be 10x better than what is currently available!
Exotic in this case just means uncommon or rare in the wild, not expensive. Something like requiring both salt and aspartame would help prevent gene transfer in the wild (to related or unrelated strains) as well as making it unlikely to grow on its own.
I’m trying to map relational algebra onto Rust’s type system. If I’m successful, I’ll have a bunch of collection types with different performance characteristics that are all drop-in replacements for each other.
Our two projects actually look like they have very different goals and approaches: I’m aiming for a rigorous solution for small design problems, like needing to add an index to a data structure that wasn’t designed for one— nothing really to do with databases proper. I need the overhead for these simple cases to be low, so I’m trying to do as much as I can at compile-time inside the type system.
You’re doing everything at runtime (at least in Rust), which is a more flexible approach but can’t do things like trigger a compile error when attempting to access a field that isn’t present in a record.
Not right now; I haven’t set up a repository for it yet; I’ll certainly post it here when it’s ready to show.
I’m working from the bottom up and am almost ready to start working on collections— I started with newtypes to represent columns and then tried to get them to combine together properly.
As of today, I’ve got what looks like a good scheme to treat arbitrary objects as records, and can do joins, projections, and column renames with the type system keeping track of which objects have which columns.
The plan for tables/collections is to implement a sequential-scan interface backed by something simple, and then add wrapper write-through index objects to speed up particular query types.
Trying to predict where electrons go in a molecule, but using a classical model. This can be done with supervised machine learning - you can use quantum mechanics to get lots of labeled data - but it's a tough problem, because chemical physicists have very high standards for accuracy.
I work at Schrodinger Inc, collaborating with the people who make TorchANI (https://aiqm.github.io/torchani/). This currently predicts only molecular energies in the public codebase, but I expect they will add electron density using the same framework.
Currently I am using hybrid DFT (wB97X-D) with plans to move up to wB97M-V and possibly Quantum Monte Carlo. The TorchANI group likes wB97X and up-training with DLPNO-CCSD(T).
Density functional theory is still quantum mechanics, but operating on the expected electron density itself, rather than the many-body problem of all the electrons. It's pretty good, but not fast - around a CPU-minute for a medium-sized molecule.
I'm working on approximating the electron density using just the nuclei positions and some neural networks. The throughput is tens of thousands of times higher than for DFT. But since the model itself contains little or no physics, the training data has to be very clean and complete.
Would you be able to leverage any of the work that's being done around so-called "neural ODEs"?
The Julia community seems to be doing some really cool work at the intersection of the two fields, and it seems like it could be useful if you've already got some kind of pre-existing model/structure to hang the ML part off:
Not really - the generated code isn't some mystic code; it's simple templates.
I originally got there by making a complex magic data structure that held relations to everything in multiple dimensions so I could generate a huge amount of stuff, but that turned out to be just like a 4GL - a load of slow confusion. The reason I am doing it is exactly the ML/AI hope - with enough data and proper structures, I can generate a lot.
Unfortunately there seems to be a lot of infantilizing in and around the ADHD sphere. Either we're treated like helpless children or we are encouraged to lower our expectations/goals.
It's hard to explain unless you've been open about having ADHD, or have been part of the community. The zeitgeist revolves around accepting and maintaining a status quo. The problem I'm trying to solve is how to build a growth/thriving lifestyle despite an ADHD diagnosis.
Hey this is cool. Would totally use this. Just got diagnosed at 28 and trying to implement meditation, yoga and better work habits so I can get the most out of the medication and maintain the lowest possible dosage
It sounds like you're already on the right track. Meditation and mindfulness are the foundation of building the right strategy. I'm still appalled that neither my therapist nor my ADHD specialist ever told me to go beyond medication.
I was very fortunate to meet some of the people I spoke to. Here's a few off the top of my head:
- Circus clown/performer
- Microsoft senior engineer (or whatever they call them these days)
- ADHD coach
- A few CEOs
- Product designer
- Habits coach
There are a few more who wanted to stay anonymous, but you get the idea. The most interesting part was how similar all of the success stories were. I think ADHDers have a lot more in common than we think.
Online learning, particularly personalised learning aimed at self-education and continuous learning. I'm trying to model the knowledge space as a graph, index learning material (free online resources right now, then user-generated content) on this graph, and then provide different ways to navigate this space. I'm trying to address the "best way" for someone to learn a specific concept, and to help people identify the knowledge they're missing or looking for.
The project would be open, collaborative and non-profit, btw.
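As an illustration of the graph idea (the concepts and prerequisite edges below are invented by me, not taken from the project), one simple navigation primitive is: given a target concept, collect its transitive prerequisites and order them so everything is learned before it is needed:

```python
from graphlib import TopologicalSorter

# Hypothetical prerequisite graph: concept -> set of prerequisite concepts.
# The real project would index actual learning material onto such a graph.
PREREQS = {
    "linear regression": {"linear algebra", "statistics"},
    "statistics": {"probability"},
    "linear algebra": set(),
    "probability": set(),
}

def learning_path(target: str) -> list:
    """Return the target plus its transitive prerequisites, ordered so
    every concept appears after the concepts it depends on."""
    needed, stack = set(), [target]
    while stack:
        concept = stack.pop()
        if concept not in needed:
            needed.add(concept)
            stack.extend(PREREQS.get(concept, ()))
    subgraph = {c: PREREQS.get(c, set()) & needed for c in needed}
    return list(TopologicalSorter(subgraph).static_order())
```

The same structure also answers the "what am I missing?" question: diff the path against the concepts a learner has already marked as known.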
I will soon begin working on a somewhat similar problem involving personalized self-education. So far I've thought of it as a search engine for YouTube which allows a user to look up something like "home improvement" and then builds them a "course" comprised of the most relevant YouTube education videos that they can follow along with. There would be an element of self-curation and social media (voting and reviews of courses).
Anyway, good luck with your project it is very interesting and I'm sure you'll learn a lot!
I will be honest: I really don't get it. How come I am working harder, and more hours, and studying more, but I don't earn more? It should have been a direct correlation. Shouldn't technology and automation and all of this good stuff make us work less but earn more?
This has been a philosophical concern as well as an economic one for a while now. John Maynard Keynes had some notable thoughts on the future of work leading to a utopia of less work and the same/more reward (and he was incorrect).
But rather than get into the political/economic/philosophical argument, there might be an individual solution: people don't pay for what takes the most work, they pay for what is in the most demand. These are often not the same thing, and you can leverage that.
I am working on a system to parse the English language using a hand-written compiler and then store the IR in a database, so all human knowledge can be searched and queried in all its facets (who said what when and where) using English language. I believe that a database is the key to NLU, and machine learning is mostly useless for true NLU, because machine learning currently has no good way of interacting with a database AFAIK, and without the knowledge of a full database of human knowledge and knowing who said what it's impossible to truly understand human language. Storing all human knowledge in a neural network just isn't practical anytime soon.
I wrote a new programming language, Eek, because it was impossible for me to handle the complexity of doing it all with current languages (which lack built-in support for asynchronous database access and parsing). So far the first generation of the programming language is working, but as an interpreter written in TypeScript, and I wrote an English language parser with it, and a simple database. Now I am working on a better, LLVM-based implementation of Eek. I started this thing about 3 years ago, and it will take some more years before this will be even demoable...
I've been tinkering in the reverse-engineering space. My problem amounts to reusing compiled binaries by combining them in novel ways. That is, I would like to take algorithm/subsystem X from software Y, combine it with something else. The goal is to have a library of components which I may be able to combine.
It has led me to investigate a few technologies I have been meaning to invest time into, like LLVM and QEMU. There are a few projects which combine these, as well as related technologies, like DECAF, radare2, DynamoRIO, and McSema.
The hard problem I am facing, and by no means have an effective solution for, is extracting the semantic essence of the program in spite of the ISA, and finding a balance between emulation and re-adaptability (i.e., abstracting out the code that depends on some base-address assumption).
On the surface, the value seems counterintuitive given the investment, especially from my roots in SWE, where one can have a hard enough time trying to accomplish that with source code available. Although the application is broad, I've focused intermittently on video games. I believe this is where some value lies, as a finely-tuned subsystem can be the heart of a franchise.
Very interesting. I'm trying to do something closely related for other reasons.
One thing I'm thinking of is that it might be possible to brute-force the semantics of short snippets of code using genetic algorithms. A similar technique has been demonstrated a few times by the author of .
I want to use this to eventually rapidly search a large number of binaries for insecure behavior. But to do that I need to be able to formulate questions like:
"Find me a function where attacker controlled data is marshaled to a size type and then used to allocate memory, to which a different attacker controlled amount of attacker controlled data is written".
UDP DDoS mitigation. It'll be important for protocols like HTTP3/QUIC, where decrypting every packet is costly and malicious packets can't be identified until after the application tries to decrypt them.
The idea is to assign a random 8- or 16-byte number (DdosID) to each connection. The DdosID is unrelated to any other identifier, like the HTTP3 connection ID. The client puts the DdosID at the beginning of every UDP packet's data section (raw; no encryption or compression). Packets that have a valid DdosID are processed. Packets with an invalid DdosID are dropped. New connections use zero as the DdosID. Any client can use zero to re-establish a connection or if something goes wrong.
If attackers make up random DdosIDs, the packets will be dropped before decryption. If a valid DdosID (or hundreds of valid DdosIDs) is used, the volume of packets with that DdosID will make the attack obvious, and the DdosID can be invalidated and the packets dropped.
Attackers will need to spam new connection attempts (DdosID zero) to deny service. New connection attempts could be dropped, or a small percentage allowed when the application can handle them. The rate of allowed new connection attempts could be adjusted very easily and quickly, but a high rate of bad packets would still deny new connections. Existing connections continue to work.
The challenges are mostly creating functional thresholds: What packet volume is too high? How long is a DdosID valid? Should a load balancer record the IP and port, and force a new DdosID if they change?
I would also like to see a way that applications and load balancers can use a DdosID system without it being dependent on the specific application; DdosID should not depend on the protocol it's protecting.
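A sketch of the receive path as I understand it (the class name, the 8-byte size, and the volume threshold are all my own placeholders, not specified above):

```python
import secrets
from collections import defaultdict

MAX_PACKETS_PER_WINDOW = 10_000   # assumed tuning threshold
NEW_CONNECTION_ID = b"\x00" * 8   # reserved "zero" DdosID

class DdosIdFilter:
    """Check the 8-byte DdosID prefix of a UDP payload before doing
    any expensive decryption work."""

    def __init__(self):
        self.valid = set()               # currently issued DdosIDs
        self.counts = defaultdict(int)   # per-ID packet volume this window

    def new_connection(self):
        ddos_id = secrets.token_bytes(8)
        self.valid.add(ddos_id)
        return ddos_id

    def accept(self, packet: bytes) -> bool:
        ddos_id = packet[:8]
        if ddos_id == NEW_CONNECTION_ID:
            return True                  # new-connection path; rate-limit separately
        if ddos_id not in self.valid:
            return False                 # dropped with zero decryption cost
        self.counts[ddos_id] += 1
        if self.counts[ddos_id] > MAX_PACKETS_PER_WINDOW:
            self.valid.discard(ddos_id)  # flooded valid ID: invalidate it
            return False
        return True
```

The useful property is that the cheap set lookup happens before any crypto: a flood of garbage IDs costs the server almost nothing, and a flood on one valid ID burns only that one connection.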
Half of Indian farmers do not have access to institutional/formal credit. They end up borrowing from local loan sharks at extremely high interest rates. The vicious cycle continues as they do not have access to markets favoring their produce's selling price, and end up being exploited.
Most banks, despite their mandated percentage of loans for agriculture, are not too keen (and are unable) to work closely with farmers.
By leveraging technology and international connections, we help farmers by giving them access to credit at a favorable interest rate. We work with the international community, such as the Japanese, to lend their capital on our platform, thus helping the underserved farmers. In turn, the Japanese investors get returns much higher than their national savings interest rates.
Banks too can benefit from our ability to deploy their cash.
## Why Us, Why Now
I have struggled to articulate the philosophy/motto on how to help the farmers, as they are the most exploited populace. I'm zeroing in on -- "BE KIND".
If you want to hear more, contact me and I will send you our Executive Summary and/or Pitch Deck. We are fund-raising.
I'm doing it in podcast form because it's 2020. This is going to be multiple years and I'm totally fine with that.
I divide the history up into multiple facets each with separate timelines.
Currently I'm going all the way through "electric communications" from the electric spark up through relay networks talking about switching, encoding, error correcting, signalling, all the important developments along the way.
In 2021 I'll close that out and do the same for storage (music boxes, looms, etc.) and computation, starting with clocks and Pascal's mechanical calculator and going forward from there: mechanical registers, overflow, adders, etc...
Each one is going to take at least a year or so.
I'm already about 2 months into recording, about 6 months into the project. This week will be Du Fay, Bose, Desaguliers, Watson, and the electric wire. Next week will be Leyden jars. This is a long, slow project, and AFAIK it has never been done before.
I'm doing the odd episodes as a timeline and the even episodes as diversions and discussions to keep things entertaining and light.
What an ambitious and important project! I love the concept, and that it is meant to be a "long, slow project."
It probably should be paced slowly due to the scope and complexity of material. I think a slower pace should also help "smooth" the exponential change rate of technological progress into a narrative listeners can follow, without sacrificing too much detail for the sake of listenability.*
I'm going to sub right now.
*I listen to a lot of podcasts, and this happens more than it should, IMO.
Edit: I haven't seen a podcast with its own git repo before, but it makes a lot of sense, and I imagine it is immensely helpful as a creator. I took a quick glance to satisfy my curiosity, and the notes I looked at (show notes) were comprehensive and really well done. This is inspiring, no joke; I have renewed enthusiasm for my own podcast-to-be. Thank you for that!
I've also cleared out a closet and installed acoustic foam. I tested a number of microphones... I'm sure acting like it's a real thing. Hopefully I'll keep at it.
The hardest part has been trying to get it done on a weekly schedule. I don't have this week's done for instance. There's two parts I really want to fit in and I need to find a place for them. I'm going to try to take a nap and get up in an hour or two and work a bit more.
I don't know if I'll be able to pull the weekly schedule off honestly. I'm going to try to sweat it out a bit longer and hopefully I'll get faster as I accumulate more notes, references and materials.
>It's all scripted. Google books and archive.org have been great resources.
Do you use any off-browser tools to navigate archive.org / Google Books? If so, which? Or have the websites always sufficed?
>There's also a couple torrents of 17th-19th c. journal articles.
Could you expand a bit on this? This sounds very interesting for a project of mine! Where did you find torrents of 17th-19th c. journal articles? It sounds like the copyrights on these would have expired. Did you find a human-curated database of journal articles from this era? How comprehensive is it: all journals of this time span?
(one of the things I would like to do is map out the transition from alchemy to chemistry, for example WP states Lavoisier convinced the scientific community that sulphur was an element and not a compound, which sounds utterly bizarre for essentially all of us who were exposed to chemistry as a deductive system without the abductive reasoning that led to it)
>I've made my own sqlite databases from the meta information and ran ocr over them to make things searchable.
That is pretty amazing work!
> I'll be putting more of these tools in the repo and eventually put the search systems online for the general public. Google's navigation of annualised volumes is a joke so I'm going to do better.
I'll just reiterate that I find it quite impressive. The above just reinforces that.
I mentioned that I listen to a lot of podcasts; have been, in fact since ~2006. I've watched the evolution of the medium over the years, have seen many pods come and go, and have witnessed many content creators experiment with different methods of attaining/maintaining/interacting with listeners.
Later on, as the form started to mature and gain more of an audience, the issue of monetization became more and more important and necessary; I've seen many experiments focused on that facet as well. This was still before "podcast" had entered the vernacular for most people I would classify as tech-adjacent--the general public was still not really aware of podcasting. Those who were didn't exactly advertise their listening habits yet, and podcasts were almost like an embarrassing thing one did in private.
You've done so much correctly, at least from my observations over the years, from the get-go.
If some of your ideas come to fruition, it would be a boon to those who'd like to make a podcast of their own, especially if technical or academic in nature. I love the work you've done, appreciate the open distribution of your expanding toolkit, and also want those auto-hyperlinked footnotes a whole lot.
I am doing what I can to encourage you for two reasons. One, I am personally interested in the subject(s) being covered, and I think projects like this are needed if we aren't going to lose a lot to the mists of history in the face of accelerating acceleration, to borrow a phrase. Two, I want the tools for my project and do not possess the requisite skills to make them myself, although I'm working on it.
Yeah, I'm trying to target the general (curious) public. Like Carl Sagan Cosmos level (or soul of a new machine, accidental empires, masters of doom, etc). I'm still finding my element. Hopefully it will be generally enjoyable by all and genuinely 80-90% new information per episode to people who aren't scholars or academics.
Simply put, earning a living. I didn't travel the conventional path of high school > university > job, and so I'm struggling to get my shit together; the concept of earning a living (outside of making minimum wage) is so foreign to me that I really don't know how I'm going to do it in a reasonable (< 5 years) amount of time.
I’m working on a way to design free standing domes using mortar-less bricks. The idea is to build a cut-list for each brick, such that when stacked, there are no gaps, and the entire thing is held in place with just gravity (maybe the first layer is permanently attached to the ground)
I’m looking for a good framework to simulate the physics and visualizations.
My father-in-law lived in Mexico and had his home custom-built. He was fond of brick domes (with mortar). I can remember being in my wife's room with her staring at the ceiling when we went to bed wondering how the hell they managed to make that roof without the whole thing collapsing.
Although the first release is not officially out yet, the NodeJS code is working and you can install the development version of the app server and try out the hello world app locally.
The solution involves running Mozilla DeepSpeech inside an Electron desktop application with a websocket server and client API that NodeJS scripts can interact with, to receive speech recognition results, utilize "alexa" style hotword commands, and text-to-speech. The electron app handles all the heavy stuff, and you just use a simple API.
A web browser extension can also make use of this API to bring these capabilities to web sites, but that part isn't finished yet.
Web Page <-> Bumblebee JS API <-> Bumblebee Extension <-> Bumblebee Electron App (DeepSpeech)
DeepSpeech with the pretrained English model is enormous (1.4GB); it's not feasible to load it into a web worker. It can run on a server, but then every website would have to run its own server-side speech recognition servers, which is difficult and expensive to scale.
JHipster style templating for SAAS onboarding design patterns.
UX description language for forms that respects high level constraints. Compiles to desktop browser, phone browser, and Alexa layouts.
Solving the complexity of matrix-matrix multiplication by brute-forcing the lower bound with semigroup combinatorics.
DSL for linear logic.
Ending Iowa’s criminalization of “annoying” speech. (Iowa Code 708.7)
Exposing Polk County Iowa Sheriff Kevin Schneider torturing inmates with denial of basic medical care.
Exposing pure nepotism corruption between Iowa Attorney General Chief of Staff Eric Tabor and his sister Iowa Court of Appeals Judge Mary Tabor (mom of @ollie).
Exposing that the prosecutor on Tracy Richter’s murder trial had relations and ended up marrying the daughter of his star witness Mary Higgins - and that the blood spatter expert Englert is a known fraud who wrongfully convicted David Camm and Julie Ray Harper.
That never worked for me; I always needed some sleep around midnight. Usually I used to be up around 0400, but the last few months it has been more like just before the kids wake up.
My holiday started on Friday afternoon, and this morning I woke up 20 or so minutes ago, around 0230, but I will probably sleep some more around 06-08 to be able to do anything meaningful after my kids get up.
(And that is probably a good tip: if late nights don't work for you, try very early in the morning.)
After the Twitter account of DDoSecrets got shut down (due to BlueLeaks), this got me thinking: how would you leak / provide data while not being directly attributable? (At least like a retweet - not your tweet, just amplified.)
And how do you add some resilience and protection to the distribution, since there were indicators that the torrent and download of the leaked data were being attacked?
So far, I have come up with an encryption matryoshka: you distribute leaks without telling what's inside, and gradually enable a few people to look inside until the contents are public.
All that's missing is a better document describing it and a command-line tool to help walk through the multi-level encryption ... so there is 90% still to do ¯\_(ツ)_/¯.
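To make the matryoshka idea concrete, here is a toy sketch of the layering. The XOR "cipher" is deliberately a stand-in and NOT secure (a real tool would use an AEAD cipher such as AES-GCM); the point is only the structure: the payload is wrapped in several layers, and the keys are revealed one at a time from the outside in.

```python
import hashlib
import secrets

def toy_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream built from SHA-256. NOT secure - a placeholder
    for a real cipher, just to demonstrate the layering."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def wrap(payload: bytes, layers: int):
    """Wrap the payload in `layers` layers; keys[-1] is the outermost."""
    keys, blob = [], payload
    for _ in range(layers):
        key = secrets.token_bytes(32)
        blob = toy_xor(key, blob)
        keys.append(key)
    return blob, keys

def unwrap(blob: bytes, keys) -> bytes:
    for key in reversed(keys):   # peel from the outermost layer inward
        blob = toy_xor(key, blob)
    return blob
```

You publish the blob widely up front, then release keys[-1] to a small circle, keys[-2] later, and so on; each released key lets one more ring of people peel one more layer, until the last key makes the leak public.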
Not really sure what this is trying to solve. If it's some legitimate leak of public interest then the organizations active in this space often have a tip line / encrypted drop where you can put it.
If it's something that's not of interest to many but you want to put out there for some reason then what's stopping you from uploading it to some random one click hoster and posting the link in random places on the internet?
Keep tables balanced; move players from high-numbered tables to the lowest-numbered tables; try to move players to similar positions; and mark players as waiting if they drop into the small blind, skipping the button next hand.
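The first two rules (keep tables balanced, drain high-numbered tables into the lowest-numbered ones) can be sketched as a small loop; seat positions and blind handling are deliberately left out of this illustration:

```python
def rebalance(tables):
    """Move players from the fullest high-numbered table to the emptiest
    low-numbered table until all table sizes differ by at most one.
    `tables` maps table number -> list of player names."""
    while True:
        # Among the fullest tables prefer the highest number (to empty it);
        # among the emptiest, prefer the lowest number (to fill it).
        hi = max(tables, key=lambda t: (len(tables[t]), t))
        lo = min(tables, key=lambda t: (len(tables[t]), t))
        if len(tables[hi]) - len(tables[lo]) <= 1:
            return tables
        tables[lo].append(tables[hi].pop())
```

Each iteration moves one player and strictly reduces the imbalance, so the loop always terminates.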
I would guess that you still get to 2—because the more times you add 2, the greater the impact of dividing by 2, and the more times you divide by 2, the greater the impact of adding 2. Is that the case?
If you run the random process many times and then look at a histogram of the vector of outputs, the distribution peaks somewhat higher than 2, but the output of the random process does not tend towards any particular number or sequence of numbers.
Indeed, the distribution looks highly discontinuous and fractal. It's a very intriguing question. I suspect thinking in terms of rings (abstract algebra) would be helpful.
 "somewhat higher" - I think this is because the sequence "add two then divide by two" gives you back your input when you start with 2, while "divide by two then add two" does when you start with 4; the distribution should peak between these two "stable" points. Exactly where the distribution peaks probably depends on your histogram bin width.
Working on siuba, a data analysis tool for Python. It's a port of the R library dplyr, and it can produce SQL queries!
I've programmed in python for much longer than R, and really want to be able to move at the same speed when using python for data analysis :o.
It's a weird problem though because the two languages have basically opposite approaches to DataFrames. pandas has a very fat DataFrame implementation, R an extremely minimal one. (Pros and cons to both approaches).
This is a hard problem. It may not even be perfectly solvable at any time in the future.
When you increase the size of any kind of raster image, you're creating new information from the old.
There have been some pretty good approaches out there, like this. It uses a GAN topology for some impressive results, but is incredibly memory intensive and can take a very long time to run.
I've been working on something for a good long while which is a less expensive approach. Instead of attempting to replicate everything 1:1, it intentionally allows some detail loss, whilst attempting to preserve everything important.
It's not ready for the public, and video still needs some significant improvements to remove some of the artifacts. But I've released one TV series upscaled with it so far.
But as it stands, at 2x and 2.5x scales, it does pretty well, with the average person preferring it to most other resizing methods. It doesn't reach the GAN approaches' quality, but you're looking at an average of 12 seconds for upscaling, versus 80 seconds for the GAN approach at the same scale and what people perceive as the same quality.
It already beats most of the traditional resizing algorithms pretty soundly. 
Would Fremen Stillsuits like in the Dune novels actually be possible?
These suits allow one to survive for weeks out in the deep desert by catching and recycling all of the body's lost water. Making sure no perspiration can escape is doable. Filtering the sweat to produce clean, salt-free water should be possible as well; membranes for water desalination already exist today. Having all the required pumping action provided by walking and breathing is a mechanical problem that should be theoretically solvable. The big, unsolved problem I see is heat.
The book says that the suit's layers closest to the skin allow the sweat to evaporate and thus provide cooling to the body. But the water then has to condense again somewhere. From my (limited) understanding of the laws of thermodynamics, the amount of extra heat created through condensation should be exactly equal to the amount of cooling the evaporation provides, making the whole cycle a zero-sum affair. And for this cycle to work in the first place, the skin would have to be at a higher temperature than the layer where the condensation occurs. If the desert heat is above body temperature, we'd need some sort of heat pump, like in a fridge. Using changes in pressure and density (through a compressor), you could cool the suit's interior below body temperature while heating the outside above ambient temperature - which is necessary to actually give off heat to the outside.
Those compressors are heavy and power-hungry though, and you'd need additional high-pressure water lines through the suit - increasing the bulk of the whole thing considerably. Future technology might be more miniaturized and more energy efficient, but still... Maybe piezoelectric/thermoelectric cooling (exploiting the Peltier effect) would be a better choice for a suit like this. Then you'd have a light inner layer that allows for air circulation - so that the sweat can evaporate on the skin and condense again at the thermoelectrically cooled middle layer - where it's collected and pumped away to the membrane filters and catchpockets.
The outer layer of the suit would have to be made of flexible solar cells, in order to provide the electricity required for cooling the middle layer. Not sure if that could work though. Those solar cells produce a lot of heat on their own, and they sit right on top of the heat-producing side of the thermoelectric cooler. And the sun is burning down on it as well - that's a lot of heat right there at the outer layer. I don't think thermoelectric cooling can overcome that large a temperature difference. It works best when the temperature difference between the cool side and the warm side is pretty small.
How much heat a material can store is called its thermal mass or heat capacity. One of the materials that can store the most heat per kilogram happens to be regular old water.
I wonder if anyone here could do the math of how many kilograms of water would be needed to absorb the day/night temperature changes of a desert and keep a human body inside at an acceptably stable temperature.
I'm kinda afraid the weight would be more than enough to crush any human into a bloody pulp?
The most effective way to store a whole lot of heat in very little mass is to use phase changes - some materials take an enormous amount of energy to transition from solid to liquid and vice versa. This can be used to pack a lot of thermal energy into very little volume with very little temperature change.
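To put rough numbers on the parent's question (all assumptions mine: ~100 W of resting metabolic heat absorbed over a 12-hour desert day, a 15 K allowable coolant temperature rise, and a paraffin-like phase change material with ~200 kJ/kg latent heat):

```python
# Back-of-envelope: how much mass would buffer a body's heat output in a desert?
C_P_WATER = 4186.0        # J/(kg*K), specific heat of liquid water
PCM_LATENT = 200_000.0    # J/kg, typical paraffin wax melting enthalpy

def water_mass(heat_joules, delta_t_kelvin):
    # Sensible heat storage: Q = m * c_p * dT
    return heat_joules / (C_P_WATER * delta_t_kelvin)

def pcm_mass(heat_joules):
    # Latent heat storage: Q = m * L
    return heat_joules / PCM_LATENT

heat = 100.0 * 12 * 3600   # 100 W for 12 h = 4.32 MJ to absorb
print(water_mass(heat, 15))  # ~69 kg of water
print(pcm_mass(heat))        # ~22 kg of phase change material
```

So even with generous assumptions you're hauling tens of kilograms either way, which supports both the "crushing weight" worry and the phase-change suggestion: the PCM does the same job at roughly a third of the mass.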
Telling where the output of an individual stdout print statement ends and the next one begins, so that I could color code my terminal with alternating colors to more easily tell apart individual log messages from a running process by visual differentiation. Turns out this is impossible! Aside from time passing in between outputs, there's no way to tell. It's just a continuous byte stream with no terminating character or pattern.
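The one signal conceded above, time passing between outputs, can at least be turned into a heuristic. A sketch of grouping timestamped pipe reads into "messages" (the 50 ms threshold is an arbitrary assumption):

```python
def group_chunks(chunks, gap=0.05):
    """Group (timestamp, data) pairs read from a pipe: a silence longer
    than `gap` seconds is treated as a message boundary."""
    groups = []
    last_t = None
    for t, data in chunks:
        if last_t is None or t - last_t > gap:
            groups.append(data)   # start a new message
        else:
            groups[-1] += data    # continuation of the previous one
        last_t = t
    return groups

# Each group could then be printed in an alternating terminal color.
print(group_chunks([(0.00, "line 1\n"), (0.01, "line 2\n"), (0.30, "line 3\n")]))
```

It's only a heuristic, of course: a fast producer will fuse adjacent print statements, which is exactly the impossibility being described.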
Can you elaborate on this? Are you saying these tools could be used to wrap a running process in such a way that you could tell apart individual print statements during runtime? (I haven't used the tools you suggest.)
You can use Hall effect sensors to sense chainwheel teeth. Two sensors and you can get quadrature and direction. I've done that on a mobile robot. With analog-output Hall effect sensors and some processing, you can get sub-degree precision, although 0.05 degree is asking a lot.
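For reference, the two-sensor quadrature idea boils down to a tiny state machine over the Gray-code sequence of sensor states; a sketch (sensor readings packed as two bits, names mine):

```python
# Valid transitions for two sensors 90 degrees out of phase (Gray code order).
TRANSITIONS = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(states):
    """Count signed quadrature steps from a sequence of 2-bit sensor readings."""
    pos = 0
    for prev, cur in zip(states, states[1:]):
        pos += TRANSITIONS.get((prev, cur), 0)  # unchanged/invalid pairs are ignored
    return pos

print(decode([0b00, 0b01, 0b11, 0b10, 0b00]))  # one full cycle forward: 4
```

With N chainwheel teeth this gives 4N counts per revolution; the analog-output trick mentioned above interpolates between counts for finer resolution.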
We're trying to measure aerodynamic drag, thing is there are huge vertical forces (weight of rider pressing down on pedal) and relatively small horizontal forces (due to drag). So if we don't know the angle accurately, we can't separate out the components - make sense?
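To illustrate why the angle matters so much: a small error in the assumed angle rotates the measurement axes, leaking a slice of the large vertical force into the horizontal channel. A toy calculation (the 800 N pedal force and 20 N drag figures are my assumptions, not the poster's):

```python
import math

def measured_horizontal(f_vertical, f_drag, angle_error_deg):
    """Horizontal force you'd read if your notion of 'vertical' is off by angle_error_deg."""
    e = math.radians(angle_error_deg)
    return f_drag * math.cos(e) + f_vertical * math.sin(e)

leak = measured_horizontal(800.0, 20.0, 0.05) - 20.0
print(leak)  # ~0.7 N of spurious "drag" from just 0.05 degrees of angle error
```

So even a 0.05 degree error corrupts the drag reading by a few percent, which is why the requirement is so tight.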
Gyroscope or mems accelerometer. Otherwise convert rotational crank movement to push-pull linear slider (see piston, flywheel, linear actuator, etc) and measure linear change in resistance between contact points (like a cheap caliper does, you can even use a chinesium one off eBay as they have a pin out under the battery tab, if your resulting linear motion is within that scale). You don’t need to drive anything besides the read head, so it should have extremely low losses.
Thanks for the reply, and do you already work for us? :)
We've burnt through a pile of MEMS devices and are currently working with CRM100 gyros. As far as I've got: we definitely need a non-contact solution, either optical or MEMS, with a once-per-rev index pulse (which obv. needs to be within the same 0.05 degree requirement).
I’ve been tackling my own issues with ideas that are sound on paper for physically small products where I run into mechanical or industrial limitations/difficulties which I’m not (or at least certainly didn’t start out) well-versed in rather than any electronics or programming issues. I hit a brick wall years ago (asked here on HN in the distant past but didn’t get anywhere) and actually never progressed past that.
I’m not sure what issues you ran into with the MEMS but my instinct (if I were forced to use them, as they are still rather nascent tech) would be to use multiple physically spread out on a plane and then clean up the signal in software. I’m not sure what price point or “genericness” you’re looking for, my assumption is that you need something that a customer can install or have installed on any bike and you don’t have the luxury of making your own bike parts because otherwise you could basically make it a stepper motor in reverse if you have access to the very axis rather than your device being off-center.
If you can mount something on the shaft itself (externally coupled at the end or by replacing it or other parts) your options become a lot more reliable as you’re in “tried and tested” territory.
Other options (warning: brain dump not thought through) would be an LVDT (see earlier piston suggestion) as they’re basically infinite life, contactless, and can give you micrometer accuracy; if you can somehow determine the distance between two points and have a sensor at a fixed position on each pedal so you can get the distance between them; coating the crank with a reflective substance and using IR sensors to get the distance at arbitrary angles (I’m pretty sure IR raw output can give you the accuracy you seek, though PIR definitely won’t); and a crazy idea based off a tilt switch of using a conductive liquid in a sealed non-conductive cylinder with two strips of differing conductivity on either (long) side, calculating the resistance with a high precision ADC could easily give you what you are looking for but I can’t find any off the shelf parts that do this (which is weird since it seems to be very obvious and is free of moving parts).
Good ideas though, we've come across the multiple MEMS approach (someone put, I think, 16 on a single PCB to get 4 times the resolution). The CRM100 we're currently using is the best we've found so far for our application.
We've been using liquid-level inclinometers for calibration and lab testing; unfortunately they are too slow for use on the road.
LVDT's and similar would be great, however the engineering cost (at bicycle scale) puts them out of reach, which is the same for precision optics for IR solutions. Plenty of $1k+ encoders out there but way over budget for us!
Good to get your attention :) but still got to print the gray code to 1 part in 8192 and fit it all on a bike.
Along the lines of your idea, we've looked at using the CCD array out of desktop scanners with a spiral pattern printed on the crank. But bike bottom brackets are a hostile environment, plus (ideally) we'd actually be measuring the angle against local vertical (i.e. the direction in which gravity sucks).
Our test bike currently runs 4 Pi Zeros, 4 Cortex M0s (nRF52), a fistful of 24-bit ADCs, 2 high-precision gyros, air pressure/airspeed sensors and a 2.4GHz network, all in order to measure realtime CdA. Still unable to crack the angle problem though.
Indeed, manufacturing is hardly a problem in the software world. =)
Maybe get it laser engraved into aluminium and mount it... somewhere? Grey code is essentially a more robust spiral pattern, so that's the same idea.
What if you put a roll sensor on the chain instead and infer crank angle from that? Chain slip is rare enough not to worry about it, right? You'd also have more space and could use off the shelf components.
I don't think measuring against gravity will be precise to one part in 7200. So you'd need a second sensor for that in any case (and you likely already have one, don't you?).
Brainstorm here - could you put a number of lower resolution encoders firmly on the same shaft? The theory would be they'll all pulse on slightly different but consistent phases.
Simple example, an encoder is capable of 60 degree resolution. I'll mount 5 of these which will all have some variation in phase, and I'd average out to having 12 degree resolution, with variation that could be modeled statistically.
Conceptually I would have a shaft with n different key slots, one per encoder. This would ensure there will be variation as opposed to having the phase be super similar, such as +/-1 degree in the above low resolution example.
Interesting, but I think we've run out of space. Sort of related: we've been considering how we can gut an optical mouse and use the sensor from it. The idea is broadly similar - basically learn the pattern around one revolution and work out where we are in it.
Real-time, in fact we're acquiring data at quite a high rate and boiling it down with a view to presenting the results to the rider a few times a second. You can see more at https://bodyrocket.cc if you're interested!
Decomposition of glyph sequences in phonetic transcription alphabets (e.g. IPA representations of phonemes) into phonological feature sets.
Existing attempts to solve this problem are hackish and difficult to customize: they typically treat each glyph as a set of features and handle diacritics and digraphs by naive composition and awkward special-casing. They also aren't written with an eye to customization in either alphabet or featural model: they typically map an ad-hoc extension of IPA to an ad-hoc featural model.
I think a natural improvement would be to develop a specification language in which each individual glyph (base character or diacritic) is a function from a feature set to a feature set, with Haskell-style pattern-matching to allow graceful handling of digraphs and context-sensitive diacritics - although syntactic sugar for digraphs would in practice be required for usability. Ideally it would also be possible to map feature sets to feature sets, in order to preserve a human-readable intermediate form (e.g. "unvoiced dental plosive") which is later mapped to the more customary binary features.
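A toy version of the glyph-as-function idea (the feature names and the two diacritics are illustrative, not a real featural model; a full version would pattern-match on the incoming feature set rather than blindly overwrite):

```python
# Base glyphs map to feature dicts; diacritics are functions from features to features.
BASE = {
    "t": {"place": "alveolar", "manner": "plosive", "voice": False},
}
DIACRITICS = {
    "\u032a": lambda f: {**f, "place": "dental"},  # combining bridge below (dental)
    "\u032c": lambda f: {**f, "voice": True},      # combining caron below (voiced)
}

def featurize(glyphs):
    """Fold a base character plus combining diacritics into a feature set."""
    feats = dict(BASE[glyphs[0]])
    for d in glyphs[1:]:
        feats = DIACRITICS[d](feats)   # apply each diacritic in order
    return feats

print(featurize("t\u032a"))  # an unvoiced dental plosive
```

Context-sensitive diacritics fall out naturally here: a diacritic's lambda can inspect the feature set it receives and branch, which is where the Haskell-style pattern matching would earn its keep.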
In addition to the utility for phonological databases and the like, this would also enable more rigorous testing of crosslinguistic feature sets: every feature set is implicitly a set of proposed linguistic universals and existence claims. If two segments have the same featuralization, they should never contrast in a given language; if they do, the featuralization is unsound. And if a featuralization proposes the existence of many contrasts that aren't attested anywhere, it could probably stand to be optimized.
But most of my interest in this comes from my work on a phonological database. The database needs some method of handling featuralization to facilitate feature-based search, and I just haven't seen a good way to do that yet.
Local solutions to stop humans from destroying their environment.
We've implemented recycling maximally and created programs for reuse, repair, toxic waste, general waste reduction, battery disposal, returnables, etc. But it isn't really making a significant impact.
People still spend most of their disposable income buying things they don't need that can't be easily repaired, or that they can't bother repairing, or that they discard due to fashion, or because they are mostly packaging, etc.
We can't get them to stop buying useless stuff that they don't need. We can't stop them from buying new cars every few years.
We can't get them to spend money on quality infrastructure, like insulation, that would reduce their energy needs by about 80%.
We can't get them to stop going to restaurants or getting food delivery or take-out, which expends a multiple of the energy and greenhouse gases of cooking your own locally-sourced food.
We can't get them to understand that we're all going to die in fairly short order unless we bring the environmental disaster under control.
You’re probably already aware of it, but for generative design tool inspiration check out Grasshopper. It’s for building geometric algorithmic designs based on various inputs and constraints. It also has some tools for exploring permutations / genetic algorithms.
A prediction/promise competition to increase accountability in general.
Politicians/companies spew words and we tend to accept them because they're "authorities." I believe the best way to increase "skin in the game", accountability, and humble expertise is to predict and have your predictive performance be visible.
There are prediction markets where you can trade real money, but none that I've seen that rank your performance against others.
I'm building a platform where it's free to enter/submit predictions in categories of interest. Top ranked players will receive prizes.
I think once players can measure themselves against "authorities" (whose public predictions will be scraped), both will become more accountable.
After predictions, I aim to work on "promises" since they both increase future skin in the game.
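For what it's worth, ranking forecasters usually comes down to a proper scoring rule; the Brier score is the simplest (a sketch, naming mine):

```python
def brier_score(forecasts):
    """Mean squared error between predicted probability and binary outcome; lower is better.
    forecasts: iterable of (probability, outcome) pairs with outcome in {0, 1}."""
    forecasts = list(forecasts)
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

print(brier_score([(0.9, 1), (0.2, 0), (0.7, 0)]))  # 0.18, the confident miss costs most
```

Being "proper" means a player's expected score is optimized by reporting their true belief, which matters if prizes are on the line.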
Safe is interesting, but it's still pretty much a ledger. I am not sure this will work at current-internet scale, let alone with a couple of decades' worth of data. In addition, they still pretty much keep building on top of the web, which is really not accessible to machines.
I’ve been coming up with a new pedagogy for the fundamentals of interactive and computer art, and trying it out with kids on TikTok. Maybe you saw the software I put out a few weeks ago on HN called No Paint: https://nopaint.art. However, most of the time I’m using pen and paper for this work.
Detecting the tempo of a rhythm a drummer is playing in real time (including tracking variations/drift in tempo) based on the timestamps of each drum hit (and starting at an assumed tempo). I've found in-depth resources on problems that are similar, but not the same (waveform data rather than timestamps, all-at-once rather than real time, etc). I'm trying to keep myself open to the fact that the solution might be incredibly simple, and just unrelated to any path I've gone down, but it's led me down some interesting paths that I'm enjoying, so I'm also just taking that for what it's worth.
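One simple online approach that fits the timestamp framing: exponentially smooth the inter-onset intervals, snapping each interval to the nearest beat multiple of the current estimate so that skipped beats don't wreck the tracker. A sketch (the function name and the smoothing constant are my choices, not a known-good solution):

```python
def track_tempo(hit_times, bpm0=120.0, alpha=0.2):
    """Return a running BPM estimate after each hit, starting from an assumed tempo."""
    beat = 60.0 / bpm0
    estimates = []
    for prev, cur in zip(hit_times, hit_times[1:]):
        ioi = cur - prev
        mult = max(1, round(ioi / beat))          # the hit may land on a later beat
        beat = (1 - alpha) * beat + alpha * (ioi / mult)
        estimates.append(60.0 / beat)
    return estimates

print(track_tempo([0.0, 0.5, 1.0, 1.5]))  # steady 120 BPM input stays at 120
```

The interesting failure modes (syncopation, hits off the beat grid) are exactly where the serious beat-tracking literature brings in probabilistic models, but something this small handles gradual drift surprisingly well.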
My life goal is to bring back more of the programming and design techniques that we as a society used to make the best PS1 and Game Boy games and old-school animations, and apply them to the way we are currently developing games. (Any tips and advice on how I could help improve the game development industry are welcome. Next year I will be doing my masters, so I'm also still looking for a subject for that. The last couple of months I have been experimenting with watercolor effects in OpenGL, so I'm looking for something like that.)
Just about every company these days has their data spread out all over the cloud: marketing data on Facebook and Google, social media data on Twitter and Snapchat, customer data on Salesforce, sales data on Shopify and Amazon, and so on. Most companies will either (a) hire a team of data engineers to collect and exploit this data; (b) hire an expensive consulting firm to build an ETL pipeline; or (c) let this data rot in the cloud. For the past 6 years, I've worked as a data engineer (where I became intimately familiar with Facebook and Salesforce APIs), and I'm confident that I can automate around 80% of my job.
It's clear that the value prop is astronomical: just one data engineer will run you at least 150k/yr and most of the work will involve maintaining API data pipelines. Having a "one-click" solution where one simply provides an API key and what data they'd like to warehouse (e.g. marketing data, social media data, customer data) and where (FTP, S3, Redshift, DynamoDB) would be invaluable to companies that want to make sure they exploit this treasure trove.
Some hard/interesting problems:
- API specs constantly change (Facebook, for example, has a quarterly update schedule)
- Inferring JSON schemas is hard
- Data integrity is hard (data types sometimes change willy-nilly)
- API rate limiting is tricky
- Resilience is hard
- Recovering old data (especially for certain services) might be impossible
Everyone is becoming keenly aware that letting the data rot carries a higher and higher opportunity cost. Not warehousing your own data is simply not a tenable option any more: the world's most valuable resource is no longer oil, but data.
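On the rate-limiting point: the workhorse pattern is exponential backoff with jitter around every connector call. A sketch (RateLimitError, the delays, and the fake endpoint are all invented for illustration; a real connector would catch the specific 429 exception its client library raises):

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for whatever a given API client raises on HTTP 429."""

def fetch_with_backoff(fetch, max_retries=5, base_delay=0.1):
    """Retry a rate-limited call with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return fetch()
        except RateLimitError:
            time.sleep(base_delay * (2 ** attempt) + random.random() * base_delay)
    raise RuntimeError("gave up after repeated rate limiting")

# Example: a fake endpoint that rejects the first two calls.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError()
    return {"ok": True}

print(fetch_with_backoff(flaky))  # {'ok': True} after two retries
```

The jitter matters once many pipelines share one API quota; without it, retries synchronize and you hammer the limit in lockstep.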
I'm actually in the middle of building something really similar, but targeted at manufacturing/distribution companies (i.e. they have an ERP/inventory system, how does that data get onto the eCommerce website?)
Would you be interested in talking with me about how you're building your system? Email in my profile.
Working on an algorithm for recovering the text you type just by analyzing the keyboard sound captured through the computer's mic (i.e. acoustic eavesdropping). The hard part is doing it without having training data for the keyboard.
Problem: Currently, users (or attackers) can easily manipulate the location provided to an app on a phone.
Solution: Use raw measurements from positioning satellites to check whether the location reported by a user actually lines up with the measurements of their phone.
Why is it hard?
- Lack of documentation, standardization and support on collecting raw measurements on phones.
- Processing raw measurements is tricky
- Finding anomalies in this raw data is even harder
Some of it is working - yay! - and there's also a public API, such that others can use it too: https://claimr.tools
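The anomaly check can be reduced to a residual test: compare each measured pseudorange against the geometric range implied by the claimed position. A toy Cartesian sketch (real GNSS processing also has to solve for receiver clock bias, atmospheric delays, etc., none of which is modeled here):

```python
import math

def pseudorange_residuals(claimed_pos, measurements):
    """measurements: list of ((sat_x, sat_y, sat_z), measured_range).
    Large residuals suggest the claimed position doesn't match the raw data."""
    return [measured - math.dist(claimed_pos, sat_pos)
            for sat_pos, measured in measurements]

sats = [((3, 4, 0), 5.0), ((0, 0, 10), 10.0)]
honest = pseudorange_residuals((0, 0, 0), sats)
spoofed = pseudorange_residuals((100, 0, 0), sats)
print(honest)   # [0.0, 0.0]
print(spoofed)  # large residuals betray the fake position
```

In practice you'd threshold a robust statistic of the residuals rather than any single value, since individual measurements are noisy.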
Hmm, I see your concern. I'm pretty big on privacy myself, so I feel I should be able to answer this in a satisfying way.
Most importantly, this always requires the user's consent. On Android you still need the same location permissions as for normal GPS positioning. Hence, as a user, you're always free to reject location permissions, same as before.
I have to admit, this tech could also be used for evil purposes. For now, not all phones support collecting raw measurements - either hardware or software support is lacking - but in the future, if some entity could force you to have your location verified, you could no longer lie about it.
I'm working on a website that helps me to keep track of the podcasts that I'm listening to across platforms (Think Last.fm / Trakt.tv but for podcasts) by automatically importing the listening data from various podcast apps.
I'm trying to build a storefront experience for building customized home improvement services. These services can be anything between a new set of curtains, new tiling for your kitchen or a false-ceiling throughout your home. This is difficult because these services are dynamic in nature, and as such we cannot sell fixed SKUs like normal e-commerce services do.
We are trying to develop a unified service-building experience wherein the user will be able to punch in their requirements and get a product tailor-made for them, along with the estimated price (which comes with ~10% tolerance). It's tough, but we're getting there.
I'm trying to build a program to optimally schedule time to work on your tasks based on your schedule.
I have a bunch of tasks I have to complete by a certain deadline — these include things like engineering sprint tasks, drafting a design document, completing an assignment, etc. I have to get these tasks done in between my regular schedule of meetings, lunch breaks, and rests. I want to get a program to tell me when I should work on what depending on a task's due date and priority. If something comes up, I want my schedule readjust to accommodate the interruption.
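A first cut at this is greedy earliest-deadline-first packing of tasks into the free gaps between meetings; a sketch (hours as plain numbers, all names mine; a real version would add priorities and simply re-run whenever an interruption lands):

```python
def schedule(tasks, free_slots):
    """Greedy earliest-deadline-first: pack tasks (name, hours, deadline) into
    free (start, end) slots, splitting a task across slots when needed.
    Tasks that don't fit are silently left partially unscheduled."""
    plan = []
    slots = [list(s) for s in sorted(free_slots)]
    for name, hours, deadline in sorted(tasks, key=lambda t: t[2]):
        for slot in slots:
            if hours <= 0:
                break
            start, end = slot
            if start >= end:
                continue                      # slot already used up
            chunk = min(hours, end - start)
            plan.append((name, start, start + chunk))
            slot[0] += chunk                  # consume the front of the slot
            hours -= chunk
    return plan

plan = schedule([("write doc", 2, 10), ("review", 1, 5)], [(0, 2), (3, 5)])
print(plan)  # the review (earlier deadline) goes first, the doc splits across gaps
```

EDF is provably optimal for meeting deadlines on a single "machine" when everything fits, which makes it a reasonable backbone before layering on priorities and rest constraints.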
Figuring out how to fill the gap between code search and static analysis (code checks).
Right now the tools we have for programmatically reading through code are:
1. Code search, which is fast, but inaccurate/heuristic.
2. Static analysis, which is slow to run and difficult to write, but very accurate.
I'm building a tool that is as fast and easy to use as code search, and is as accurate and expressive as static analysis.
Still just a landing page. Looking to get a public playground people can mess around with this week.
A piece of music playing software that can react to the lead of an instrument. I'm picturing the software will be able to play a concerto (or simply a duet) with a real instrumentalist and react to dynamic and tempo changes in real time, like an orchestra under a conductor.
The same principle can also be used to create a real-time software harmonizer for live performances, but this problem already has a reliable solution through hardware.
Are you familiar with Magenta? 
It's not real-time, but definitely an important area of research.
Also check out Dan Tepfer, who is doing amazing work with an algorithmic approach to reactive live performances, with great call-and-response tactics.
I myself am slowly prototyping a fully artificial AI band which can be orchestrated using very high-level musical ideas and a big helping of intelligent randomization and algorithms based on music theory.
I've been prototyping in Andrew Sorensen's Extempore and have laid much of the groundwork, like melody/harmony/rhythm generation, as well as basic modulation of these elements to create longer musical structures which utilize motifs in multiple ways.
Currently it is a matter of shedding pre-computed tables of "nice" sounding progressions or purely random progressions, and creating a more fundamental approach which can derive the appropriate progressions from the given user parameters. I am also expanding the program's ability to generate aesthetically pleasing and unique singular motifs which drive these algorithmic compositions.
I have a band leader / conductor module which provides cues and other synchronized data, which even without advanced motif generation and modulation still allows for things not currently possible in any non-code musical production software, such as global dynamics, timing changes, progression sharing, (eventually) directing impromptu solos, etc.
Reach out via email if you'd like to discuss more!
Thanks for your resource. Never heard of Dan Tepfer or Extempore - such a great way of imagining music!
What I was planning was something simpler - much like generating sounds from a written score, but like live classical performances, the generated sound reacts to the cues of the player.
I'm not exactly familiar with Magenta, but the thing I'm currently trying to implement (at a very early stage) is Deepmind's Wave2Midi2Wave, which is part of the dataset released with Magenta. I'm not aware whether they've released any code as well.
Maybe not weird but certainly hard. I’m working on a way to build better habits. I know there’s a million apps out there that present the typical log a habit behaviour however I see this as mostly punishing.
A positive streak can easily turn into a negative streak.
At the moment I’m brainstorming with a google sheet and a few manual tweaks. I believe there’s a way to help people/myself build a better life by removing the nasty things (like smoking) and grow the nicer things like healthy eating or exercise. But our desire for instant gratification and our lofty goals get in the way.
Related to the future time travel. You somehow get thrown into the past, say 100-150 years. How do you put out a message proving you're from the future and requesting time travelers to come back and rescue you?
My proposal is to take out classified ads in newspapers, couching it as a code or puzzle to be solved, and put in dates of major future events you know about. D-day, Kennedy assassination, Challenger explosion, 9/11. Top it off with Murder Hornets and they'll know about when you came from.
I believe that computation, mathematics, information and semantics all share the same set of simple foundations. And that the means to understanding these foundation is to look at how physical computational processes use information from their environment, and produce information that is used to make "real world" outcomes happen.
I am working on this at the moment, and no, I don't expect that what I write here will convince anyone. And I don't have any summaries of the work at the moment.
We're in need of an information-theoretic definition of computation or information processing, in analogy to Shannon's definition of communication. I'm trying to work it out.
It's clear that there is a relationship between computation and information via Landauer's principle. It's also clear that it's got to do with nonlinearity of dynamical systems: "The essence of computation is nonlinear logical operations." J. Hopfield, PNAS 79, 2554 (1982).
Shannon's theory of information doesn't offer a way to tell whether a channel is doing computation or merely transmitting the information -- the mutual information merely characterizes how much information goes across a channel, but is insensitive to any changes to the representation.
OTOH, algorithmic complexity theory (a la Kolmogorov) doesn't really have the same generality as Shannon's theory. FLOPS is not a well-defined measure of information-processing rate for the brain, for instance.
I got inspired by the "integrated information theory" folks -- they have this notion that combining information streams in a nontrivial way is necessary and sufficient for consciousness. I disagree that it's sufficient for consciousness, but it might be sufficient for a definition of information processing or generalized computation.
> You can't have the latter without the former.
> That doesn't explain meaning in the case of imaginary or abstract details, or the system's conception of the meaning.
The mapping is in our heads. I don't know what you mean by "the system's conception of meaning" -- which system, and what is a conception of meaning?
> Shannon's theory of information doesn't offer a way to tell whether a channel is doing computation or merely transmitting the information -- the mutual information merely characterizes how much information goes across a channel, but is insensitive to any changes to the representation.
In my view, it is more than a change of representation. The computation is using information that might be true of something to produce new information that might be true of something else. I don't see how anything like Shannon's theory could explain how it is able to do this.
> I got inspired by the "integrated information theory" folks -- they have this notion that combining information streams in a nontrivial way is necessary and sufficient for consciousness. I disagree that it's sufficient for consciousness, but it might be sufficient for a definition of information processing or generalized computation.
Ok. I share the same view, that it isn't sufficient for consciousness.
> I don't know what you mean by "the system's conception of meaning" -- which system, and what is a conception of meaning?
The computational system. Consider the case of the human brain, which may be computational. People can understand that some information is about X (say, a particular tree, or the notion of Justice). But it's not just that they know what the information is about, but they understand something of the character of that thing -- of the tree, or of what Justice is like. If the brain is computational, then that would mean that such an understanding was computational (or computational plus bodily interactions with the environment, etc). But that doesn't tell us how it is that computation is able to "embody" an understanding of the character of something. That needs to be explained.
> Here's a link to the paper
Is that the same paper? I notice it has a different title to the one you mentioned above.
I don't think so, in the sense that it's not concerned with the same kinds of details as information theory (which is not to say it is incompatible with it), and it provides a set of foundations that are common to information, semantics, mathematics and computation.
As is well known, information theory doesn't deal with the meaning of the messages. As far as information goes, I am primarily concerned with information in the sense of such "messages", and their meaning.
Computation is considered by many not to involve semantics, because it processes information in a "blind, pattern-matching" fashion, without regard to its potential meaning. But the sense in which it is semantic does not have to do with its intrinsic characteristics. It is a matter of its "extrinsic" details - how the information states in it relate to details that are (typically) outside of the computation. Seeing the computation as a physical process, processing information "about" details in its environment, highlights this.
The key to all this is appreciating that, once you see semantics and information processing as a matter of physical processes, you can see that the correct semantics can be necessary for producing a physical outcome. So you can analyse how that physical outcome was produced, in order to understand the semantics.
Ok, so less about the flow and nature of information and more about the semantics.
> Seeing the computation as a physical process, processing information "about" details in its environment, highlights this.
Are you familiar with Karl Friston's work on the Free Energy Principle?  He touches on how the nature of a living being (computational agent) is to model its external environment and act upon that model. That making decisions based on internal logic requires external stimulus and semantics being analyzed and augmented in sort of a feedback loop.
> The key to all this is appreciating that, once you see semantics and information processing as a matter of physical processes, you can see that the correct semantics can be necessary for producing a physical outcome.
I think I understand what you're saying and it touches on some of my own inquiries. A more concrete example, if you have one, might better help align me with where your thoughts are at regarding this.
> Ok, so less about the flow and nature of information and more about the semantics.
I'm working on explaining the fundamental nature of information, and arguing that it is fundamentally semantic. I think there's a lot of confusion about what information theory actually tells us. I don't think it actually tells us about the fundamental nature of information. This isn't to detract from its importance, or to say the theory itself is wrong -- I'm saying that the usual interpretation of what it means, regarding information, is flawed.
> Are you familiar with Karl Friston's work on the Free Energy Principle?
Not terribly familiar with it. I watched the video.
I don't think it gets down to a precise understanding of the fundamentals. I don't think there is a good understanding of the concepts, like information and modelling, that it uses. It doesn't seem to provide specific mechanistic explanations of how the phenomena work and how they have their apparent properties.
As a couple of examples, what explanation does it have of how a system may have an understanding of the character of some entity? Where that entity might be something apparently "abstract" or "imaginary"? If the semantics are about mathematical details, what exactly are they about? What are those mathematical details?
But I don't think I can really hope to explain my position here. It requires a lot of explanation, and I'm still working on the explanation.
>> The key to all this is appreciating that, once you see semantics and information processing as a matter of physical processes, you can see that the correct semantics can be necessary for producing a physical outcome.
> I think I understand what you're saying and it touches on some of my own inquiries. A more concrete example, if you have one, might better help align me with where your thoughts are at regarding this.
Imagine a robotic arm that, when an electrical current is sent to it, reaches out and grabs at an area in front of it. If that current happens when there's an object in that position, then the robotic arm will end up picking up the object. If the current happens when there isn't an object in that position, the robotic arm will have picked up nothing. Thus, the semantics of the electrical current has a necessary causal role in the arm picking up the object. And thus we can analyse the causal details to get a concrete understanding of the semantics and its role.
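That causal picture can be put in a toy sketch (purely illustrative; the world model and names here are invented, not from the comment):

```python
# Toy world: an object ("cup") sits at position 2; other positions are empty.
world = {2: "cup"}

def arm_grab(signalled_position):
    """The arm reaches to the signalled position and closes its gripper,
    picking up whatever happens to be there."""
    return world.get(signalled_position)

# The same kind of signal produces different physical outcomes depending
# on whether what it is "about" matches the world:
picked = arm_grab(2)   # signal's content matches the world
missed = arm_grab(3)   # mismatch: nothing is picked up
```

The point being made is that whether the grab succeeds depends on the relation between the signal and the environment, so the semantics shows up in the causal analysis.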
I built out a sequence where you take some large integer n, and a smaller positive integer x, the first number in the sequence, such that the next number in the sequence, x_1, is the remainder plus the dividend of n/x, x_2 is the remainder plus the dividend of n/x_1, and so on until you reach a cycle. The problem I'm trying to solve is given some n and some x can you give at least two solutions for x_-1 (one number previous in the sequence)?
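Reading "dividend" as the integer quotient (so each term is the quotient plus the remainder of n divided by the previous term), a sketch of the sequence and a brute-force predecessor search might look like this; `step`, `orbit`, and `predecessors` are just names I've made up:

```python
def step(n, x):
    # Next term: quotient plus remainder of n / x
    # (reading the post's "dividend" as the integer quotient).
    q, r = divmod(n, x)
    return q + r

def orbit(n, x):
    """Iterate until a value repeats; return the sequence up to the cycle."""
    seen, seq = set(), []
    while x not in seen:
        seen.add(x)
        seq.append(x)
        x = step(n, x)
    return seq

def predecessors(n, x, limit=10**5):
    """Brute-force candidates y below `limit` with step(n, y) == x."""
    return [y for y in range(1, limit) if step(n, y) == x]
```

For example, with n = 100 the value 16 has several predecessors (7, 8, 13, 15, ...), which suggests the inverse problem does often admit at least two solutions.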
I'm doing the hardware part of a CB packet radio infrastructure that can be deployed quickly after a disaster, and allows basic BBS functionality in addition to passing messages, and works with existing cell phones.
It's inspired by the CellSol network in the "Left Behind" novels.
I don't know what language you're using, but if you want help I'd be happy to pitch in. I just built an e-commerce site with NodeJS using Stripe as the payment processor (PayPal is next). My contact information should be on my profile.
I have a MySQL database and I'm trying to order tables by "update_time", but it only works some of the time. I'm not sure if it's an issue with the system time or with InnoDB. I've searched for answers on Stack Overflow, and someone said InnoDB had a bug that affected update_time, but it has since been patched, and I am running the latest MySQL version on Ubuntu 20.04.
I wrote a bit about using stylometry to identify the author of a tweet. I'm coming up with additional features to add to this model and testing it with identifying attributes about the author (political affiliation or gender).
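A minimal sketch of one classic stylometric feature, character n-gram profiles compared by cosine similarity; this assumes nothing about the actual model in the post, and the function names are mine:

```python
from collections import Counter
from math import sqrt

def profile(text, n=3):
    """Character n-gram frequency profile (a common stylometric feature)."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(p, q):
    """Cosine similarity between two sparse frequency profiles."""
    dot = sum(p[g] * q[g] for g in p if g in q)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def attribute(tweet, candidates):
    """Return the candidate author whose corpus profile is closest to the tweet."""
    tp = profile(tweet)
    return max(candidates, key=lambda a: cosine(tp, profile(candidates[a])))
```

Real systems layer many more features on top (function words, punctuation habits, emoji use), which is presumably where the political-affiliation and gender attributes come in.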
Why don't gig workers just work for themselves?
Gig apps are basically trivial to engineer in 2020 until they get scaling problems, largely due to the contributions of the open source community. It would be nice if there were also open source tools for people's gig businesses, so they didn't have to sell their labor through a company extracting such a heavy rent.
The whole idea of the so-called gig economy is that you have a huge platform that connects gig workers to clients. How could you replace that with one app for each worker? Would you install 200 taxi apps and start trying each of them when you want a cab? Or would you order an Uber?
And if you are envisioning an open-source platform, who will run it? Who will vet workers on it, even to the basic level that Uber does (e.g. Make sure they are real people, have a driver's license, and a functioning car)?
Write GUI for FFmpeg... or not exactly. I'm building an API + component library so other developers (and hopefully everyone down the road) can quickly assemble a UI that does a particular FFmpeg job.
Want batch transcode? Add the files selector, destination selector and render button. Want to mux audio and video? make that two file selectors.
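As a sketch of what such a component might emit under the hood, here's how the two jobs described above could be assembled into FFmpeg invocations (the helper names are invented; the flags are standard FFmpeg options):

```python
def transcode_cmd(src, dst, vcodec="libx264", acodec="aac"):
    """Build (not run) a basic FFmpeg transcode command."""
    return ["ffmpeg", "-i", src, "-c:v", vcodec, "-c:a", acodec, dst]

def mux_cmd(video_src, audio_src, dst):
    """Mux a video stream and an audio stream into one container,
    copying streams without re-encoding."""
    return ["ffmpeg", "-i", video_src, "-i", audio_src,
            "-map", "0:v:0", "-map", "1:a:0", "-c", "copy", dst]
```

A batch UI would then map the file-selector contents over `transcode_cmd` and hand each list to a subprocess runner.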
Semi-supervised document clustering of research papers on arXiv. I struggled heavily in the first year of my PhD just learning the meta-game of staying on top of my field (computer security + computer architecture).
It turns out semi-supervised document recommender systems aren't easy to bootstrap with zero user data.
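The unsupervised core of such a system can be bootstrapped with plain TF-IDF and cosine similarity before any user data exists; a minimal dependency-free sketch (not the author's actual system):

```python
from collections import Counter
from math import log, sqrt

def tfidf(docs):
    """docs: list of token lists -> list of sparse tf-idf weight dicts."""
    df = Counter(t for doc in docs for t in set(doc))
    n = len(docs)
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({t: tf[t] * log(n / df[t]) for t in tf})
    return out

def cosine(a, b):
    """Cosine similarity between two sparse weight dicts."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

The hard part the comment points at is the semi-supervised layer: turning a handful of "papers I cared about" seeds into useful cluster labels without enough interaction data to learn from.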
Teaching machines to diagnose cancer, because without that, there's no way we are going to solve cancer generally. There just aren't enough pathologists. Harder than going to Mars, probably easier than world peace. Definitely involves an enormous amount of data. Like, a non-trivial fraction of worldwide storage.
There is no one cause of cancer. While there are a number of well-recognized precipitants (HPV, trichloroethylene, UV radiation, etc.), cancer is fundamentally a disease of disorder in the genetic code. As entropy invariably increases, the integrity of each of the trillions of individual somatic genomes in a human starts to degrade. Rarely, one degrades into a positive feedback loop of replication, and we have not invested enough of our evolution in preventing it. For example: mice don't normally get much cancer because they die before they have time for it. Elephants don't get much cancer because they accumulated enough copies of p53 over their evolution that they don't have as many cancers per N cells.
Humans have one copy of p53 from each parent, and have extended their lifespan in so many ways that those two copies are no longer enough. Lifelong accumulation of genetic entropy is inevitable. The game is to catch the offending cells and kill them.
Writing a no-code drag-and-drop tool for transforming data (join, filter, reformat, reorder, etc.). The hardest bit is handling cascading changes to the column structure (e.g. removing and reordering columns) intuitively - particularly if the user changes the input file to one with fewer or differently ordered columns.
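One way to make cascading schema changes degrade gracefully is to reference columns by name and quietly drop references that no longer resolve, rather than erroring out the whole pipeline. A toy sketch (my own invention, not the product's actual design):

```python
def apply_select(rows, wanted):
    """Keep only the columns in `wanted` that still exist upstream,
    silently dropping references to columns that vanished."""
    present = [c for c in wanted if rows and c in rows[0]]
    return [{c: r[c] for c in present} for r in rows], present

rows = [{"name": "a", "age": 1, "city": "x"}]
out, kept = apply_select(rows, ["age", "name", "country"])
# "country" no longer exists in the input, so it is dropped rather than
# crashing every downstream transform that referenced it.
```

The UX question is then whether to drop silently, warn, or let the user re-map the missing column.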
Love this idea. At one point a few years ago I was keen on creating a visual interface for lodash for non-coders. Watching nontechnical users try to navigate repetitive tasks is always hugely frustrating. Decided in the end that if they won’t use a spreadsheet they probably wouldn’t use a website.
I know there are various Extract Transform Load tools aimed at professional data scientists. My product is aimed at numerate professionals who aren't programmers or data scientists and have never heard of 'ETL'. The idea is that they can install Easy Data Transform, transform their data and output it in a few clicks, without programming. It is also very cheap compared to many commercial ETL tools.
Nice! Are you aware of https://github.com/hooram/ownphotos? No affiliation, I'm just interested in using something like this, I'm not sure how actively it's being worked on (it was sort of a 'functional demo' last I saw, but not quite usable).
I found the generic approach NextCloud takes towards file storage to yield underwhelming results when it comes to photos. Features like searching for photos by faces, locations, and intelligent sorting are all lacking, making their mobile app merely a file browser.
Building on top of NextCloud was an option I had considered, but understanding their abstractions was painful, and I did not want to tie myself down to their ecosystem.
You can imagine a version of the world where designers structure their designs in a way that can easily export to e.g. a vue component, and along with the base structural layout they define different UI states it can be in, with animation timelines. Designers should be able to specify every aesthetic variable, and developers just program the business logic to fill content tags and toggle designer-defined states.
I am building a tiny "replay" script that, when run with a code file, prints the code line by line (or block by block) with a customizable delay or on keypress. The motivation is to be able to slowly read large source code files without having the whole file on the screen.
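A minimal version of such a replay script, assuming "keypress" means advancing on Enter, could be as small as this:

```python
import sys
import time

def replay(path, delay=0.5, step=False):
    """Print a file line by line, pausing `delay` seconds per line,
    or waiting for Enter between lines when step=True."""
    with open(path) as f:
        for line in f:
            sys.stdout.write(line)
            sys.stdout.flush()
            if step:
                input()          # advance on keypress (Enter)
            else:
                time.sleep(delay)

if __name__ == "__main__" and len(sys.argv) > 1:
    replay(sys.argv[1],
           delay=float(sys.argv[2]) if len(sys.argv) > 2 else 0.5)
```

Block-by-block output would just need a smarter iterator (e.g. splitting on blank lines) in place of the per-line loop.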
Working on building a monitoring and log trace stack for machine learning. What’s weird/hard is that we’re looking to deduce the performance of models that do not necessarily have ground truth readily available so it’s tricky to figure out if the model is working or not.
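Without ground truth, one common fallback is monitoring the model's input or score distribution for drift. A self-contained sketch of the Population Stability Index as one such proxy (not necessarily what this stack uses):

```python
from math import log

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of a model score.
    A rough drift proxy when ground-truth labels aren't available;
    values above ~0.25 are conventionally treated as a significant shift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            counts[sum(x > e for e in edges)] += 1
        n = len(xs)
        return [max(c / n, 1e-6) for c in counts]   # avoid log(0)

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * log(ai / ei) for ei, ai in zip(e, a))
```

Identical distributions score 0; the bigger the shift between a reference window and live traffic, the larger the index.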
I would argue that open source as we know it fails to balance the market. We now have monopolistic tech incumbents in the "GAFAM" companies, which thrive on open source while paying little tax and outcompeting actual tax-paying businesses. I see maintainers either burning out or selling out to venture capitalists.
I want to believe in free and open source, but I also see that it fully enables surveillance capitalism, casino capitalism and tax avoiding monoliths.
So, I realize that I need to move past classic licensing and consider ethical licenses that try to remedy society's inequalities and injustices.
Call me a cyber hippie, but if I want to build cool stuff in my spare time to share I want to maximize its chances of doing something good in the world. To that end I’m evaluating some ethical licenses.
There are many ethical licenses out there which are evolving. Presently, I’m evaluating this one: The (Cooperative) Non-Violent Public License: https://thufie.lain.haus/NPL.html
After the weekend I'll try to get in touch with a lawyer to review the license's implications. It's arguably not open source by definition, but maybe more so in spirit.
The main reason I asked was because I recently saw a show where a scientist was studying the chemical composition of soil under decomposing animals. They were trying to determine the conditions under which fossils form, since it is extraordinarily rare for remains to actually form a fossil.
Problem: my team is divided in two: Corona A and Corona B, each working alternate days (Mon/Wed/Fri and Tue/Thu). How many extra hours should I make Corona A work in exchange for them getting FOUR-DAY WEEKENDS for the last two months?
This is pretty small and trivial, but I'm trying to implement DP problems in Haskell just to get some more practice with it. I'm stuck on Manacher's algorithm for finding longest palindromic substrings
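Not Haskell, but in case a reference implementation helps with the port, here's Manacher's algorithm in compact Python (the usual sentinel-padded form):

```python
def longest_palindrome(s):
    """Longest palindromic substring via Manacher's algorithm, O(n)."""
    if not s:
        return ""
    # Interleave with '#' so even- and odd-length palindromes are handled
    # uniformly; '^' and '$' are sentinels that stop the expansion loop.
    t = "^#" + "#".join(s) + "#$"
    p = [0] * len(t)          # p[i]: palindrome radius around t[i]
    center = right = 0
    for i in range(1, len(t) - 1):
        if i < right:
            p[i] = min(right - i, p[2 * center - i])  # mirror shortcut
        while t[i + p[i] + 1] == t[i - p[i] - 1]:
            p[i] += 1
        if i + p[i] > right:
            center, right = i, i + p[i]
    max_len, center_index = max((v, i) for i, v in enumerate(p))
    start = (center_index - max_len) // 2
    return s[start:start + max_len]
```

The stateful `center`/`right` bookkeeping is the part that's awkward to translate directly; in Haskell it tends to come out as a fold carrying that state, or via a mutable array in `ST`.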
Shameless plug: doing this for networking, not advertisement. My company has several (open and non-open-source) web-based products for parsing and rendering spreadsheet files and executing formulas. Please get in touch if you'd like me to tell you more.
Not sure if this is weird so much as an area of business that not a lot of people think actively about, but get bit by.
How do you automate and aggregate context across business departments for various forms of activity, and then map that to marketing analytics in a way that gives relevant and sufficient insights beyond just channel or user data? How do you more fully answer the question of "what happened when [$thing happened]?"
THE VALUE OPPORTUNITY
Countless person-hours and marketing dollars are wasted going down fruitless rabbit holes looking for what caused some change, or thinking they found the cause of a change in performance and pursuing that when in reality it was something else. In many of these cases, this could have been easily avoided if only there were sufficient data on the business activities (internal and external) logged and aggregated with marketing data in a way that was then automatically surfaced in an appropriate manner. As the scale of the company increases, so does the impact of this.
WHY IT IS WEIRD/HARD
It's weird in the sense that only a small subset of people are immersed in analytics enough to be aware they should care about it, and probably fewer geek out enough about marketing analytics and process to care about trying to solve it. It is hard because it is just as much a people challenge as a technical one. The technical side is somewhat straightforward in terms of aggregating as many data inputs as you can--it's basically a ton of data plumbing and monitoring for changes, whether that's bid management platforms, DSPs and SSPs, email platforms, site analytics, etc. But you also need project management tools and properly categorized metadata for relevant updates to be surfaced. You have challenges around walled data gardens and comparing apples to oranges on things like attribution measurement, but that can be handled. Surfacing it in timely and sufficiently useful ways is an interesting design and UX challenge, though, from annotations and "pull" data to modals and callouts that are more "push" in how they inform people of context before it bites them.
The people side, however, is constantly in flux in a way that the data side is not. Some aspects of this absolutely rely on consistent adherence to process to capture key data that is hard to slurp up through an API, and some of it is quite ephemeral. I've encountered team situations where people object to filling out a couple of fields in a Google Sheet (or struggle to, due to limited training), or need to be hounded to fill out a given form. Some companies can enforce this to levels others cannot. Things also get really interesting at large companies (think FAANG): you're dealing with many teams, many overlapping or conflicting processes such a solution would need to be embedded into, localization, internal and external vendors with varying visibility needs, and personalities who may want more control over their orgs' processes and need persuading.
At the end, this all needs to be balanced against how much utility you get out of the insights because it is easy to over-index on investing in building this tech and process out only to not get insights out of it. Unfortunately you often only learn that after the fact when you've been bitten by it.
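To make the idea concrete, here is a toy sketch of the core mechanic: joining a log of business events (site changes, campaigns, vendor incidents) onto a daily metric and surfacing swings together with nearby context. The function name and thresholds are invented:

```python
def annotate_anomalies(daily_metric, events, threshold=0.3, window_days=2):
    """daily_metric: {date: value}; events: [(date, description)].
    Returns [(date, pct_change, nearby_event_descriptions)] for every
    day-over-day swing exceeding `threshold`."""
    out = []
    days = sorted(daily_metric)
    for prev, cur in zip(days, days[1:]):
        base = daily_metric[prev]
        change = (daily_metric[cur] - base) / base if base else 0.0
        if abs(change) >= threshold:
            nearby = [desc for (ed, desc) in events
                      if abs((ed - cur).days) <= window_days]
            out.append((cur, change, nearby))
    return out
```

The real problem, as described above, is everything around this loop: getting the event log populated reliably, and deciding when to push the annotation at someone versus waiting for them to pull it.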
If there are any companies trying to solve this, please do reach out (see profile). I love chatting about it, want to help build the tools and processes that solve this at scale, and have ~15 years of experience in the space, a good chunk of which has been spent trying to solve variations of this.
I've experienced this problem in small companies/side projects I've started. This a great perspective and great take on the problem. I'd love to help out in development/anything you'd need help with, email is in my profile.
- Of the countries in the world, none are really free, and most have quite burdensome taxes relative to the benefits they provide.
- Representative democracy has not improved a whole lot from the v1.0 created a few hundred years ago. For example, voting is largely between "person I don't really like" and "person I hate," and you're just 1 vote in millions, meaning voting is irrational (https://en.wikipedia.org/wiki/Paradox_of_voting)
- In referendums, people vote without really knowing much about what they're voting on.
- There's a lot of people who want to move to a better country, but immigration policies are very restrictive.
- To create a new country, Sordelia, based on liberty, sortition, and deliberative democracy.
- Laws are voted on by a small, randomly selected citizens' assembly. The random selection keeps the assembly small, so every vote counts, yet statistically representative, so that if a law passes, we can be highly confident it would have passed before all the voters. This assembly will learn and debate the pros and cons of the proposal before voting.
- Laws that would violate fundamental principles (like freedom of speech) are disallowed.
- We aim to purchase land from a developing country. There are plenty of good reasons they'd sell to us, beyond the immediate payment. The developing population and infrastructure will bring an increase in trade and jobs for their country. There have been many historical examples of special economic zones (SEZs) having positive influence on their neighbors, and Sordelia's effect on nearby countries will be similar.
- We'll have pro-growth and pro-immigrant policies to attract people to our country.
- The rise of remote work makes this even more attractive for moving to Sordelia.
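On the claim that a small random assembly is statistically representative: under a simple random sampling approximation, the required assembly size follows from the standard margin-of-error formula. A quick sketch:

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a yes-share p estimated from a randomly
    selected assembly of n citizens (simple random sampling approximation)."""
    return z * sqrt(p * (1 - p) / n)

# An assembly of ~1,000 randomly chosen citizens estimates majority support
# to within roughly +/- 3 percentage points, so only laws passing near 50/50
# would be genuinely uncertain.
```

This ignores non-response and clustering effects, which in practice push the required size up somewhat.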
I'm intrigued by the idea but I have some reservations:
1.) How likely is this to actually happen? Are there any talks with nations for purchasing land? How many members would actually uproot their entire lives to go live in this experimental nation?
2.) I'm absolutely a libertarian, but even I have to admit that some of those who care so deeply as to self-select for something like this are likely to have some... oddities.
3.) Libertarian principles have never been very popular with women, so I imagine most of the interested parties are men. If this were to last for a long period of time, I would imagine that most of the unmarried men would compensate for that by finding wives from the neighboring developing nations. I don't have anything against that, but I feel like this new nation would quickly get the reputation of being Thailand 2.0.
How can we streamline the transfer of knowledge from the oldest, wisest individuals on the cutting edge of their field to the youngest, most ambitious, sponge-like individuals just starting out their careers?
I am trying to decide whether a political party could be created (in my country) which implements rational, evidence-based policy. All policies and their implementation would require a postulate and pre-defined criteria for success or failure. All regulations and directives would then need to be consistent with, and balanced according to, the accepted policies. If necessary, laws would be changed or implemented to further this.
This is surprisingly easy. Unless you have some medical condition, in which case it may be impossible. About two years ago I was 110 kilos at 185 cm: borderline obese. Now I'm in the upper 60s. The six-pack (eight, as it turns out, in my case) became a thing only as a consequence of WFH, and subsequently boredom, and eventually "let's see if it's possible". Realistically it took me about a month. And while exercise played a role, most of it came from food, without starving at all either.
Science solved this problem way before the shake weight.
Understanding some basic concepts in nutrition/wellness, such as macros and micros, calories, how Energy Expenditure (keeping you alive, walks, lifting weights, etc) and Energy Storage (Fat, Muscle, etc) all ties in with your unique genetics, body composition and lifestyle goes a long way.
Quick and dirty trick: be at a low Body Fat Percentage and have abdominal muscles (which we all do ha).
Skipping meals doesn't help you lose weight in most cases. Your body optimizes for the expectation of not having food and stores more energy.
Eating smaller, filling meals helps more. I actually don't know of anyone who's lost weight from salads. Usually they're just left feeling hungry and usually add dressing to compensate (which just gives you completely unfilling calories).
I've got a problem that's driving me crazy: find a way to present a huge amount of written (and some visual) content, through a website, in an interesting and accessible way.
One of my hobbies is constructing worlds. One world. I've been working on it for over 40 years now. 20 years ago I decided to share my work with the world, and built a website for it. The approach I took then was to divide the site into a homepage with links to more-or-less self-contained subsections. The solution works, but I want something better. I just don't know how to achieve it.
A wiki-based approach would seem to be the obvious choice. But my experience of wikis is that they feel too, well, fragmented. There are ways of overcoming this (portals, wikibooks, etc.), but I tried building a wiki ... it didn't work for what I wanted to achieve.
Another approach could be to give up on building my own site and instead rely on a cloud provider to do all the hard work for me. But I dislike this idea on every level I can think of. For a start, what happens to all my work when the host company collapses, or pivots to a more profitable idea? What happens when I want to introduce a feature - "teach yourself language X" lessons that the site architecture doesn't support?
Of course, if I had all the money in the world then I could employ many very clever people to design, build and develop content for a truly wonderful user experience ... yeah. That's not gonna happen. So the solution needs to be "doable".
Any ideas on how to solve my problem will be very gratefully received!