The air traffic control sector is very dubious about AI in general because controllers need situational awareness at all times, and they don't want to rely on a system that will eventually fail.
Did Sightbit measure the situational awareness of lifeguards with and without their solution? Are they sure lifeguards will not rely on the AI, potentially miss obvious distressed swimmers, and be useless when the system completely fails?
Will lifeguards who have always used this system be trained not to panic when the system fails, for example because of a power outage?
I fear it might be even worse than that. But first let me clarify: I'm not against people using technology to augment their natural capabilities. On the contrary.
However, I see a big risk that this could drive a trend toward less qualified (cheaper) lifeguards, hired by people who may feel more pressure about budgets than about actual safety (the liability probably lands mostly on the lifeguard's neck anyway).
Considering how everything these days is run on (often bad) cost/benefit analyses, I'm anything but optimistic about the "unforeseen" side effects of AI in this field.
As difficult as it is to spot a single drowning person in a sea of people (no pun intended), it may in fact be partly the feeling of sole responsibility that keeps (good) lifeguards as alert as they often are. I'm not so sure AI will have a positive net effect on that.
I was a lifeguard in high school, and I can assure you they are already very cheap to hire. Getting Red Cross certified was a class one night a week for maybe a month or two, and I got hired at the YMCA for under $10/hr.
> The air traffic control sector is very dubious about AI in general
What do ATCs need AI for, anyway? At least as a failsafe against operators issuing catastrophic orders, or to catch pilots not following orders (or veering off course), spatial awareness and collision checks are basic geometry, and the system can be fed automatically with radar and ADS-B data.
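As a rough illustration of that "basic geometry" claim, here is a minimal sketch of a closest-point-of-approach check for two straight-line tracks. The function name and 2D simplification are my own; a real ATC conflict probe would work in 3D with uncertainty and maneuver models.

```python
# Hedged sketch: the "basic geometry" collision check mentioned above,
# assuming constant-velocity straight-line motion in 2D.

def closest_approach(p1, v1, p2, v2):
    """Return (t_cpa, d_cpa): time and distance of closest approach.

    p1, p2: (x, y) positions; v1, v2: (vx, vy) velocities.
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]       # relative position
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]     # relative velocity
    dv2 = dvx * dvx + dvy * dvy
    if dv2 == 0:  # same velocity: separation never changes
        return 0.0, (dx * dx + dy * dy) ** 0.5
    t = max(0.0, -(dx * dvx + dy * dvy) / dv2)  # clamp to the future
    cx, cy = dx + dvx * t, dy + dvy * t
    return t, (cx * cx + cy * cy) ** 0.5

# Two head-on tracks along the x-axis: conflict in 5 time units
t, d = closest_approach((0, 0), (1, 0), (10, 0), (-1, 0))
# t == 5.0, d == 0.0
```

If `d_cpa` drops below the separation minimum within some lookahead horizon, the system flags a conflict for the controller.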
> Are they sure lifeguards will not rely on the AI, potentially miss obvious distressed swimmers, and be useless when the system completely fails?
That, plus it has been shown multiple times that AIs trained on datasets of mostly normal-weight white people have issues with people of color, Asian people, or people who are over- or underweight. And given that many discrimination issues in AI only surface well after release, this scares me.
This is interesting technology, but I fear it will breed complacency in lifeguards. I worked as a lifeguard off and on for a few years in high school and college, so I can say from experience: creeping boredom is a problem you have to actively guard yourself against. It's a job where 99.999% of the time nothing bad happens, but you're supposed to be ready for that 0.001% in an instant; that's easier said than done. I'm not sure a system like this would have a productive effect on lifeguards.
If these systems are coupled with the right sort of training, they might be a net benefit. Or maybe the system could be designed in such a way that requires the lifeguard to stay attentive, such as requiring the lifeguard to input the current headcount. If the lifeguard's headcount starts to disagree with the computer's, that could be a signal that the lifeguard has become fatigued and needs to call in another lifeguard or call people out of the pool. (If the system isn't accurate enough to be used in this way, then perhaps it's not ready for use at all.)
> I'm not sure a system like this would have a productive effect on lifeguards.
> If these systems are coupled with the right sort of training, they might be a net benefit. Or maybe the system could be designed in such a way that requires the lifeguard to stay attentive, such as requiring the lifeguard to input the current headcount.
I too worked as a lifeguard, and I think only a really bad implementation would have a counterproductive effect. Off the top of my head, I can think of sunglasses with an AR overlay and a feedback loop:
* mark all people in field of view, color coded
* let lifeguard acknowledge/ignore problems
* allow for "problem"-handover to next post (e.g. if busy or if it is a swimmer in a current)
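That feedback loop could be modeled as a tiny state machine, something like the sketch below. States, method names, and the post identifier are all made up to illustrate the bullets above.

```python
# Hedged sketch of the acknowledge/ignore/handover loop, modeled as a
# minimal per-swimmer incident state machine.

OPEN, ACKED, IGNORED, HANDED_OVER = "open", "acked", "ignored", "handed_over"

class Incident:
    def __init__(self, swimmer_id):
        self.swimmer_id = swimmer_id
        self.state = OPEN          # newly marked person in field of view
        self.owner = None

    def acknowledge(self):
        self.state = ACKED         # lifeguard confirms they're on it

    def ignore(self):
        self.state = IGNORED       # false positive, dismiss the overlay

    def hand_over(self, next_post):
        self.state = HANDED_OVER   # e.g. swimmer drifting into next zone
        self.owner = next_post

inc = Incident(swimmer_id=42)
inc.acknowledge()
inc.hand_over(next_post="tower-2")
```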
What definitely does not help is putting lifeguards behind monitors, because they'd miss out on 99% of the real daily "action": dealing with littering, violence, and ordinance violations; answering questions; pointing new arrivals toward safe zones...
Is it just me, or is there something privacy-sketchy about this? Do beaches currently have CCTV systems? Building a large corpus of videos of people in potentially quite revealing swimwear, particularly when there are children in frame, seems like a bad idea privacy-wise. I don't know that there are necessarily any legal problems, but it just feels not okay.
Yes, it's quite common to have cameras pointed at the surf zone, both public and private. Surfers and beachgoers use them to decide where and when to go to the beach. Using these untapped feeds with AI has been brimming with potential for a while now, for things exactly like this.
This makes me wonder: is it possible to make a "smart camera" with an open-source circuit diagram and layman-interpretable circuit verification indicators? Suppose you want to sell a CCTV system for this exact situation, with a pretrained detection model implemented on some inference circuitry hooked up to a streaming camera. You publish the circuit diagram and any other operating specifications, and you want to convince people that the physical device they have in their hands is impossible (or at least very expensive) to alter, say by adding a hard drive or wireless modem or whatever. Are there sensors you could attach to the circuit such that modifications to what is interconnected would manifest as deviations from published sensor values? Some sort of MD5 hash analog for hardware?
That all assumes you've solved the problem of training a good enough detector (presumably from a similar dataset), which has its own difficulties, but OP's question made me wonder about the aspect I described above.
That's not inherently linked to the children being Black; it's more that Black people are, on average, significantly poorer, as a result have far less access to swimming pools or vacations near bodies of water, and therefore drown more often for lack of swimming training.
I didn’t say it was inherent in being black, but if you are selling a drowning-prevention technology, it’s imperative that it work well for dark-skinned people. As someone with dark skin, I can say that the problem of tech products (automatic red-eye reduction, "Facetune"-style image tuning, face recognition login) not working as well for us is very real.
With all the wildfire danger of the past few years and climate change being what it is, you'd think utility companies like PG&E would have something similar deployed across their transmission networks for early notification.
Pop a commodity 360° camera in a weatherized enclosure on top of every transmission tower, take a GPS reading, feed the stream into a model trained on smoke and bright flashes, and send a link to the live feed for a human to review.
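That pipeline could be sketched roughly as below. The detector, threshold, GPS coordinates, and alert channel are all placeholders; a real deployment would plug in an actual smoke-detection model and the utility's monitoring stack.

```python
# Hedged sketch of the tower-camera pipeline: frame in, detector score,
# and a human-review alert (with the tower's GPS fix) when it fires.

def review_frame(frame, tower_gps, detect_smoke, notify):
    """Run one frame through the detector; escalate hits to a human."""
    score = detect_smoke(frame)          # 0.0-1.0 confidence (placeholder)
    if score > 0.5:                      # arbitrary threshold
        notify({
            "location": tower_gps,       # from the tower's GPS reading
            "confidence": score,
            "action": "human review of live feed",
        })
        return True
    return False

# Toy stand-ins for the real camera feed, detector, and alert channel
alerts = []
flagged = review_frame(
    frame=b"...",                        # raw image bytes would go here
    tower_gps=(39.76, -121.62),          # example coordinates
    detect_smoke=lambda f: 0.9,          # pretend the model saw smoke
    notify=alerts.append,
)
```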