I feel like the primary use case for such a technology is manipulating and profiling people over video chat, maybe even autonomously. Hiring managers, HR, landlords, and police are obvious customers.
The response I anticipate will be "But this will help doctors over telehealth and stuff!" - Please see https://calebhearth.com/dont-get-distracted
This tech (detecting pulse from regular video) has been around for almost 20 years now, and this doesn't seem to have happened yet.
You see this type of thing in spy movies, but I'm not sure it's that useful in real life. You're basically taking one piece of data a polygraph uses, but without the most important component (skin conductance). Polygraph accuracy isn't that great to begin with. You can profile and manipulate people more effectively based on their reactions and behaviour, and their pulse will be much harder to interpret.
Heart rate is very correlated with health. So at the very least they can (illegally?) filter out unhealthy candidates.
It's probably not great for this. Resting heart rate is correlated with fitness, but candidates can be nervous during interviews, which can drastically change their heart rate. You'd probably be able to make better guesses about their health by looking at them (do they look sickly or overweight?) and by interviewing them.
It's also not a specific enough signal for the types of health conditions that require taking time off work. Assuming they could get an accurate resting heart rate, most of the time it'll just indicate whether the candidate does a lot of exercise.
You are talking about healthy and borderline. I am talking about unhealthy. If someone has a heart rate of, say, 150, they are far more likely to have other issues.
If someone has a heart rate of 150 during an interview, it's most likely anxiety.
I don't think this tech has actually been used in practice for that long, if at all. It was only first demonstrated in 2012 at SIGGRAPH.
Can you cite any commercially available uses of such tech?
I don't know any commercial uses of such tech today. I'm not saying they don't exist. I just don't know of them.
I had said I don't think it's very useful for "manipulating and profiling people over video chat", so I wouldn't really expect there to be a commercial product for that. It's probably used in fitness or heart rate monitoring apps, for people who don't have a fitness tracker and prefer not to count their pulse manually.
Here is the tech demonstrated in 2007: https://pubmed.ncbi.nlm.nih.gov/17074525/
The core algorithm is really simple. You find a patch of skin. Take the average color of the pixels in that patch. The color will become more reddish each pulse. Do an FFT and take the strongest peak in the plausible heart rate range. You could prototype this in a few hundred lines of python.
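To make that concrete, here's a minimal sketch of what such a prototype might look like. The specifics are my own assumptions, not a shipped implementation: OpenCV and NumPy, a hypothetical input file "face.mp4", and a fixed forehead patch standing in for real skin detection.

    # Minimal rPPG sketch of the idea described above.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("face.mp4")  # hypothetical input video
    fps = cap.get(cv2.CAP_PROP_FPS)

    means = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        # Crude stand-in for skin detection: a fixed forehead-ish patch.
        patch = frame[h // 8 : h // 4, w // 3 : 2 * w // 3]
        # Average one color channel over the patch; green is a common
        # choice because it tends to carry the strongest pulse signal.
        means.append(patch[:, :, 1].mean())
    cap.release()

    signal = np.asarray(means)
    signal = signal - signal.mean()  # drop the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)

    # Strongest peak in the plausible heart rate range (42-240 bpm).
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    print(f"Estimated pulse: {peak_hz * 60:.0f} bpm")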
If this were useful for police or hiring managers, someone could have used the tech to make an app for them within the past 19 years.
Of course, companies have a history of trying to market a lot of BS metrics (e.g. graphology, MBTI) to hiring managers, so I wouldn't be that surprised to see a company claim they can predict employee success using pulse. Whether it works is another story.
> You could prototype this in a few hundred lines of python.
You mean Claude can one-shot this.
Liveness detection: confirming that what appears to be a person in video is not a photo, a sculpture, or some other attempt to falsely present to an identity system as someone else.
I don't think it's ever been practical to ship in a product? You need ~20 seconds of data to stabilise the reading (an FFT over T seconds only resolves frequencies about 1/T apart, so 20 s buys you roughly 3 bpm of resolution), and any large motion ruins it - even though Microsoft Research demonstrated that a Kinect could detect heart rate in a lab setting, it wasn't viable to ship in a fitness game.
I'd expect the primary use case to be liveness detection: validating that the person the facial recognition identifies is not a photo, a sculpture, a person wearing facial prosthetics, or a mask. I've written such software for that exact purpose.
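To sketch how that check can work (the function name, frequency band, and SNR heuristic here are my own illustrative choices, not any shipped product's): a photo or sculpture produces no periodic component in the heart-rate band, so you can simply test whether any in-band spectral peak stands out from the background.

    import numpy as np

    def looks_alive(signal, fps, snr_threshold=3.0):
        """True if the averaged skin-color signal contains a plausible pulse.

        A photo or sculpture has no periodic component in the heart-rate
        band, so no peak should stand out from the in-band background.
        """
        signal = np.asarray(signal, dtype=float)
        signal = signal - signal.mean()          # drop the DC component
        power = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)

        band = (freqs >= 0.7) & (freqs <= 4.0)   # 42-240 bpm
        peak = power[band].max()
        background = np.median(power[band])      # typical in-band power
        return bool(peak > snr_threshold * background)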
Can you explain how https://calebhearth.com/dont-get-distracted applies to the potential response you described? I don't get it.
They will weaponize it.
This is a patronizing non-answer. If you don't see why, read my comment again and again until you do.
It seems like you're distracted. You wrote the same comment as your two siblings. If you're going to tell me I'm being disrespectful, at least have the respect to see whether my response will be the same. You should also have the respect to look at the blog post being referenced. At minimum, the title...
I read the post, and the GP had a good question that wasn't answered in it.
Then I'm concerned, as the writer is quite explicit. So let me quote from the post:
It doesn't matter if the tech is about finding WiFi or if your contract is with the DOD. Any technology that can do good can also do harm. It is easy to be distracted by the challenge of the project. It is fun and exciting, but that makes it easy to ignore how people who aren't well intentioned may use your creations. You'll never be able to prevent your creations from being used maliciously, but this adversarial process certainly can reduce the potential for harm.

> We build tools, and ultimately some responsibility lies with us to think through how those tools will be used. Not just what their intention is, but also what misuses might come out of them. None of us wants to build things that will be used for evil.

> The Association for Computing Machinery is a society dedicated to advancing computing as a science & profession. ACM includes this in their Code of Ethics and Professional Conduct:

>> Well-intended actions, including those that accomplish assigned duties, may lead to harm unexpectedly. In such an event the responsible person or persons are obligated to undo or mitigate the negative consequences as much as possible. One way to avoid unintentional harm is to carefully consider potential impacts on all those affected by decisions made during design and implementation.

> So how can we “carefully consider potential impacts”? Honestly, I don’t have any answers to this. I don’t think that there really is a universal answer yet, because if we had it I have to believe we’d not be building these dangerous pieces of software.

> I do have a couple of ideas though. One I got from my friend Schneems is to add to the planning process a step where we come up with the worst possible uses of our software.

I'll mention that in traditional engineering this is often a more explicit discussion. Ethics is required in the coursework, and even outside the ethics classes you hear many examples of unintended consequences, where people do their best yet mistakes are made that cost people's lives or do other kinds of harm. If you were lucky, you had a professor who walked you through this, showing you how easy it is to be blindsided by such things, and how the harm is obvious post hoc but not before.
So if you want to not be distracted you have to know what the distraction is. You have to know what distracts you. You have to know that you too can be distracted. None of us are immune. The moment you think you cannot be distracted is the moment you are deeply distracted.
Here's some advice as well: if you want someone to listen, try not to come across like you just did.
Your sibling said something similar; my response is identical.
Sure, I read that, but your comment still comes across that way. You're doing yourself no service.
We’re just BSing on the internet. No need to tone police.
That internet is elsewhere.
This feels overly patronizing
Probably because I repeated "don't get distracted". But if you read the article then I think it'll take on a different context, as I'm mimicking the author, including their short paragraph style.
I get really annoyed at those articles that advocate for developers sacrificing themselves for a better future.
Companies externalize costs. I refuse to be the one, as an individual, burdened with fixing society's ills to my own detriment.
Tell me to get into politics, join an association, whatever. But as an individual, lose money for my morals? No thank you. I may, and probably will, do it -- but don't expect me to. In a society with fewer and fewer public services, I have no business harming myself and my family by refusing well-paying jobs.
I will externalise those costs as much as possible. I will bring awareness. I will write letters. But don't ask me to leave a well-paying job -- that's someone else's job to fix.
But that's the problem. Your logic applies to everyone in an organization (a business, a family, a country, and so on). The organization's actions are not the result of any single actor's decisions, even if the weight isn't equal. The decisions of an organization are made of the decisions of the collective, the agglomeration of them. And that's why everyone's decisions matter: you don't know when your actions have more weight and when they have less.

> as an individual

We're all in this together. One way or another, your actions affect others. Your actions aren't in isolation. Conversely, this is true for others, and I suspect you would rather others treat you well, right? So which feedback loop do you want to contribute to? That's the only question there is.
´"That's not my department", says Wernher von Braun.´
Well, I work at a telehealth company and yes, we do want to use this, so...