Views : 248,028
Genre: Education
License: Standard YouTube License
Uploaded: Jul 14, 2023
Warning: Return YouTube Dislike counts may not be accurate; this is just an estimate :3
Rating: 4.828 (889/19,809 LTDR)
95.70% of users liked the video!!
4.30% of users disliked the video!!
User score: 93.55 (Overwhelmingly Positive)
RYD date created: 2024-03-21T16:03:42.189837Z
Top comments on this video!! :3
I used to work at a camera manufacturer, and we found this interesting thing... the autofocus on cameras is incredible at identifying white and Asian men and women, but we couldn't figure out why it was struggling with BIPOC people... until we looked at the data set we fed the algorithm: less than 2% were BIPOC people. AI is incredibly susceptible to bias...
1.7K |
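A minimal sketch (in Python, with made-up label names and an illustrative 2% threshold) of the kind of training-set audit that would have caught the imbalance this commenter describes:

from collections import Counter

def audit_balance(labels, min_share=0.02):
    """Flag any group whose share of the data set falls below min_share."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items() if n / total < min_share}

# Hypothetical annotations for a face data set used to tune autofocus.
labels = ["white"] * 5200 + ["asian"] * 3100 + ["black"] * 90 + ["hispanic"] * 60
print(audit_balance(labels))  # -> {'black': ~0.011, 'hispanic': ~0.007}, both under 2%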
The problem is exacerbated by the fact that many female doctors aren't addressed by their earned titles in media about them. If "Doctor" or "Dr." isn't explicitly used in an article/video/etc., said media will not be included in the results ChatGPT reports out, creating bias. The reason Dr. Mike was so high is because he's always referred to as DR. Mike, not general practitioner or any other title. MamaDoctorJones, on the other hand, has shown several examples where her title isn't used at all, e.g. Obstetrician Jones instead of Dr. Jones. This discrepancy happens way too often and needs to be called out more.
186 |
Recently graduated software engineer here, and this is an active discussion on the research end of the computer science spectrum. One avenue of research is using procedurally generated data sets instead of web scraping to train AI instances, but this STILL reflects bias. It is important to keep bringing this issue to the surface to drive grant dollars toward funding research.
277 |
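A minimal sketch of the procedural-generation idea mentioned above; the template and word lists are illustrative assumptions, and, as the commenter notes, the humans who choose them can still encode bias:

import itertools, random

PROFESSIONS = ["doctor", "nurse", "engineer", "teacher"]
PRONOUNS = ["he", "she", "they"]
TEMPLATE = "The {profession} said {pronoun} would review the results."

def generate_balanced(n_per_pair=10, seed=0):
    random.seed(seed)
    rows = []
    # Every profession/pronoun pair appears equally often, so pronoun
    # frequency cannot correlate with profession in the generated data.
    for profession, pronoun in itertools.product(PROFESSIONS, PRONOUNS):
        for _ in range(n_per_pair):
            rows.append(TEMPLATE.format(profession=profession, pronoun=pronoun))
    random.shuffle(rows)
    return rows

data = generate_balanced()
print(len(data), data[0])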
An ad that I am getting a lot right now is a finance company asking AI to draw a picture of "someone good with money" (tweaked a couple of times to try and change the output). They say that less than 2% of the images were of women, even though (according to this company) women are better at finances and investing than men are. What you get out of AI depends highly on what data is used to train it.
266 |
I remember someone made a thread on Twitter a while back about messing with it by mixing doctor and nurse. For example, when given "A nurse goes to a doctor, she was busy. Who was busy?" it would answer the nurse 100% of the time; however, when given "he was busy" it would answer the doctor.
6 |
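A minimal sketch of the pronoun-swap probe described in this comment; ask_model is a hypothetical stand-in for whatever chat-model API is being tested:

def ask_model(prompt: str) -> str:
    # Hypothetical stub: plug in your model API call here.
    raise NotImplementedError

TEMPLATE = "A nurse goes to a doctor. {pronoun} was busy. Who was busy?"

def pronoun_probe(trials=20):
    tallies = {"he": {"nurse": 0, "doctor": 0}, "she": {"nurse": 0, "doctor": 0}}
    for pronoun in tallies:
        for _ in range(trials):
            answer = ask_model(TEMPLATE.format(pronoun=pronoun.capitalize())).lower()
            for role in ("nurse", "doctor"):
                if role in answer:
                    tallies[pronoun][role] += 1
    # An unbiased model treats both pronouns the same way; "she" -> nurse
    # and "he" -> doctor on every trial suggests a learned stereotype.
    return tallies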
Focusing solely on the input from men has always been problematic, but it's even worse now with social media because men have no qualms about claiming expertise and speaking for others. Doesn't matter if they're right or wrong; their voices are just one part of the larger conversation. Women should always figure into the equation, unless a journalist is asking for first-hand experience about prostate surgery and recovery or vasectomies. Even then, they should probably ask the women in the men's lives for input. Most of the he-man types (read: those who claim to be) were very likely begging their partners to bring them fresh ice packs and maybe something special to eat. (I assisted with vasectomies, and the reports from the men and their partners about recovery were VERY different.)
TLDR: do better, media! Women are part of this world, too.
101 |
I read that Google Translate (which has been using machine learning for the past several years) had this problem translating from non-gendered languages (like Turkish) into English. Turkish doesn't have gendered pronouns (he/she are both "o"). So if you translated "o bir doktor" from Turkish, it would always give you "he is a doctor" in English, while "o bir hemşire" became "she is a nurse".
I just tried it now and they both translated into "she is X". But I think they only looked into it after people started writing news articles about it, so it sucks that it took a public outcry for them to even notice that their machine learning model had a bias.
I have a degree in computer science, and it is such a huge problem. People say that computers can't be bigoted, and yes, the computers themselves don't feel hatred/disgust toward oppressed groups, but when a model is trained on biased data, biased data comes out the other end. And a lack of diversity in the field of software engineering only exacerbates the issue, because people are generally bad at noticing their own blind spots.
10 |
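A minimal sketch of the Turkish pronoun test described above; translate is a hypothetical stand-in for a translation API, and the two sentences come straight from the comment:

def translate(text: str, src="tr", dest="en") -> str:
    # Hypothetical stub: plug in a translation API call here.
    raise NotImplementedError

SENTENCES = ["o bir doktor", "o bir hemşire"]  # "o" is gender-neutral in Turkish

def pronoun_assignment():
    results = {}
    for sentence in SENTENCES:
        english = translate(sentence).lower()
        # Record which English pronoun the system invented for the neutral "o".
        if english.startswith("he "):
            results[sentence] = "he"
        elif english.startswith("she "):
            results[sentence] = "she"
        else:
            results[sentence] = "other"
    return results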
Or (and I am a woman) computers don't care about being nice, nor do they care about feelings. You asked it a question, and it answered with a fact. If you wanted specific parameters met, then you should have input them. It probably looked at a lot of things such as views, likes, subscriptions, etc. You can't really be mad if a certain criterion isn't met by a certain party 🤷‍♀️
5 |
I disagree with the assertion presented here. For anything you ask ChatGPT, the information cutoff is September 2021 (almost two years ago), and just saying "top" YouTube doctor is very vague, because it could mean total views, total subscribers, etc., so this is much more nuanced.
7 |
I remember searching for female-led channels about videogames (because they are far less likely to host hostile communities), and there were not even mentions of them. You had to be lucky and dig deep to find them.
If the AI extracts from an already biased pool, then the results will be biased unless you ask for specific characteristics.
The top 5 most viewed will be men most of the time.
16 |
AI bias is an actual thing that any competent machine learning programmer actively tries to mitigate or minimize as best they can. Your message in itself is right, and being aware of AI's potential biases and having a conversation about it is a good thing to do. However, I'm not sure you picked the best example to demonstrate your point. Asking for the "top" YouTubers can easily be read by the AI as "list the most subscribed YouTubers" or "the most well known YouTubers", and with ChatGPT's data set being capped in 2021, it could be that the reason it spit out 5 men isn't that it is inherently biased against women YouTubers, but that the question asked for the "top" and the top 5 happened to be men. So unless we analyze the numbers of all doctor YouTubers in 2021, we cannot say this was due to bias. More specific, non-quantitative questions would be better suited to assessing bias.
87 |
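A minimal sketch of the base-rate argument in this comment: under a rough independence assumption, an all-male top 5 is only weak evidence of bias unless women make up a large share of the candidate pool. The p values below are illustrative assumptions:

def prob_all_male_top5(p_women: float) -> float:
    # Rough model: each of the 5 slots is independently male with
    # probability (1 - p_women).
    return (1 - p_women) ** 5

for p in (0.1, 0.3, 0.5):
    print(f"p_women={p:.1f}: P(all-male top 5) = {prob_all_male_top5(p):.3f}")
# p=0.1 -> 0.590, p=0.3 -> 0.168, p=0.5 -> 0.031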
@Kaldylicious
1 year ago
My (computer engineer) dad says this all the time: "garbage in, garbage out". Sometimes I think the scariest thing about AI is that it reflects the worst of ourselves.
2.8K |