Google and EU now penalize sites for extremist content

Abou Sandal

Legendary Member
Orange Room Supporter
I didn't really understand...Did they just decide to penalize it?
 
  • Jo

    Administrator
    Master Penguin
    It seems like they expect the site to do something: false claims of downloadable or streaming content, redirecting to irrelevant and/or misleading webpages... as if they suspect this is a recruiting site for HA or something... Is the definition of what the site is supposed to be clear?
    No, not that.

    The violation is for the content, not the site. They understand that this site's content is user-generated, and they expect the administrator to moderate it better. That's all.
     
    Abou Sandal

    Legendary Member
    Orange Room Supporter
    Yep. "False claims of downloadable or streaming content."
    I might sound a little off the hook... But who claimed that there is any content for download or streaming? And couldn't we put a disclaimer in the titles in that case?
    So every thread that contains loads of video documents could easily fall into that category too?
     
    Abou Sandal

    Legendary Member
    Orange Room Supporter
    @Jo

    Often, links in posts become broken for many reasons, like videos or even whole accounts being removed from YouTube or Facebook.
    Could that be the main reason?
    Because if so, then you might need to constantly run broken-link checkers to keep the pages free of broken links. Otherwise, this thing might go viral across all threads... Am I close?
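    A checker like the one suggested above is simple to sketch. This is a minimal, hypothetical Python example (the function names and the HEAD-request approach are my own illustration, not anything the forum actually runs): a link counts as alive if it still answers with a 2xx/3xx status.

```python
# Minimal broken-link checker sketch. URLs would come from crawling
# the forum's pages; nothing here is the forum's real tooling.
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

def is_ok(status):
    """A link counts as alive if it answers with a 2xx or 3xx status."""
    return 200 <= status < 400

def check_link(url, timeout=5):
    """HEAD-request a URL and report whether it still resolves."""
    req = Request(url, method="HEAD", headers={"User-Agent": "link-checker"})
    try:
        with urlopen(req, timeout=timeout) as resp:
            return is_ok(resp.status)
    except (HTTPError, URLError, TimeoutError):
        return False

def find_broken(urls, checker=check_link):
    """Return the subset of URLs whose check failed."""
    return [u for u in urls if not checker(u)]
```

    A real run would feed in every external URL scraped from the threads and repeat the scan on a schedule, since links rot continuously.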
     
    Abou Sandal

    Legendary Member
    Orange Room Supporter
    @Jo (Me too :lol:)

    Is there any possibility of putting the removed threads in the private section, at least until their final fate is decided, so we could still access their content?
     
    spacecreature

    Well-Known Member
    Yet still, it's not totally everyone: publishing extremist leftist views, like calling others Nazis for not acknowledging 75 thousand new genders and doxxing them, is still A-OK.
    What's with that, @Indie haha!
     
    JustLeb

    Legendary Member
    Orange Room Supporter
    @Jo
    A long time ago, @X wanted to create a section in the forum to serve as a library or archive for some important subjects, e.g. threads that contain documents, analysis, newspaper articles, etc...
    Unfortunately, the idea never saw daylight, but it surely is a great one.

    All the information is already on Google, but the fact that it is scattered and unorganized makes it hard to find and, of course, hard to build into a global (unbiased) picture of any subject.

    If you are able to revive such an idea, I am sure Google Search will love it, and you will get more and more references to the site, in addition to people looking to research serious subjects and information, as opposed to the partisan discussions that are currently the case.
     
    Death To Google

    New Member
    Big governments and companies like Google want to insert their own judgement, or the judgement of AI technology, into the search-ranking algorithm. This is almost as dangerous to the internet as the precedent it sets for freedom of thought and expression.

    At the very least, classifications and reviews have to be more flexible. The AI was trained using reports from an internal focus group assembled by Google; the impartiality of this group is questionable at best.

    Google deserves the middle finger of the internet community. Or at least a bold resistance against this precedent.
     
    Death To Google

    New Member
    I wonder how many people in the world have free time to sacrifice reporting for Google.
    Google hired a 10,000-strong closed group and gave them guidelines on how to report. These reports are used as feedback to train an advanced AI. The idea is that the software will learn to do the flagging without human decisions.

    Certainly Google, as well as Twitter, have keyword-monitoring bots, not people, monitoring websites. The occurrence of some critical/sensitive keywords on the Oroom could be the reason for such a warning. It is enough to check the "deep into " threads to verify this.

    The issue here, no doubt, is related to the English language... If you write in Arabic characters, or Arabic with Latin characters, you are mostly safe.
    These are intuitive AIs; they do not follow logical rules like searching for keywords and expressions. The same AI can learn Arabic (it probably does already) or Latin-script Arabic just as easily.

    Viewing the full detailed list of violations would be extremely useful for gaining more insight. But one way or another, the AI learned (biased through the feedback of the chosen focus group and the reporting guidelines) that, for example, the "American Values" thread gives misleading information. Clearly, the biases of the company and the early training/reporting are affecting how automated flagging is being done.
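    For what it's worth, the rule-based keyword bot speculated about above takes only a few lines to sketch. Everything here, from the keyword list to the function name, is invented for illustration; nobody outside these companies knows what their pipelines actually match:

```python
# Toy keyword-monitoring bot of the kind speculated about in this
# thread. The "sensitive" list below is entirely made up.
SENSITIVE = {"recruit", "download now", "free stream"}

def flag_post(text):
    """Return the set of sensitive keywords found in a post."""
    lowered = text.lower()
    return {kw for kw in SENSITIVE if kw in lowered}
```

    The contrast with a learned classifier is the point: this bot matches fixed strings in any language you hard-code, while a trained model generalizes to phrasings and languages it was never explicitly given.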
     
    Death To Google

    New Member
    Those interested or alarmed by any aspect of this topic should get an idea about the technology involved. Below are a couple of short and entertaining videos.

    Google's Deep Mind Explained! - Self Learning A.I.

    Elon Musk's 'Dota 2' Experiment is Disrupting Esports in a Big Way - No Playing Field

    The AI Google uses teaches itself to classify content. The only human involvement is feeding "likes" and "dislikes" to the AI; using reinforcement learning, the AI improves itself. The Google reports, initially carried out by the closed group and now open to the public, are essentially a "dislike". They do not affect the search ranking of the page directly; they affect how the AI classifies the page. The AI learns continuously; it doesn't have a "final state" or a "finished product". In theory, positive reinforcement can be given as well. For example, if someone at Google was pleased that the "American Values" thread was flagged by the AI and gave the AI a "like" for a job well done, it would learn to identify and hunt down similar content more effectively in the future.
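    The like/dislike feedback loop described above can be illustrated with a toy model. This is only a sketch under loose assumptions: Google's system is presumably a deep reinforcement-learning model, while this toy just nudges per-word weights up on a "like" (the flag was correct) and down on a "dislike". All names are hypothetical.

```python
# Toy illustration of feedback-driven flagging: reports act as
# "dislikes", approvals as "likes", and the model's weights shift
# accordingly. Not Google's actual architecture.
from collections import defaultdict

class ToyFlagger:
    def __init__(self, lr=0.5):
        self.weights = defaultdict(float)  # per-word score
        self.lr = lr                       # learning rate

    def score(self, text):
        return sum(self.weights[w] for w in text.lower().split())

    def flags(self, text):
        """Flag content whose accumulated word score is positive."""
        return self.score(text) > 0

    def feedback(self, text, liked):
        # A "like" reinforces flagging this kind of content;
        # a "dislike" suppresses it.
        delta = self.lr if liked else -self.lr
        for w in text.lower().split():
            self.weights[w] += delta
```

    The behavior matches the post's warning: whatever the early feedback rewards, the model keeps hunting for, which is why the biases of the initial focus group matter so much.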

    While Google does have an AI ethics board, there is no transparency surrounding it. Some might be interested to know that the head of Google DeepMind, Demis Hassabis, at a younger age created a video game in which the player is God, and the god/player's prophet on earth is a pet creature with its own reinforcement-learning AI. Treat, cuddle, and feed the pet after it demolishes a villager's home, and it will turn into an abusive and dreaded beast upon maturity. Teach it to help the villagers and sing kumbaya, and the pet will do just that, with the side effect of making the villagers lazy and quick to pray when they want something.

    I say this totally seriously: show the stiffest resistance early on, because the lessons these AIs learn early will stick hardest.
     