Google's video-streaming subsidiary YouTube has come under fire for a revision to its content rules that bans 'instructional hacking and phishing videos,' with critics arguing that it will do little to stop ne'er-do-wells but much to hamstring the information security (infosec) community.
Founded in February 2005 by Chad Hurley, Steve Chen, and Jawed Karim after their time working together at PayPal, YouTube launched in beta form in May 2005 and was acquired by Google in 2006. The site is absolutely massive: its users, who range from people sharing home videos or quickly-removed pirated films to professional 'streamers' and large corporations, upload around 400 hours of footage every minute, and it enjoys an Alexa ranking placing it as the second most popular site in the world - behind only parent company Google's own search portal.
The site isn't without controversy, however. YouTube has been accused of heavy-handed censorship, while at the same time being criticised for allowing controversial or abusive material to remain available. The company relies heavily on algorithms, rather than human moderation, to categorise and filter uploaded content, resulting in instances of child abuse and suicide content appearing on its supposedly child-friendly YouTube Kids application.
Now, the company has been accused of censoring the infosec community with the introduction of a new rule banning 'instructional hacking and phishing: Showing users how to bypass secure computer systems.'
Clearly designed to prevent videos teaching outright criminal behaviour from being uploaded and shared, the new rule is already affecting recognised security researchers and other infosec community members. 'We made a video about launching fireworks over Wi-Fi for the 4th of July only to find out @YouTube gave us a strike because we teach about hacking, so we can't upload it,' writes Kody Kinzie, creator of the popular YouTube channel Cyber Weapons Lab. 'We haven't even uploaded the firework video. We can't due to a strike on a video about the WPS-Pixie Wi-Fi vulnerability.'
'This may sound like a commonsense measure, but consider: the "bad guys" can figure this stuff out on their own,' writes Cory Doctorow on BoingBoing. 'The two groups that really benefit from these disclosures are: Users, who get to know which systems they should and should not trust; and developers, who learn from other developers' blunders and improve their own security.
'Youtube banning security disclosures doesn't make products more secure, nor will it prevent attackers from exploiting defects - but it will mean that users will be the last to know that they've been trusting the wrong companies, and that developers will keep on making the same stupid mistakes...forever.'
The rule appears to have been added in an April 2019 update, though it seems to have been enforced only recently. Neither Google nor its YouTube subsidiary has yet responded to complaints from those affected.
July 1 2020 | 17:34