In the burgeoning field of computer science known as machine learning, engineers often refer to the artificial intelligences they create as "black box" systems: once a machine learning engine has been trained on a collection of example data to perform anything from facial recognition to malware detection, it can take in queries (Whose face is that? Is this app safe?) and spit out answers without anyone, not even its creators, fully understanding the mechanics of the decision-making inside that box. But researchers are increasingly proving that even when the inner workings of those machine learning engines are inscrutable, they aren't exactly secret. In fact, they've found that the guts of those black boxes can be reverse-engineered and even fully reproduced, "stolen," as one group of researchers puts it, with the very same methods used to create them.

In a paper released earlier this month titled "Stealing Machine Learning Models via Prediction APIs," a team of computer scientists at Cornell Tech, the Swiss institute EPFL in Lausanne, and the University of North Carolina details how they were able to reverse engineer machine-learning-trained AIs based only on sending them queries and analyzing the responses. By training their own AI on the target AI's output, they found they could produce software that predicted the responses of the AI they had cloned with near-100% accuracy, sometimes after a few thousand, or even just hundreds, of queries.

The researchers tested their attack against two services: Amazon's machine learning platform and the online machine learning service BigML, attempting to reverse engineer AI models built on those platforms from a series of common data sets.

On Amazon's platform, for instance, they tried "stealing" an algorithm that predicts a person's salary based on demographic factors like employment, marital status, and credit score, and another that tries to recognize one-through-ten numbers from images of handwritten digits. In the demographics case they found they could reproduce the model without any discernible difference after 1,485 queries; the digit-recognition model took just 650.

On the BigML service, they tried their extraction technique on one algorithm that predicts German citizens' credit scores based on their demographics and on another that predicts how people like their steak cooked (rare, medium, or well-done) based on their answers to other lifestyle questions. Replicating the credit score engine took just 1,150 queries, and copying the steak-preference predictor took just over 4,000.
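The core of the attack is simple to illustrate: query a black-box model, then train your own model on its answers until the two agree. Below is a minimal sketch of that idea, assuming a hidden linear classifier as a stand-in for a cloud prediction API; the model, the query budget, and all names here are illustrative, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden "target model": a linear classifier the attacker can only query.
# (Stand-in for a cloud prediction API; the weights are the secret.)
_secret_w = rng.normal(size=5)

def target_predict(X):
    """Black-box prediction API: returns 0/1 labels only."""
    return (X @ _secret_w > 0).astype(int)

# --- Extraction: query the API, then train a clone on its answers ---
n_queries = 2000
X_query = rng.normal(size=(n_queries, 5))
y_query = target_predict(X_query)  # labels obtained via the API

# Fit the clone with plain logistic regression (batch gradient descent).
w = np.zeros(5)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X_query @ w)))       # clone's probabilities
    w -= 0.1 * X_query.T @ (p - y_query) / n_queries

def clone_predict(X):
    return (X @ w > 0).astype(int)

# Agreement between clone and target on fresh, unseen inputs.
X_test = rng.normal(size=(5000, 5))
agreement = (clone_predict(X_test) == target_predict(X_test)).mean()
print(f"clone/target agreement: {agreement:.3f}")
```

In this toy setting the clone's agreement with the hidden model approaches 100% after a couple of thousand queries, which echoes the scale of the query counts the researchers report against the real services.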