Machine learning applications are everywhere now. Chances are, you’ve interacted with a machine learning application today. Image and voice recognition, medical diagnosis, and data extraction all use machine learning. Your computer uses it. So does your doctor’s office. As the world becomes more complex, machine learning applications are showing up in more sectors.
Why is this?
Machine learning simplifies the complex. That’s what makes it so useful. It can quickly identify patterns in data and make decisions based on those patterns, often without human intervention, and at a speed and scale no human could match.
Facebook and Twitter left most other companies around the world far behind when it comes to using machine learning to improve their business model. But their practices haven’t always resulted in the best reactions from end-users. There’s much to learn from these companies on what to do–and what not to do–when it comes to scaling and applying data analytics.
Get the data you need first
It seems like Facebook uses machine learning for everything. The company uses it for content detection and content integrity. It’s used for sentiment analysis, speech recognition, and fraudulent account detection. Operating functions like facial recognition, language translation, and content search functions also run on machine learning. The Facebook algorithm manages all this while offloading some computation to edge devices to reduce latency.
Offloading allows users with older mobile devices (more than half of the global market) to access the platform faster. It’s also an excellent tactic for legacy systems with limited computing power, which can lean on the cloud to handle the torrent of data. Introducing accessible real-world metadata can further improve cloud-based systems through customization, correction, and contextualization.
Start by thinking about what data is really needed and which of those datasets matter most. Then start small. Too often, teams get distracted in the rush to do it now and do it big, but don’t lose sight of the real objective: do it right. Focus on modest efforts that work, then expand the application to cover more datasets or to adapt more quickly to changing parameters. Concentrate on early successes and scale upward. By doing this, you’ll avoid early failures caused by too much data too soon. Even if a failure does happen, the momentum of smaller successes will propel the project forward.
Automate training
Machine learning requires ongoing modification and training to remain fresh. Both Twitter and Facebook use Apache Airflow to automate the training that keeps their platforms updated, sometimes on hourly cycles. How much and how often you retrain will depend on computing costs and the availability of resources, but ideal algorithm performance depends on training that is properly scheduled for the dataset.
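To make the scheduling idea concrete, here is a minimal sketch of an hourly retraining pipeline in Apache Airflow. The DAG name, schedule, and retrain_model callable are illustrative assumptions, not a description of Facebook’s or Twitter’s actual pipelines.

```python
# Hypothetical Airflow DAG that retrains a model on an hourly cycle.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def retrain_model():
    # Placeholder: load the latest data, retrain, evaluate, and publish the model.
    ...


with DAG(
    dag_id="hourly_model_retraining",   # illustrative name
    start_date=datetime(2023, 1, 1),
    schedule_interval=timedelta(hours=1),  # retrain every hour
    catchup=False,
) as dag:
    retrain = PythonOperator(
        task_id="retrain_model",
        python_callable=retrain_model,
    )
```

The schedule interval is the knob to tune against computing costs: an hourly cycle keeps the model fresh, but a daily or weekly cycle may be all the budget (or the data) justifies.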
One of the biggest challenges may be choosing the type of learning to employ for the AI model. While deep learning methods have been the first choice for dealing with large datasets, classic tri-training may provide a strong baseline that outperforms deep learning, at least for natural language processing. While tri-training cannot be fully automated, it may produce higher-quality results through the use of diverse modules and democratic co-learning.
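For readers unfamiliar with tri-training, here is a deliberately simplified sketch of the core idea: three classifiers bootstrap-trained on labeled data take turns pseudo-labeling unlabeled examples for each other whenever the other two agree. The function names and the choice of decision trees are assumptions for illustration; real tri-training also filters pseudo-labels by estimated error rates.

```python
# Simplified tri-training sketch (semi-supervised learning).
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier


def tri_train(X_labeled, y_labeled, X_unlabeled, base=DecisionTreeClassifier(), rounds=5):
    rng = np.random.default_rng(0)

    # Bootstrap three diverse starting models from the labeled data.
    models = []
    for _ in range(3):
        idx = rng.integers(0, len(X_labeled), len(X_labeled))
        models.append(clone(base).fit(X_labeled[idx], y_labeled[idx]))

    for _ in range(rounds):
        for i in range(3):
            j, k = [n for n in range(3) if n != i]
            pred_j = models[j].predict(X_unlabeled)
            pred_k = models[k].predict(X_unlabeled)
            agree = pred_j == pred_k  # the other two models agree on these points
            if not agree.any():
                continue
            # Retrain model i on the labeled data plus the pseudo-labeled points.
            X_aug = np.vstack([X_labeled, X_unlabeled[agree]])
            y_aug = np.concatenate([y_labeled, pred_j[agree]])
            models[i] = clone(base).fit(X_aug, y_aug)
    return models
```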
Pick the right platform
One of the challenges both Twitter and Facebook now face is trying to standardize their initially unstructured approach to building frameworks, pipelines, and platforms. Facebook now relies heavily on PyTorch, while Twitter uses a mix of platforms, having moved from Lua Torch to TensorFlow.
To choose the right AI tool, look for a platform that can scale and think through the long-term needs of the company.
Don’t forget the end-user
A search for ‘machine learning’ and ‘Facebook’ together inevitably brings up hundreds of blog posts and articles on the negative feelings some users have about the AI features built into the site. Loss of privacy, data mining, and targeted advertising are some of the less worrying accusations thrown at the company. And yet many of the same users appreciate other AI tools that allow them to connect with friends and family in other countries who do not speak their language, and tools that keep the platform free from pornography and hate speech (if somewhat imperfectly).
It was not the technology itself but the lack of transparency in how Facebook implemented machine learning on its platform that frustrated users and mobilized some against it. Don’t make the same mistake. Trust and transparency should be keywords for all major decisions. End-users will appreciate it, and they will leave a well-designed site with the sense that they have gained something from the interaction instead of feeling personally violated by it.
Machine learning, or rather the idea that machines can learn to ‘do’ without an explicit set of instructions (programming), has been the basis of many movies where humans end up getting the short end of the deal. But is machine learning truly that dire?
Unlikely. Machine learning, which is a subcategory of artificial intelligence, is simply a way for machines to imitate intelligent human behavior. It’s a type of data analysis that allows programs to learn from experience in order to complete complex tasks, much like humans problem-solve. Two approaches that come up again and again are deep learning and reinforcement learning. But what’s the difference?
Deep Learning
Deep learning is essentially what you see in any young child as they start to understand that while chickens are birds, not all large birds are chickens. It is based upon the ability to classify both the common features (in this case: feathers, beaks, wings, etc.) and the uncommon features that separate each grouping from the others (sound, size, feather pattern, beak length). This kind of hierarchical feature learning stacks multiple layers of learning nodes: the outputs of one layer become the inputs that are fed to the next, higher level.
In deep learning, the machine begins with raw data that must then be sorted into relevant and irrelevant subsets. Exposed to more data, the machine improves over time. This is similar to how a baby learns.
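To show what “stacking layers” looks like in practice, here is a minimal sketch in PyTorch (the framework Facebook relies on). The layer sizes and the two-class “chicken or not” framing are illustrative assumptions, not a real model.

```python
# Minimal layer-stacking sketch: each layer's outputs feed the next layer.
import torch
import torch.nn as nn

# Hypothetical classifier: 32 raw input features -> two classes (chicken / not chicken).
model = nn.Sequential(
    nn.Linear(32, 64),   # first layer learns low-level feature combinations
    nn.ReLU(),
    nn.Linear(64, 16),   # higher layer combines them into more abstract features
    nn.ReLU(),
    nn.Linear(16, 2),    # final layer maps those features to the two classes
)

scores = model(torch.randn(8, 32))  # a batch of 8 examples
print(scores.shape)                 # torch.Size([8, 2])
```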
Reinforcement Learning
Meanwhile, reinforcement learning relies more on trying out slight variations of a problem. As results come in (favorable and unfavorable), the model’s behavior changes until the best outcome emerges. This is reminiscent of “The Good Place,” where Michael keeps trying to create a better version of his neighborhood.
Reinforcement learning uses a closed-loop algorithm where each action receives feedback in a trial-and-error process until the best action is determined.
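Here is a toy sketch of that feedback loop: an epsilon-greedy agent choosing between two actions with hidden payout rates. The payout numbers are made up for illustration, and real reinforcement learning problems also involve states and long-term rewards, but the loop of act, observe feedback, adjust is the same.

```python
# Toy closed-loop example: epsilon-greedy learning on a two-armed bandit.
import random

true_payouts = [0.3, 0.7]   # hidden reward probability of each action (assumed)
estimates = [0.0, 0.0]      # the agent's learned value of each action
counts = [0, 0]

for step in range(1000):
    # Explore occasionally; otherwise exploit the best-known action.
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = estimates.index(max(estimates))

    # Feedback from the environment: reward 1 with the action's hidden probability.
    reward = 1.0 if random.random() < true_payouts[action] else 0.0

    # Trial-and-error update: nudge the estimate toward the observed reward.
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # the estimate for action 1 should approach 0.7
```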