What Twitter & Facebook Teach Us About Machine Learning
Tech giants Facebook and Twitter are experts at using machine learning, but their successes have come with some spectacular missteps. Keep these lessons in mind to improve your own business model, and never lose sight of the end-user experience.
Facebook and Twitter have left most other companies around the world far behind when it comes to using machine learning to improve their business models. But their practices haven’t always produced the best reactions from end-users. There’s much to learn from these companies about what to do, and what not to do, when it comes to scaling and applying data analytics.
Get the data you need first
It seems like Facebook uses machine learning for everything. The company uses it for content detection and content integrity. It’s used for sentiment analysis, speech recognition, and fraudulent account detection. Core features like facial recognition, language translation, and content search also run on machine learning. The Facebook algorithm manages all this while offloading some computation to edge devices to reduce latency.
Offloading allows users with older mobile devices (more than half of the global market) to access the platform faster. This is an excellent tactic for legacy systems with limited computing power. Legacy systems can use the cloud to handle the torrent of data. Introducing accessible real-world metadata can improve cloud-based systems through customization, correction, and contextualization.
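To make the idea concrete, here is a minimal PyTorch sketch of one common tactic for getting models onto older devices: dynamic quantization, which shrinks a model so it runs faster on modest hardware. The toy model and its sizes are hypothetical; this illustrates the general technique, not Facebook’s actual pipeline.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for a production model (hypothetical sizes).
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))
model.eval()

# Dynamic quantization converts the Linear weights to int8, shrinking the
# model and speeding up inference on CPU-bound, lower-end devices.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Inference works exactly as before.
with torch.no_grad():
    print(quantized(torch.randn(1, 128)))
```

Int8 weights take a quarter of the space of float32, which is much of what makes on-device inference practical for low-end phones.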
Start by thinking about what data is really needed, and which of those datasets are most important. Then start small. Too often, teams get distracted in the rush to do it now and do it big. But don’t confuse the real objective: do it right. Focus on modest efforts that work, then expand the application to more datasets or adapt it more quickly to changing parameters. By scaling upwards from early successes, you’ll avoid the failures caused by too much data too soon. Even if a failure does happen, the momentum of smaller successes will propel the project forward.
Automate training
Machine learning requires ongoing modification and training to remain fresh. Both Twitter and Facebook use Apache Airflow to automate the training that keeps their platforms updated, sometimes on hourly cycles. How much and how often you retrain will depend on computing costs and the availability of resources, but ideal algorithm performance depends on properly scheduled training for the dataset.
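As a rough illustration, here is what a scheduled retraining job looks like as an Apache Airflow DAG (Airflow 2.x syntax). The DAG and task names are hypothetical and the retraining function is a stub; this sketches the scheduling pattern, not either company’s actual pipeline.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def retrain_model():
    # Stub: pull fresh data, retrain, validate, and publish the model here.
    pass

with DAG(
    dag_id="hourly_model_retrain",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@hourly",     # retrain every hour
    catchup=False,                   # skip backfilling missed runs
) as dag:
    retrain = PythonOperator(task_id="retrain", python_callable=retrain_model)
```

Changing one string (`@hourly` to `@daily`, say) rescales the whole cycle, which is what makes a scheduler like Airflow a better fit than hand-run retraining scripts.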
One of the biggest challenges may be choosing the type of learning to employ for the AI model. While deep learning methods have been the first choice for dealing with large datasets, it’s possible classic tri-training may create a strong baseline that outperforms deep learning, at least for natural language processing. While tri-training cannot be fully automated, it may produce higher quality results through the use of diverse models and democratic co-learning.
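For readers unfamiliar with tri-training, the core idea is simple: train three classifiers on bootstrap samples of the labeled data, then let any two that agree on an unlabeled example pseudo-label it for the third. The sketch below is a simplified scikit-learn illustration (the full algorithm also tracks each classifier’s estimated error before accepting pseudo-labels); the base learner and parameters are arbitrary choices, not anything Twitter or Facebook uses.

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

def tri_train(X_lab, y_lab, X_unlab, base=None, rounds=5, seed=0):
    """Simplified tri-training; assumes integer class labels."""
    base = base or DecisionTreeClassifier()
    rng = np.random.default_rng(seed)
    # Train three classifiers on bootstrap samples of the labeled data.
    clfs = []
    for _ in range(3):
        idx = rng.integers(0, len(X_lab), len(X_lab))
        clfs.append(clone(base).fit(X_lab[idx], y_lab[idx]))
    for _ in range(rounds):
        for i in range(3):
            j, k = [m for m in range(3) if m != i]
            p_j = clfs[j].predict(X_unlab)
            p_k = clfs[k].predict(X_unlab)
            agree = p_j == p_k  # pseudo-label where the other two agree
            if agree.any():
                X_aug = np.vstack([X_lab, X_unlab[agree]])
                y_aug = np.concatenate([y_lab, p_j[agree]])
                clfs[i] = clone(base).fit(X_aug, y_aug)
    return clfs

def tri_predict(clfs, X):
    # Final prediction is a majority vote of the three classifiers.
    votes = np.stack([c.predict(X) for c in clfs]).astype(int)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```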
Pick the right platform
One of the challenges both Twitter and Facebook now face is trying to standardize their initially unstructured approach to building frameworks, pipelines, and platforms. Facebook now relies heavily on PyTorch, while Twitter uses a mix of platforms, having moved from Lua Torch to TensorFlow.
To choose the right AI tool, look for a platform that scales and think through the long-term needs of the company.
Don’t forget the end-user
A search for ‘machine learning’ and ‘Facebook’ together inevitably brings up hundreds of blog posts and articles on the negative feelings some users have about the AI features built into the site. Loss of privacy, data mining, and targeted advertising are some of the less worrying accusations thrown at the company. And yet many of the same users appreciate other AI tools: the ones that let them connect with friends and family who speak another language, or that keep the platform free of pornography and hate speech (if somewhat imperfectly).
It was not the technology itself but the lack of transparency in how Facebook implemented machine learning on its platform that frustrated users and mobilized some against it. Don’t make the same mistake. Trust and transparency should be watchwords for all major decisions. End-users will appreciate it, and they will leave a well-designed site with the sense they have gained something from the interaction instead of feeling personally violated by it.
Read our longer blog post if you’re still asking “What is machine learning?”
AX Control can help you with your industrial automation parts replacement needs. Talk to our team today. We’re here to help!
Benefits of Open Platform Communications
OPC, or Open Platform Communications, has been around in some form since 1996. It is based on OLE (Object Linking and Embedding), now known as ActiveX, as well as COM (Component Object Model) and DCOM (Distributed Component Object Model) technologies.
The Component Object Model (COM) is a Microsoft technology that provides the required software infrastructure for data sharing in Windows NT and similar operating systems.
OPC is the standard used in industrial automation and other process-control applications and is primarily a way to address the needs of computer-integrated manufacturing.
OPC is designed to provide a common communication interface for diverse industrial devices. For example, ActiveX/COM technologies allow software components to share data and to interact with one another. This brings disparate process-control devices together in a near-standardized format.
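As a concrete illustration, here is a minimal sketch of reading a tag from a classic (COM/DCOM-based) OPC DA server using the third-party OpenOPC library for Python. The server and tag names below come from Matrikon’s free simulation server and are examples only; any installed OPC DA server would work the same way.

```python
# Requires Windows, the pywin32 COM bindings, and the OpenOPC library.
import OpenOPC

opc = OpenOPC.client()                   # DCOM-based OPC DA client
opc.connect('Matrikon.OPC.Simulation')   # example: Matrikon's simulation server
value, quality, timestamp = opc.read('Random.Int4')  # read a single tag
print(value, quality, timestamp)
opc.close()
```

Because every vendor’s server exposes the same COM interface, the same few lines work whether the tag lives in a PLC, a drive, or a historian. That interchangeability is the whole point of OPC.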