SUN introduces new AI platform for corrugated converting industry

Helios, SUN Automation Group’s new AI and machine learning platform tailored specifically to the corrugated converting industry, launched today.
The platform is OEM-agnostic and engineered to give corrugated manufacturers insight into the performance of their machines, reportedly minimizing downtime, optimizing maintenance schedules, and maximizing profit.
“IIoT makes every bit of data actionable,” says Helios’ director of technology, Matthew C. Miller. “So many corrugated plants rely on human intuition and experience to drive their decisions.
“With Helios, anomalies that are imperceptible to even the most well-trained operators can be detected in real-time and acted upon. And the machine learning capabilities will mean that the platform only gets smarter the more data and user reactions that it is able to process.”
The new platform is designed to minimize downtime, maximize profitability, and reduce opportunity costs by ensuring machines are taken offline for planned preventative maintenance rather than for major malfunctions.
Other features include preventative/proactive parts ordering, knowledge about the exact time and cost of parts replacements, the ability for operators to pinpoint the source of slowdowns and other issues, and operator-efficiency training to help machine operators learn and adapt to best practices.
“We understand that data is only as powerful as the actionable insights it can provide,” says Chris Kyger, president of the SUN Automation Group. “That’s why we are so excited to bring Helios to the corrugated industry. This incredible technology will help box plants increase productivity and efficiency while reducing costs and downtime.”
Helios provides core insights from an accessible, user-friendly dashboard enabling three key benefits: remote monitoring, predictive maintenance, and anomaly detection.
Remote monitoring provides deep insights into current and historical machine operation and performance that can be seen and accessed in real-time from any device. Meanwhile, predictive maintenance optimizes machine maintenance intervals using artificial intelligence that adapts based on the machine operation and usage.
Anomaly detection notifies users of abnormal machine states, allowing operators to react to a potential issue before a failure occurs. The company says that more robust predictive analytics will be phased into the platform over time.
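SUN has not published how Helios detects anomalies, but a rolling z-score is one common baseline for this kind of real-time check: flag any sensor reading that drifts too many standard deviations from its recent history. A minimal sketch, with illustrative window and threshold values:

```python
from collections import deque
from statistics import mean, stdev

def make_anomaly_detector(window=30, threshold=3.0):
    """Flag readings more than `threshold` standard deviations
    away from the rolling mean of the last `window` samples."""
    history = deque(maxlen=window)

    def check(reading):
        anomalous = (
            len(history) == window
            and stdev(history) > 0
            and abs(reading - mean(history)) > threshold * stdev(history)
        )
        history.append(reading)
        return anomalous

    return check

# Hypothetical motor-temperature readings; the final one is a spike.
detector = make_anomaly_detector(window=5, threshold=3.0)
readings = [100, 101, 99, 100, 102, 100, 250]
flags = [detector(r) for r in readings]  # only the last reading is flagged
```

Production systems layer learned models on top of baselines like this, but even a simple threshold can surface issues no operator would spot in a stream of thousands of readings per minute.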
Published at Tue, 02 Mar 2021 08:48:45 +0000
Artificial Intelligence (AI) tools realities: 5 questions to ask your developers
IT leaders need to understand some hard truths of Artificial Intelligence tools in order to shape AI strategy. Consider these key questions to discuss with your developers

Artificial Intelligence (AI) tools that just a few years ago would have been found only at the most cutting-edge companies are now becoming commonplace. But while specialized hardware, software, and frameworks may be more mainstream these days, the knowledge and experience to use them effectively have not kept up.
Here are five essential questions to ask your development teams before you finalize your AI strategy.
1. What do you mean when you say you want to use AI tools?
Artificial Intelligence covers a broad range of definitions, algorithms, approaches, tools, and solutions. For example, there are the underlying approaches such as machine learning (both supervised and unsupervised), and rule-based systems. On top of these approaches are packages that provide solutions from image recognition to natural language processing (NLP). Which approaches and tools are the best for addressing the problem?
[ Check out our primer on 10 key artificial intelligence terms for IT and business leaders: Cheat sheet: AI glossary. ]
Developers need to choose the right tool for the problem at hand. BERT, GPT, and other complex neural network-based approaches may dominate the headlines, but that doesn't mean they're the most appropriate tool for most use cases. The simplest algorithm that solves the problem is typically best, and innovation leaders should be sure their teams are selecting their tools accordingly, not based on hype.
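One cheap way to enforce "simplest tool first" is to measure what a trivial baseline achieves before anyone reaches for a large model: a classifier that can't clearly beat always-guessing-the-majority-class isn't earning its complexity. A sketch, using a hypothetical quality-control label set:

```python
from collections import Counter

def baseline_accuracy(labels):
    """Accuracy of always predicting the majority class --
    the floor any fancier model must clearly beat."""
    majority_label, count = Counter(labels).most_common(1)[0]
    return count / len(labels)

# Imbalanced inspection data: 92 good parts, 8 defects.
labels = ["ok"] * 92 + ["defect"] * 8
floor = baseline_accuracy(labels)  # 0.92
```

On data this imbalanced, a model reporting "92 percent accuracy" has learned nothing at all, which is exactly the kind of hype-versus-substance question leaders should be asking.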
2. How well do you understand the problem you want AI to solve?
The goal of AI tools is to mimic human intelligence – so how well do you understand the problem? Is the approach that subject matter experts take to solve the problem well understood? What is the expected accuracy of the subject matter expert? How confident are you that the AI tool can meet or exceed that level of accuracy?
If your developers are not experts in the domain where the AI will be trained – and they probably aren’t – they’ll likely need input from subject matter experts in that field. These experts will need to work alongside data scientists and engineers to craft, tune, and evaluate the models, so it’s important to designate knowledgeable experts the development team can turn to with questions.
Also keep in mind that even experts make mistakes, so don’t expect an AI system trained on a non-trivial problem to be perfect either. AI may be able to compete at or above human levels in chess, Go, and Jeopardy, but these narrow domains are outliers rather than the rule. Be realistic about how well the system can be expected to perform – claims that AI systems will surpass their human counterparts in terms of quality and accuracy rarely turn out to be true.
3. Many AI tools depend on data for machine learning. What data do you have?
Algorithms are only as good as the data they are given. Machine learning models can require massive quantities of data to build accurate statistical models. Depending on the use case and the algorithm, data requirements can range from thousands to millions of examples. Is the data available? Has the quality of the data been checked? Bias inherent in the data is also an issue – can you be sure that the data doesn’t include biases?
[ How can you guard against AI bias? Read also AI bias: 9 questions for IT leaders to ask. ]
Many leaders believe they have mass quantities of valuable untapped data just waiting to be mined by an AI algorithm. And it’s true that most companies maintain logs, transactions, old emails, customer information databases, and so on – but frequently that data is noisy, inconsistent, or unsuited for training an AI system for the task you want to address.
A thorough assessment of the available training data is a prerequisite for any AI endeavor. In some cases, data preparation can take up to 90 percent of the development effort in an AI project, so validating the quality of the data available should be a top priority from the beginning.
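That assessment doesn't have to start with heavyweight tooling. A first-pass audit can be as simple as counting rows, missing values, and label distribution, which answers the availability, quality, and bias questions above at a glance. A minimal sketch over hypothetical records:

```python
def audit(records, label_field="label"):
    """Minimal data audit: row count, per-field missing values,
    and label distribution -- questions to answer before training."""
    missing = {}
    labels = {}
    for row in records:
        for field, value in row.items():
            if value in (None, ""):
                missing[field] = missing.get(field, 0) + 1
        label = row.get(label_field)
        labels[label] = labels.get(label, 0) + 1
    return {"rows": len(records), "missing": missing, "labels": labels}

report = audit([
    {"temp": 71.2, "label": "ok"},
    {"temp": None, "label": "ok"},
    {"temp": 95.0, "label": "fault"},
])
```

If the audit shows heavy gaps or a badly skewed label distribution, that 90-percent data-preparation figure stops being an abstraction and becomes the project plan.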
4. What kind of compute resources will you need?
If you are planning to host your initiative in the data center, have you scoped the compute resources needed? AI tools such as machine learning algorithms can be compute-intensive and may need additional hardware such as GPUs to handle the processing load.
If you are planning to host it in the cloud using services such as Google's Cloud ML or Amazon's Comprehend, have you scoped the costs? What are the infosec issues regarding sending data outside the firewall?
Training and deploying AI models may require hardware resources not typically available at organizations that are new to AI. Cloud providers can facilitate access to the necessary equipment, but they can be expensive, and transferring training data to the cloud may be out of the question if your dataset contains confidential information. Make sure you have not only the compute resources you’ll need, but also clearance to transfer the data to train your AI systems there.
5. How long will it take to get the solution into production – and once it’s there, how will you update the solution?
In addition to standard application-development best practices, AI solutions require model training and testing. What percentage of false alarms is acceptable (precision)? What percentage of missed targets is acceptable (recall)?
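Both metrics fall out of three counts from a test run: true positives, false positives (false alarms), and false negatives (misses). A minimal sketch, with a hypothetical defect-detection run for the numbers:

```python
def precision_recall(tp, fp, fn):
    """Precision: of everything the model flagged, how much was right.
    Recall: of everything it should have flagged, how much it caught."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# 40 defects correctly flagged, 10 false alarms, 10 defects missed.
p, r = precision_recall(tp=40, fp=10, fn=10)  # 0.8, 0.8
```

The acceptable trade-off is a business decision, not a modeling one: a safety-critical inspection line may tolerate false alarms but not misses, which is precisely why these thresholds belong in the conversation with developers up front.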
Training and testing the model can take weeks, months, or even longer. Once the solution is in production, how will updates be handled? Will the model need to be completely retrained and tested? How will the integrity of the model be ensured once it’s in production? Nuances in live data can change over time so periodic re-evaluation and tuning may be required.
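One pragmatic answer to "when do we retrain?" is a drift check: track live accuracy against the accuracy measured at deployment time and flag the model once it slips past an agreed tolerance. A sketch, with an illustrative 5-point tolerance:

```python
def needs_retraining(live_accuracy, deploy_accuracy, tolerance=0.05):
    """Flag the model for retraining when live accuracy drifts
    more than `tolerance` below its accuracy at deployment time."""
    return live_accuracy < deploy_accuracy - tolerance

needs_retraining(0.88, 0.95)  # drifted past tolerance -> retrain
needs_retraining(0.93, 0.95)  # within tolerance -> leave it alone
```

This assumes you can keep labeling a sample of live data to measure accuracy at all, which is itself a maintenance cost that should be budgeted before go-live.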
AI systems will inevitably make mistakes, so you’ll need plans in place to deal with those false predictions when they occur. And just as traditional software deployments still require maintenance and administration after the core development is completed, AI systems also need to be continually evaluated, tuned, and updated. Just because the project has gone live doesn’t mean you should immediately assign the experts who trained the models to a new project.
Setting an AI project up for success can require significant preparation, in terms of engineering effort but also in embracing the right mindset within the organization. Above all, it's important to set realistic expectations for what an AI system can achieve. Even if your system is deployed on the fastest processors money can buy, it won't necessarily outperform, or even match, the accuracy of the human subject matter expert it's meant to emulate.
[ Get the eBook: Top considerations for building a production-ready AI/ML environment. ]
Published at Tue, 02 Mar 2021 07:52:30 +0000