Trust is the glue that holds enterprises and processes together, and lately, more of that trust has been delegated to artificial intelligence. How much decision-making can and should be entrusted to the machines? We often trust AI recommendations for books related to the ones we have purchased. We are learning to trust AI to help guide our trucks and cars, issuing warnings and applying the brakes in traffic. Our call-center staff trust AI-generated recommendations to upsell the customers they have on the line. We let AI move more valuable customers to the head of the queue. But how trustworthy is AI? Maybe more, maybe less trustworthy than we perceive it to be; it depends on the situation.
That’s the conclusion drawn by Chiara Longoni and Luca Cian in a recent analysis posted in Harvard Business Review. Consumers, for example, “tend to believe AI is more competent at making recommendations when they are seeking functional or practical offerings.” But they prefer human judgement “when they are more interested in an offering’s experiential or sensory features.”
In terms of corporate decision-making, at least one in four executives responding to a survey released by SAS, Accenture Applied Intelligence, Intel and Forbes Insights reported having to manually intervene to override an AI-generated decision. Still, a majority are happy with the results of their AI efforts and intend to keep moving forward. Close to three-fourths of executives, 74%, recognize that close oversight of AI is essential, the survey also shows. (I was part of the team that designed and analyzed the study, as part of my work with Forbes Insights.)
Longoni and Cian explored consumer trust with AI in a series of experiments involving 3,000 consumers. Among their conclusions: “Simply offering AI assistance won’t necessarily lead to more successful transactions. In fact, there are cases when AI’s suggestions and recommendations are helpful and cases when they might be detrimental.”
They call reliance on AI’s recommendations a “word-of-machine effect,” which stems from a belief that AI systems are more competent than humans in dispensing advice on “utilitarian qualities,” such as selecting hair-care products. That belief is misplaced, as humans are just as capable of assisting with such choices. “Vice versa, AI is not necessarily less competent than humans at assessing and evaluating ‘hedonic’ attributes,” meaning those involving sensory experiences. “AI selects flower arrangements for 1-800-Flowers and creates new flavors for food companies such as McCormick.”
Leveraging the best of both worlds may be the best approach to building trustworthy AI. “Even though it is clear that consumer confidence in AI assistance is higher when searching for products that are utilitarian (e.g., computers and dishwashers), this does not mean that companies offering products that promise more hedonic experiences (e.g., fragrances, food, and wine) are out of luck when it comes to using AI recommenders,” Longoni and Cian conclude. “In fact, we found that people embrace AI’s recommendations as long as AI works in partnership with humans. For instance, in one experiment, we framed AI as augmented intelligence that enhances and supports human recommenders rather than replacing them. The AI-human hybrid recommender fared as well as the human-only recommender even when experiential and sensory considerations were important.”
There are even situations where using AI is akin to swatting a fly with a cannon. In a recent article in Entrepreneur, Ganes Kesari explains why AI may simply be overkill for many problems. “A majority of business problems can be solved by simple analysis,” he points out. “Only a fraction of businesses really need AI. With AI capability getting democratized, it can be tempting to use it for every business problem.”
Plus, Kesari adds, AI often requires large volumes of data — the right data at that. “AI has a huge data appetite and it needs hundreds of thousands of data points for basic tasks such as detecting pictures. This data must be cleaned and prepared in a specific format to teach AI. Unfortunately, a high volume of quality, labeled data is not a luxury that every organization can afford.”
The key is to set expectations about AI appropriately. It is not a magical force that will lift businesses to new heights of profitability, as many vendors suggest. Importantly, it needs to be trustworthy to both corporate decision-makers and consumers. Consider this a work in progress.
Published at Tue, 12 Jan 2021 22:41:15 +0000
One of the threads we’ve been picking up is how Artificial Intelligence driven Security Analytics can play into improving the response times and overall efficiency of the Security Operations Center (SOC). Automating Incident Response with machine learning adds enormous value to the SOC. The whole “New Normal” we established last year (which seems a little weird to say, I admit) has already changed how the SOC does what it does. Remote work and distributed teams are where it’s at, and it seems like it’s where we’ll be for a good long time to come. Even with the vaccine rolling out to the general public, the change is probably here to stay.
To be honest, with the massive reduction in commuter traffic, I’m OK with it.
Revisit and Refresh
A quick glance at any cybersecurity headline will show that the number of attacks hasn’t dropped. Worse, this just reflects the ones that have made the news. For every major event that gets written up on one of the security sites or hits an influential blog, there are an untold number more that are remediated without ever hitting the paper, and several times that number that go undetected.
Trying to stay on top of the sheer flood of attacks has been an ongoing challenge for the SOC, which is why we’ve been talking about using artificial intelligence driven security analytics to improve the effectiveness and efficiency of the Security Operations team. A large focus in this effort is reducing the workload the team faces by giving them a clear picture of what’s happening in the environment and offloading whatever is practicable to an automated system. After all, the more the machine takes off their plate, and the clearer the picture of what they’re facing, the more effective the SOC staff can be. Automating Incident Response with machine learning makes all the difference.
Recognize and Respond
There are a number of places where Artificial Intelligence (AI) can make the SecOps team’s life easier, and we’ve talked about them before. First, identifying threats based on risk and highlighting them for the operations team. Second, dragging true positive threats out of the flood of data coming in from across the security stack. And third, giving threats context so the team can react to the most important threats as they happen. AI also gives them the ability to automate a lot of the mundane issues they see come across the screen every day.
That automation plays heavily into the Incident Response part of the SecOps team’s role. A lot of what’s involved in the initial response to an event can be easily automated. Or at least it should be easy to automate. One of the challenges with rule-based automation is getting the rules right. While it’s pretty easy to hit the basics, setting up rules that can handle cases closer to the edge is more difficult. You have to find the balance between rules that catch the bad actors without making them so strict that they interfere with normal operations. After all, there is little the SecOps team likes hearing less than users complaining that they can’t get their job done because of something the SOC controls.
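As a rough illustration of the fixed-rule approach described above, a hand-written rule boils down to a static threshold. This is a hypothetical toy, not anything from Gurucul’s product; the threshold and event shape are assumptions:

```python
# A fixed, hand-written rule: flag any account with more than
# N failed logins. Simple, but the threshold never adapts on its own.
FAILED_LOGIN_LIMIT = 5  # hypothetical threshold, tuned by hand

def rule_flags(failed_logins_by_user):
    """Return users whose failed-login count breaches the static rule."""
    return [user for user, count in failed_logins_by_user.items()
            if count > FAILED_LOGIN_LIMIT]

events = {"alice": 2, "bob": 9, "carol": 6, "dave": 1}
print(rule_flags(events))  # → ['bob', 'carol']
```

The rule is easy to write and easy to understand, but it bakes in exactly the trade-off described above: set the limit too low and legitimate users get blocked, too high and attackers slip through, and nobody re-tunes it until something breaks.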
The advantage with automating Incident Response with machine learning is that it can be considerably more flexible than a purely rules-based system. Where rules have fixed responses that may or may not be easily adapted to a changing situation, an Artificial Intelligence driven Security Analytics system can learn and adapt on the fly. Where it starts with a rule, it can then alter and update that rule to meet a changing threat surface, a changing environment, and changing attacker methods.
Learn to Be Flexible
Behind the AI are machine learning algorithms that learn, over time, what’s normal in the environment. It’s ultimately what a SecOps analyst does with manually configured rules: they see what’s happening, they see new threats as they evolve, they see how the rules affect security and, nearly as important, affect overall performance, and they rewrite them as needed.
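The learning loop described above can be sketched in miniature: keep a rolling baseline of “normal” activity and flag observations that fall far outside it. This is an illustrative sketch assuming a simple z-score test over one metric; real behavioral analytics systems use far richer models:

```python
import statistics

class AdaptiveBaseline:
    """Toy 'learn what's normal' detector using a rolling z-score."""

    def __init__(self, window=50, z_cutoff=3.0):
        self.window = window        # how much history shapes the baseline
        self.z_cutoff = z_cutoff    # how far from normal counts as anomalous
        self.history = []

    def observe(self, value):
        """Record a new observation; return True if it looks anomalous."""
        anomalous = False
        if len(self.history) >= 10:  # need some history before judging
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1.0  # avoid div by zero
            anomalous = abs(value - mean) / stdev > self.z_cutoff
        self.history.append(value)
        self.history = self.history[-self.window:]  # baseline adapts over time
        return anomalous

baseline = AdaptiveBaseline()
for v in [10, 11, 9, 10, 12, 10, 11, 9, 10, 11]:
    baseline.observe(v)        # ordinary traffic builds the baseline
print(baseline.observe(60))    # → True: a spike well outside the baseline
```

Unlike the hand-tuned rule, the threshold here moves with the environment: as the window of history shifts, so does what counts as “normal,” which is the core of what the paragraph above describes an analyst doing by hand.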
Automating Incident Response with machine learning takes a lot of the burden off the human analysts. The machine can do the same tasks, automate its responses, and keep everything running effectively and smoothly. By lightening the workload, the analysts can focus on the important parts and have confidence that the system has their back.
Attend the Webinar
If you want to know more about how Artificial Intelligence driven Security Analytics automates Incident Response, check out the webinar I’m doing on the subject.
Webinar: Automating Incident Response with Machine Learning
Date: Thursday, January 14, 2021 @ 10:00am PST
The post Automating Incident Response with Machine Learning appeared first on Gurucul.
*** This is a Security Bloggers Network syndicated blog from Blog – Gurucul authored by Mike Parkin. Read the original post at: https://gurucul.com/blog/automating-incident-response-with-machine-learning
Published at Tue, 12 Jan 2021 21:33:45 +0000