This week:

3 – Another reason to stop using SMS for security codes

2 – Some questionable ethics in Avast

1 – Your chatbot could be as reliable as a child on a sugar rush


3 – Another reason to stop using SMS for security codes

A database owned by a company that handles 5 million SMS text messages every day was left openly accessible to everyone on the internet.

This in itself is not ideal. But, according to a report on TechCrunch (and recently shared by Graham Cluley), the database “included the contents of [SMS] text messages sent to users, including one-time passcodes and password reset links for some of the world’s largest tech and online companies, including Facebook and WhatsApp, Google, TikTok, and others”, and the database was “growing by the minute”.

So what?

It’s a reminder that security codes sent via SMS text messages are weaker than other forms of Multi-Factor Authentication, such as codes generated by an authenticator app on your phone: those codes are computed on the device itself and are never transmitted over a network where they could be intercepted or logged.


2 – Avast said it would help eliminate tracking on the web, when it actually did the tracking itself

“Avast, the cybersecurity software company, is facing a $16.5 million fine after it was caught storing and selling customer information [to over 100 third parties] without their consent.”

According to a report in The Verge (and recently shared by Secure The Village), the FTC says that “from at least 2014 to 2020, Avast harvested user web browsing information through its antivirus software and browser extension [, allowing it] to collect data on religious beliefs, health concerns, political views, locations, and financial status. The company then stored this information indefinitely and sold it to over 100 third parties without the knowledge of customers […] The FTC also claims Avast deceived users by saying its software would help eliminate tracking on the web — when it actually did the tracking itself.”

So what?

I was astounded to hear these allegations when they first emerged in 2020. Avast has been the ‘go-to’ free anti-virus solution for so many home users for so many years. How someone in that organisation thought this practice was perfectly reasonable is mind-boggling. Someone needs to check their moral compass, because it has steered the Avast ship onto the rocks. Even if this data harvesting stopped in 2020, could you continue to trust an organisation that did this?


1 – Your chatbot could be as reliable as a child on a sugar rush

Air Canada has been forced to issue a refund after its website’s chatbot misled a customer by inventing a policy, rather than basing its answer on the real policy that was published elsewhere on the same site.

According to a report in Ars Technica (and recently shared by Benedict Evans), Air Canada argued that the customer “never should have trusted the chatbot”, and that the airline should not be liable for the chatbot’s misleading information because, in essence, “the chatbot is a separate legal entity that is responsible for its own actions”. They also argued that because the chatbot included a link to a document that described the real policy, the passenger should have disregarded what the chatbot said and read the real policy instead. However, Air Canada did not explain “why customers should have to double-check information found in one part of its website [with information found] on another part of its website.”

So what?

IT Risk is not just about IT security risk. The risk of an AI tool inventing answers should not be a surprise to anyone. It is a foreseeable event, and the risks associated with foreseeable events should be managed. So, how could we manage the risks in this type of scenario? If I were a cynic, I’d say all AI-powered chatbots should display a warning along the lines of “You can trust me as much as you’d trust a six year old who has just eaten a bag of Skittles, washed down by a glass of Coca-Cola”!
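Less cynically, one mitigation is to stop the bot generating policy answers from thin air at all: have it answer only by quoting the published policy verbatim, and hand off to a human when nothing matches. A minimal sketch of that idea, with entirely hypothetical policy text and keyword matching:

```python
# Hypothetical published policies; in a real system these would be
# retrieved from the same source of truth as the website's policy pages.
POLICIES = {
    "bereavement": "Bereavement fares must be requested before travel; "
                   "refunds cannot be claimed retroactively.",
    "baggage": "One checked bag up to 23 kg is included on international fares.",
}

def answer(question: str) -> str:
    """Answer only by quoting published policy text verbatim.
    If no policy matches, escalate instead of inventing an answer."""
    q = question.lower()
    for topic, policy_text in POLICIES.items():
        if topic in q:
            return f'Our published policy says: "{policy_text}"'
    return "I'm not sure - let me connect you to a human agent."

print(answer("What is your bereavement fare policy?"))
print(answer("Can I bring my cat on board?"))
```

The design point is not the crude keyword match (a real system would use proper retrieval) but the failure mode: when the bot doesn’t know, it says so and escalates, instead of confidently inventing a policy the airline then has to honour.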