Apple Outage Outrage | Linux Random Redo | Okta hacked (or not)

Welcome to The Long Run, where we read the week’s news and narrow it down to the essentials. Let’s work out what really matters.

This week: Why Apple services are down, Linux is getting a major RNG overhaul, and we’re wondering if Okta has been hacked again.

1. Rotten Apple Ops?

First thing this week: Most of Apple’s services were down yesterday (at least in some regions). Very little worked for at least two hours, including the dev site and some internal applications.

Analysis: “It’s always DNS”

Despite rumours, it wasn’t a Russian hack. It wasn’t a BGP attack either. Cupertino may not have confessed, but it looks very much like it was DNS (surprise, surprise).

Joe Rossignol broke the story: iCloud and many other Apple services are down

Services and apps affected include the App Store, iCloud, Siri, iMessage, iTunes Store, Apple Maps, Apple Music, Apple Podcasts, Apple Arcade, Apple Fitness+, Apple TV+, Find My, FaceTime, Notes, Stocks and many others.

Apple’s developer website is also not accessible. … Some of Apple’s internal systems have also failed.

What went wrong? Animats adds:

Apple’s own DNS servers redirect developer.apple.com to something on “akadns.net” operated by Akamai. But Apple’s own DNS servers refuse to resolve that, likely because it’s not in the apple.com zone.

It’s clearly a botched DNS configuration. Not clear what the intention was. …Anyway, this looks like an attempt to outsource something to Akamai that has gone badly wrong.
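If you want to poke at that sort of CNAME handoff yourself, here’s a rough sketch using the third-party dnspython library. The hostname comes from the quote above; everything else is illustrative, not a reconstruction of Apple’s actual setup:

```python
# pip install dnspython -- a rough illustrative sketch, not a diagnostic tool
import dns.exception
import dns.resolver

def show_cname_chain(name: str) -> None:
    """Print the CNAME hop(s) for `name`, then try to resolve the final target."""
    try:
        for rr in dns.resolver.resolve(name, "CNAME"):
            print(f"{name} is a CNAME for {rr.target}")
            name = str(rr.target)
    except dns.resolver.NoAnswer:
        print(f"{name} has no CNAME record")
    except dns.resolver.NXDOMAIN:
        print(f"{name} does not exist")
        return

    try:
        addrs = [rr.address for rr in dns.resolver.resolve(name, "A")]
        print(f"{name} resolves to {addrs}")
    except dns.exception.DNSException as exc:
        # During the outage, this is roughly where resolution reportedly fell over.
        print(f"{name} failed to resolve: {exc!r}")

show_cname_chain("developer.apple.com")
```

Run against a healthy name, you’d see the CNAME hop and then a list of addresses; per the quote above, it was that second step that failed during the outage.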

ofc: “It’s always DNS,” amirite? Sadiq Saif says:

As far as I can tell, the issue was caused by a DNSSEC validation error on aaplimg.com. I noticed many DNSSEC BOGUS messages in my local Pi-hole’s log for names like proxy.safebrowsing.apple, which are CNAMEs to aaplimg.com.
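Sadiq’s Pi-hole was doing the DNSSEC validation for him, but you can ask a validating resolver the same question directly. A quick dnspython sketch, assuming a public validating resolver at 1.1.1.1 (the hostname is the one from the quote):

```python
# pip install dnspython -- quick DNSSEC sanity check via a validating resolver
import dns.flags
import dns.message
import dns.query
import dns.rcode

def check_dnssec(name: str, resolver: str = "1.1.1.1") -> None:
    """Query `name` with DNSSEC requested and report the rcode and AD flag."""
    query = dns.message.make_query(name, "A", want_dnssec=True)
    response = dns.query.udp(query, resolver, timeout=5)
    authenticated = bool(response.flags & dns.flags.AD)
    print(f"{name}: rcode={dns.rcode.to_text(response.rcode())}, AD flag={authenticated}")

# A validating resolver answers SERVFAIL (and never sets the AD flag) for a BOGUS chain.
check_dnssec("proxy.safebrowsing.apple")
```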

And here’s more digging from Lima:

Looks like their DNS servers are responsive but are refusing to serve records. …Most likely a configuration error that will be reversed once they figure out how to redeploy their DNS servers while DNS is down.
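And to tell “the servers are down” apart from Lima’s “responsive but refusing to serve records,” you can query a nameserver directly and look at the response code. Another small dnspython sketch; the nameserver IP is a placeholder, not one of Apple’s:

```python
# pip install dnspython -- distinguishes "no answer at all" from "answers, but refuses"
import dns.exception
import dns.message
import dns.query
import dns.rcode

def probe_nameserver(name: str, nameserver: str) -> None:
    query = dns.message.make_query(name, "A")
    try:
        response = dns.query.udp(query, nameserver, timeout=3)
    except dns.exception.Timeout:
        print(f"{nameserver}: no response (down or unreachable)")
        return
    # "Responsive but refusing" shows up here as REFUSED or SERVFAIL rather than NOERROR.
    print(f"{nameserver}: answered with rcode {dns.rcode.to_text(response.rcode())}")

probe_nameserver("developer.apple.com", "198.51.100.53")  # placeholder IP (TEST-NET-2)
```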


2. Linux random number generator revised

Truly unguessable random numbers are key to DevOps fundamentals like strong encryption and secure TCP. Linux has fallen behind other operating systems in areas like entropy capture and VM security, and it has been relying on outdated hashing.

Analysis: Technical debt, gone!

Not only has Jason Donenfeld improved these areas, but he’s also dug into a couple of decades of code cruft, improving readability, maintainability, and documentation. I agree that such “unsexy improvements” are critically important investments.

Michael Larabel has been following the development: Linux 5.18 is said to bring many improvements to the random number generator

WireGuard chief developer Jason Donenfeld has recently pushed many improvements to the Linux kernel’s random number generator. [These] RNG improvements [give] better VM security, massive performance improvements, and more.

The horse’s mouth would be Jason A. Donenfeld:

[It’s] an attempt to modernize both the code and the cryptography used. … The goal was to support the existing design of the RNG with as much incremental rigor as possible without changing anything fundamental for now. … The focus was on evolutionary improvement of the existing RNG design.

The underlying algorithms that… turn sources of entropy into cryptographically secure random numbers have been overhauled. … The most significant outward-facing change is that /dev/random and /dev/urandom are now exactly the same. …I started by swapping out SHA-1 for BLAKE2s…since SHA-1 was broken pretty mercilessly, this was an easy change. … This change allowed us to improve the forward security of the entropy input pool from 80 bits to 128 bits [and] set the stage for us to do more interesting things with hashing and keyed hashing… to further improve security [and] performance.

random.c was introduced in 1.3.30… and was quite an impressive driver for its time, but after a few decades of tweaking, the general organization of the file, as well as some aspects of the coding style, were showing some age. … So a lot of work has gone into making the code generally readable and maintainable, as well as updating the documentation. I consider these types of very unsexy improvements to be just as important, if not more so, than the various fancy modern day cryptographic improvements.
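To put the SHA-1 to BLAKE2s swap in concrete terms, here’s a tiny userspace Python sketch. It has nothing to do with the kernel’s actual C implementation in random.c; it just contrasts the digest sizes and shows the kind of keyed hashing the quote alludes to:

```python
# Userspace illustration only -- the kernel's random.c is C, not Python.
import hashlib

data = b"some pooled entropy bytes"

sha1 = hashlib.sha1(data)        # 160-bit digest; collision attacks are now practical
blake2s = hashlib.blake2s(data)  # 256-bit digest by default

print(f"SHA-1:   {sha1.digest_size * 8} bits -> {sha1.hexdigest()}")
print(f"BLAKE2s: {blake2s.digest_size * 8} bits -> {blake2s.hexdigest()}")

# BLAKE2s also supports keyed (MAC-style) hashing out of the box.
keyed = hashlib.blake2s(data, key=b"a short secret key")
print(f"Keyed BLAKE2s: {keyed.hexdigest()}")
```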

But sinij worries about the new and shiny:

What about SP 800-90B compliance? These changes—particularly the move to BLAKE2s—almost guarantee Linux would not be able to earn NIST certification (and consequently be adopted in government, healthcare, and financial applications that require certification). Well done giving Red Hat even more reasons to completely fork the kernel in RHEL 9.


3. Insider-as-a-Service scrotes claim another victim

The LAPSUS$ group claims to have hacked Okta, a giant identity and authentication provider. If true, it’s a worrying development for every DevOps team using Okta.

Analysis: Hack Redux or APT?

So far, Okta’s public statements aim to minimize the issue, saying the screenshots the group shared are just a retread of data stolen in January. But the group claims it has persistent access inside Okta. Whatever the truth, the fact that groups like LAPSUS$ can so easily bribe employees and contractors should concern every DevOps professional.

Raphael Satter: Okta investigates data breach report

Hackers posted screenshots showing what they claimed was [Okta’s] corporate environment. A hack… could have dire consequences, as thousands of other companies like FedEx, Moody’s and T-Mobile rely on the San Francisco-based firm to manage access to their own networks and applications. … Okta describes itself as an “identity provider for the internet” and claims to have more than 15,000 customers on its platform [for] identity services such as single sign-on and multi-factor authentication.

The screenshots were posted by a ransom-seeking hacker group called LAPSUS$. [They included] images of Okta’s internal tickets and its… Slack.

In a statement, Okta official Chris Hollis said the breach may be related to an earlier incident in January, which he says has been contained. Okta discovered an attempt to compromise the account of an outside customer service technician at the time, Hollis said: “We believe the screenshots shared online are related to this January event. … There is no evidence of ongoing malicious activity beyond that detected in January.”

How come this group keeps popping up lately? By bribing insiders, as nstart explains:

They use the weakest human link in the chain. … They specifically recruit people with access to VPNs/internal support systems. This Okta breach seems to have happened in a similar way. The group has made specific calls for access to gambling companies, hosting providers, telcos, call centers and BPM providers.

It’s [not] hard to imagine how an overworked, underpaid support rep (or even a well-paid but disgruntled one) could choose to do this. It only takes one well-placed insider handing over credentials, and this group has instant access to a massive attack surface. That might sound like an overreaction, but going forward, organizations may need to make least-privilege access and access logging a priority.
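Access logging only helps if someone actually looks at the logs. Purely as an illustration, here’s a minimal Python sketch that flags privileged actions taken by support accounts; the log format, field names, and the “support_” naming convention are all invented for the example, not any vendor’s real schema:

```python
# Hypothetical example -- the JSON-lines schema and action names are made up.
import json

PRIVILEGED_ACTIONS = {"mfa.reset", "password.reset", "session.impersonate"}

def flag_risky_events(audit_log_path: str) -> None:
    """Print support-account events that performed privileged actions."""
    with open(audit_log_path) as fh:
        for line in fh:
            event = json.loads(line)
            actor = event.get("actor", "")
            action = event.get("action", "")
            if actor.startswith("support_") and action in PRIVILEGED_ACTIONS:
                print(f"REVIEW: {event.get('timestamp')} {actor} "
                      f"did {action} on {event.get('target')}")

flag_risky_events("audit.jsonl")
```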

So an inside job, where the perpetrators were in control for two or three months? Creepy, thinks upuv:

It’s going to be a wild mess. … My colleagues in many organizations and companies are now striving to identify and contain any breach. … My guess is that all other MFA vendors are in full panic.


The moral of the story:
Most importantly, be true to yourself.

You have been reading The Long Run by Richi Jennings. You can reach him at @RiCHi or [email protected].

Image: Roman Bolosan (via Unsplash; leveled and cropped)
