Technology is constantly entertaining new crazes: blockchain, subscription juicers, netbooks, 3D televisions, hyperloop, and "hoverboards", just to name a handful. All of these were going to be "the next big thing", but none of them panned out as their inventors intended.
Lately, one term has been bandied about as though it may be the be-all and end-all for computers: "Artificial Intelligence", or "AI". The term "AI" can mean a variety of different things, depending on whom you ask. However, when most people use the term, what they are expecting is a fully conscious, sentient entity that can think, act, and reason as a human would. That is properly called "Artificial General Intelligence", and today's technology is nowhere close to making it a reality. It is not yet known whether artificial intelligence will ever live up to those ultimate expectations.
Apple is not known for jumping on bandwagons or being the first to create new categories of technology; they typically leave that to others. However, if there is a technology that they can put their own spin on, they might do so. At their Worldwide Developers Conference in 2024, they introduced one of these technologies, called "Apple Intelligence".
Apple Intelligence is not a single feature; it also goes against the grain of other AI assistants by working only on your own data. Apple Intelligence consists of a variety of tools, each designed to help you accomplish a specific task. When it was introduced, Apple indicated that the initial features would be released over the course of the iOS/iPadOS 18 and macOS Sequoia release cycles.
The features that make up Apple Intelligence include Writing Tools, Image Generation, and Personalized Requests. Initially, Apple wanted the first of these available with iOS 18; however, during the beta, Apple realized that the features would not be far enough along for the initial iOS/iPadOS 18.0 and macOS Sequoia 15.0 release, so they were pushed to iOS/iPadOS 18.1 and macOS Sequoia 15.1.
Not every device that can run iOS 18.1, iPadOS 18.1, or macOS Sequoia 15.1 is able to support Apple Intelligence. To be able to run Apple Intelligence you need to have one of the following devices:
iPhone 16/Plus (A18)
iPhone 16 Pro/Pro Max (A18 Pro)
iPhone 15 Pro/Pro Max (A17 Pro)
iPad mini (A17 Pro or later)
iPad Air (M1 or later)
iPad Pro (M1 or later)
Apple Silicon Mac (M1 or later)
These devices are the minimum because Apple Intelligence requires at least 8GB of memory as well as a Neural Engine.
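To make the requirement concrete, here is a small sketch of the eligibility check. This is purely illustrative: Apple exposes no such API, and the chip list is simply taken from the device list above.

```python
# Illustrative only: Apple exposes no such API; the chip list mirrors the
# device list above.
SUPPORTED_CHIPS = {"A17 Pro", "A18", "A18 Pro", "M1", "M2", "M3", "M4"}

def supports_apple_intelligence(chip: str, memory_gb: int) -> bool:
    """A device qualifies with a supported chip and at least 8GB of memory."""
    return chip in SUPPORTED_CHIPS and memory_gb >= 8

print(supports_apple_intelligence("A18 Pro", 8))  # True  (iPhone 16 Pro)
print(supports_apple_intelligence("A16", 6))      # False (iPhone 15)
```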
This article is part of an ongoing series that covers the features of Apple Intelligence as they become available. This article focuses on the Apple Intelligence feature called "Summarization".
Summarization
Communication is an important part of human society, and as humans, we have become quite adept at creating ways to communicate. There are effectively two types of communication: asynchronous and synchronous, or real-time. Asynchronous communication includes things like newspapers, magazines, and, for something more modern, email and even social media. Real-time communication includes things like text messages, iMessage, WhatsApp, and Google Chat, just to name a handful.
Then there are the communications that are informational and more than likely one-way. The prime example of this is notifications from an app. This can be a notification about an email, a new podcast episode, or even just a new post from one of your friends.
With the amount of text that everyone comes across each day, it can easily become overwhelming. For notifications, you can just disable all notifications for an app within the Settings app on iOS and iPadOS, or System Settings on macOS, but this is not always a viable solution depending on your needs.
There are a number of areas where you can get summaries. This includes notifications and email. Let us start with notifications.
Summarizing Notifications
Sometimes it would be great to get a brief synopsis of the notifications that you have received. With Apple Intelligence, you can. Below is the summarized Ivory post from my friend, Barry:
"Sequoia and Time Machine backups issues, one SSD stopped working, the other slow."
Here is the original text:
"Have you had any issues with Sequoia and Time Machine backups? I have two SSD's that used to alternate backups but one has stopped working and the other takes forever to run the "cleaning up" portion of the backup at the end."
This is a pretty good summary of the original text. When I saw this message, I immediately tapped to see the entire message. This is not the only example of summarization. Here is another example from Overcast:
"No episode today; return on Friday, October 10th; Google's Play Store remedies discussed."
The way that this seems to work is by summarizing the titles of the podcast episodes. In most cases this might be okay, but this summary is missing a key detail: which podcast does not have an episode today. Later in the day, after additional episodes were downloaded, this was the summary:
"Stratechery discusses Google's Play Store remedies; Rebound Prime episode bootleg available"
As you could have surmised, this is a much better summary of the notifications that I received for the various podcasts I subscribe to.
Now, it should be noted that this is with iOS 18.1, which means that developers do not have access to any sort of application programming interface, or API, for suggesting anything to Apple Intelligence, so this is strictly what Apple's own models think is the proper summary.
Another tidbit to note is that each app will be summarized on its own. Therefore, you will get a different summary for your iMessage conversations, Instagram posts, and Overcast podcast notifications. That is not the only summarization that you can get; you can also get summaries of emails.
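Conceptually, per-app summarization can be pictured as grouping notifications by their app and then summarizing each group. The sketch below is an assumption about the general shape, not Apple's implementation; the "summarize" step is just a placeholder that joins the notification texts.

```python
from collections import defaultdict

def summarize_by_app(notifications):
    """Group (app, text) notification pairs by app, one summary per app."""
    grouped = defaultdict(list)
    for app, text in notifications:
        grouped[app].append(text)
    # Placeholder "summary": a real system would run a language model here.
    return {app: "; ".join(texts) for app, texts in grouped.items()}

notifications = [
    ("Overcast", "Stratechery: Google's Play Store remedies"),
    ("Ivory", "Barry: Sequoia and Time Machine backup issues"),
    ("Overcast", "The Rebound: Prime episode bootleg available"),
]
print(summarize_by_app(notifications))
```

Note how the two Overcast notifications collapse into a single entry while Ivory stays separate, which matches the per-app behavior described above.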
Email Summaries
Everyone has received a rather long email and wanted a short summary of it. Mail on iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1 will handle this for you automatically. When you view your list of emails, you will see a summary directly below the sender and subject line.
While each email is automatically summarized, you can also get a longer summary within the email message. The way that you can do this is by using the following steps:
Open Mail.
Locate the email message that you would like to summarize.
Scroll up to the top of the email message.
Click on the "Summarize" button.
Once you click on this, Apple Intelligence will then analyze the email message and then provide a summary directly above the email.
Here are three different summaries of Justin Robert Young's "Free Political Newsletter."
From September 30th, 2024: "The article discusses the possibility of an October Surprise in the upcoming election, categorizing potential surprises into four types: policy surprises, opposition dumps, acts of God, and legal surprises. It also highlights James Carville's opinion that swing states are likely to move as a block, rather than splitting evenly."
From October 4th, 2024: "The article discusses the possibility of an October Surprise in the upcoming election, categorizing potential surprises into four types: policy surprises, opposition dumps, acts of God, and legal surprises. It also highlights James Carville's opinion that swing states are likely to move as a block, rather than splitting evenly."
From October 7th, 2024: "Democratic ads focus on healthcare and portray Kamala Harris as caring, while Republican ads portray her as frivolous and unserious. The GOP Senate map is favorable, but the party may not have the funds to play in all the states they could win."
All of these are decent summaries of the email messages. As you might suspect, you can only summarize a single email message at a time. You cannot summarize multiple emails at once, which makes sense because they could cover a variety of different topics. The items above were decent examples, but not all emails summarize well. Here is what each of Audible's Daily Deal emails results in:
"Today's Daily Deal is $2.99 and ends at 11:59 PM PT. Offer is not transferable, cannot be combined with other offers, and sale titles are not eligible for return."
Honestly, these summaries are completely useless because the title of the deal is never displayed. The reason is that Audible's emails never include the title in the message text itself; that data is loaded as remote content, so there is nothing for the summary to pick up.
To Preview or Not to Preview
Mail provides you with the ability to control whether each message preview is summarized. By default, this feature is enabled, but you can change it if you do not want summarized previews. The method depends on the operating system; you can use the steps below to change the setting.
On macOS
Open the Mail app.
Click on the "Mail" menu item.
Click on Settings.
Click on the "Viewing" tab.
Uncheck "Summarize Message Previews".
On iOS/iPadOS
Open Settings.
Scroll down to "Apps".
Tap on Apps to open up the apps list.
Scroll down to, or search for, Mail.
Tap on Mail to open its settings.
Under Message List, tap the toggle for "Summarize Message Previews".
These are pretty straightforward steps to change whether Mail summarizes message previews within the message list. This is not the only Apple Intelligence item related to Mail. Mail has a couple of other features, including smart replies and priority messages. Let us look at both, starting with Smart Replies.
Smart Replies in Mail
When you receive an email, you may want to write a reply, but may not always be able to come up with the right words. It could be helpful to have an appropriate reply generated for you. This is possible with a new feature called "Smart Replies". Smart Replies are designed to create a reply to an email on your behalf. This is done by looking for any questions within the email and then generating a contextual response.
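That described behavior, finding a question in the email and offering short stances that become replies, could be sketched roughly like this. Everything here is hypothetical; Apple has not documented how Smart Replies works internally.

```python
import re

def find_questions(email_body: str) -> list[str]:
    """Very rough question detector: spans of text ending in '?'."""
    return [s.strip() for s in re.findall(r"[^.?!]*\?", email_body)]

def quick_replies(question: str) -> dict[str, str]:
    """Map short stances to placeholder replies; a real system would
    generate these contextually with a language model."""
    return {
        "Yes": f"Yes. ({question})",
        "No": f"No. ({question})",
    }

body = "Hope you are well. Is it too early for a Chicken Big Mac? See you soon."
questions = find_questions(body)
print(questions)  # ['Is it too early for a Chicken Big Mac?']
for stance, reply in quick_replies(questions[0]).items():
    print(stance, "->", reply)
```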
As an example, I looked at an email that I got from Patreon for an episode of "The Morning Stream" with Scott Johnson and Brian Ibbott. Live listeners generate possible titles during the show, and sometimes topics can also generate titles. Within this particular episode, one of the titles was "Is it too early for a Chicken Big Mac?". The Mail app on iOS provided two possible responses within the QuickType bar, "Yes" and "No". If I tapped one of these, it would provide an appropriate response.
For " Yes", it was "Yes, it is too early for a Chicken Big Mac. I'll have to wait until later in the day to enjoy one." For "No", it created "No, it's never too early for a Chicken Big Mac." For any TMS listeners, the answer is always "No, it's never too early for a Chicken Big Mac". This is just one example of how it might be used. Here is another example.
Recently, I went to a book signing for John Scalzi's Starter Villain at my local bookstore. I received the confirmation email for the event, and Mail provided two options for replying.
The first option was "I'll be there", and the generated response was "I'll be there tonight. I'm looking forward to meeting John Scalzi and getting my book signed." The second option was "Can't make it", and the generated response for this was "Hi, Unfortunately, I won't be able to make it to the event tonight. Thanks…"
Both of these are appropriate, and for the "I'll be there" option, it absolutely took contextual clues from the email to provide an appropriate response. Obviously, your mileage will vary given that each email is different. I tested a bunch of emails, and some did not provide any smart reply options, so you may not always see suggestions. There is one last feature: Priority emails.
Priority Messages
A lot of people receive a tremendous amount of email in the course of a day. I am not one of those people. The emails that I receive are generally just informational, like Patreon updates, bills, or newsletters. It is not often that I get a personal email. However, there are those who get a lot of email, and for these individuals it might be crucial to surface the most important messages. With iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, this is a feature that you can utilize.
Much like Smart Replies and Summarization, Priority Messages is enabled by default, including on the "All Inboxes" mailbox if you have more than one mail account configured. You can configure Priority Messages for each inbox by performing the following steps:
Open the Mail app.
Click on the inbox you want to configure for Priority.
Click on the "…" icon in the upper right corner.
Uncheck "Show Priority".
If you have the Priority inbox enabled, Mail will attempt to bring the most important messages to the top of your inbox. This is useful for making sure that you see the items that you really need to see. It should be noted that this is not Mail Categorization; that is not available in iOS 18.1, iPadOS 18.1, or macOS Sequoia 15.1, and will arrive in a future update.
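Apple has not said how it decides which messages are important, but the general idea of a priority inbox can be illustrated with a toy scoring heuristic. The signals and weights below are entirely made up for illustration.

```python
def priority_score(email: dict) -> int:
    """Score an email on made-up signals; higher means more likely to surface."""
    score = 0
    if email.get("from_known_contact"):
        score += 3  # personal mail matters more than automated mail
    if email.get("mentions_deadline"):
        score += 2  # time-sensitive content
    if email.get("is_newsletter"):
        score -= 2  # bulk mail is rarely urgent
    return score

inbox = [
    {"subject": "Audible Daily Deal", "is_newsletter": True},
    {"subject": "Flight check-in closes tonight", "from_known_contact": True,
     "mentions_deadline": True},
]
inbox.sort(key=priority_score, reverse=True)
print(inbox[0]["subject"])  # Flight check-in closes tonight
```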
Closing Thoughts on Summarization and Mail
You can easily get a quick summary of your notifications, whether they are a series of messages from a group chat or notifications about new podcast episodes. Each summary is grouped by app, and the summaries are updated as new notifications come in. These are not the only summaries that you can receive: Mail will automatically provide a summary of each email. These summaries are shown below the sender and subject line and are typically only a line long. If you want a slightly longer summary, you can get one by clicking on the "Summarize" button above the email.
Mail will automatically organize your emails to show "Priority Messages". Priority Messages are those messages that Mail thinks are the most important to you. While it is enabled by default, you can configure this behavior on a per-inbox basis.
Be sure to check out all of the other articles in the series.
Today Apple has unveiled a new Mac mini with the M4. This is not just a spec bump; it includes a couple of new features, chief amongst them a new form factor.
Form Factor
The Mac mini was introduced in 2005 and was a smaller version of the Mac, hence the name. It was 6.5 inches wide, 6.5 inches deep, and 2 inches tall. This remained the form factor until 2011, when a new unibody version was introduced, one that eliminated the internal disc drive. That Mac mini was physically larger at 7.7 inches wide and 7.7 inches deep, but only 1.4 inches tall. All Mac minis introduced since 2011 have had the exact same physical footprint, including the M1 and M2 Mac minis. This all changes with the M4.
In 2022, Apple introduced a whole new machine, the Mac Studio. It took some of the design elements of the Mac mini and expanded on them. The M1 and M2 Mac Studios were 7.7 inches wide and 7.7 inches deep, but significantly taller at 3.7 inches.
The M4 Mac mini takes some design cues from the Apple TV. The M4 Mac mini is 5 inches wide, has a 5 inch depth, and is only 2 inches tall. This means that it is smaller than the previous Mac mini, but still a bit larger than an Apple TV. Before we dive into the ports, let us look at the processor.
M4 and M4 Pro
The Mac mini has come with a variety of processors. The previous Mac mini was available in both M2 and M2 Pro variants, and the same continues for the M4 Mac mini, with the M4 and M4 Pro. The M4 consists of a 10-core CPU, with 4 performance cores and 6 efficiency cores, and a 10-core GPU. According to Apple, the M4 Mac mini is significantly faster than the M1 Mac mini. Specifically,
When compared to the Mac mini with M1, Mac mini with M4:
- Performs spreadsheet calculations up to 1.7x faster in Microsoft Excel.
- Transcribes with on-device AI speech-to-text up to 2x faster in MacWhisper.
- Merges panoramic images up to 4.9x faster in Adobe Lightroom Classic.
The M4 Pro has two configurations: a 12-core CPU with 8 performance cores and 4 efficiency cores, paired with a 16-core GPU; and a 14-core CPU with 10 performance cores and 4 efficiency cores, paired with a 20-core GPU. From Apple's press release:
When compared to the Mac mini with M2 Pro, Mac mini with M4 Pro:
- Applies up to 1.8x more audio effect plugins in a Logic Pro project.
- Renders motion graphics to RAM up to 2x faster in Motion.
- Completes 3D renders up to 2.9x faster in Blender.
All M4 and M4 Pro models have a 16-core Neural Engine for machine learning and Apple Intelligence tasks.
Ports
The M4 Mac mini has a total of seven ports: an Ethernet jack, an HDMI port, and five USB-C ports. Of the USB-C ports, two are on the front, much like the Mac Studio, and three are on the back. The two on the front offer USB 3 speeds of up to 10 gigabits per second. The three ports on the back are Thunderbolt/USB 4 ports. On the M4 models, these are Thunderbolt 4 ports, which can deliver data at up to 40 gigabits per second. On the M4 Pro models, they are Thunderbolt 5 ports, which can deliver a whopping 120 gigabits per second; the USB portion can deliver up to 40 gigabits per second.
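To put those bandwidth figures in perspective, here is a quick back-of-the-envelope calculation of ideal transfer times for a 100GB file. Real-world transfers will be slower due to protocol overhead and drive speeds.

```python
def transfer_seconds(size_gb: float, link_gbps: float) -> float:
    """Ideal transfer time: gigabytes * 8 bits per byte / link speed in gigabits/s."""
    return size_gb * 8 / link_gbps

# Compare the port speeds mentioned above for a 100GB file.
for name, gbps in [("USB 3", 10), ("Thunderbolt 4", 40), ("Thunderbolt 5", 120)]:
    print(f"{name}: {transfer_seconds(100, gbps):.1f} seconds")
```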
The difference in Thunderbolt ports does mean that there is a difference in DisplayPort compatibility. The Thunderbolt 4 ports support DisplayPort 1.4 while the Thunderbolt 5 ports support DisplayPort 2.1. The HDMI port on either model can support one display with 8K resolution at 60Hz, or 4K resolution at 240Hz.
By default, the Ethernet port is a gigabit port, but you can opt for a 10-gigabit port for $100 more. The Mac mini has long had a headphone jack, and it is still present on all models of the M4 Mac mini.
Pricing and Availability
The M4 Mac mini starts at $599 for 16GB of unified memory and 256GB of storage. You can configure the M4 models with 24GB or 32GB of memory, and up to 2TB of storage.
The M4 Pro Mac mini starts at $1399 for a 12-core CPU and 16-core GPU, 24GB of unified memory, and 512GB of storage. You can configure the M4 Pro Mac mini with 48GB or 64GB of unified memory, and 1TB, 2TB, 4TB, or 8TB of storage.
The M4 Mac mini is available for pre-order today and will be available for delivery and in stores on Friday, November 8th.
Closing Thoughts
While other Macs received redesigns that took advantage of the lower power usage of Apple Silicon, the Mac mini was not one of them, until now. The Mac mini has finally received its redesign, with a smaller form factor that takes cues from both the Mac Studio and the Apple TV. The M4 and M4 Pro should be great upgrades for anyone who has an Intel Mac, and even if you are upgrading from an M1, it will still be a solid update.
This article is part of an ongoing series that covers the features of Apple Intelligence as they become available. This article focuses on the Apple Intelligence feature called "Type to Siri".
Siri
Siri is Apple's personal assistant. Back in 2010, Apple acquired a voice assistant app called Siri. In 2011, with the release of iOS 5 on the iPhone 4S, Siri became integrated into the operating system, and it later came to the Mac with macOS Sierra in 2016. Once integrated with the operating system, Siri could perform a few more actions, and over time you have been able to do even more with Siri, like getting information about the weather, asking who was in a particular movie, or even getting the latest sports scores.
Siri has expanded to more than just the iPhone and the Mac. You can use Siri on your Apple Watch and Apple TV, as well as on the HomePod. To use Siri with these devices, you can either hold down a particular button or use the phrase "Hey Siri", which has been the hands-free wake phrase since its introduction with iOS 8 in 2014. Last year, in 2023, with the release of iOS 17, iPadOS 17, and macOS Sonoma, Apple added the ability to say just "Siri" instead of "Hey Siri". This was a boon, but voice may not be the only way you want to interact with Siri.
Type to Siri
One of the limitations of Siri has been that you need to use your voice. This works in a variety of situations, like while at home, while driving, or in any area where you are alone. However, you may not want to use voice interactions but still want to use Siri. There is now a new way of using Siri: typing to it.
The way that you use "Type to Siri" differs depending on the operating system. On iPhone and iPad, you simply double-tap the home indicator at the bottom of the screen; if you have a keyboard connected to your iPad, you can also use the keyboard combination Globe + S.
It is different on macOS. There, the default keyboard shortcut is to press either of the Command keys twice, but Type to Siri is not enabled out of the box. Before you can type to Siri, you will need to enable it by using the following steps:
Open System Settings.
Click on "Apple Intelligence & Siri" to bring up the Apple Intelligence & Siri settings.
Enable the "Siri" toggle.
Once enabled, you can press either of the Command keys twice in a row. However, you may want the same key combination as on iOS and iPadOS. This can be done by selecting the appropriate "Keyboard Shortcut" option within the Apple Intelligence & Siri settings. The system options are:
Globe + S
Press Left Command Key Twice
Press Right Command Key Twice
Press Either Command Key Twice
Custom
If you select "Custom", you will need to enter in the keyboard combination that you want to use. It is best to avoid any existing system key combinations, otherwise you might become confused. Now, let us look at actually using Type to Siri.
Using Type to Siri
Once you bring up Type to Siri, you will have a text box where you can enter your request. After tapping the "send" button or hitting the Enter key, your request will be sent to Siri. Instead of the result being spoken out loud, it will be shown on the screen. As you type, Siri will provide suggestions for items that you may want to do.
Suggested Actions
As an example, if you start typing "Create", you may get something like "Create a new note". As another example, if you type "Play", you may get suggestions for playing certain music playlists. For me, it was "Play New Music - 2024/09", "Play Heavy Rotation playlist", and "Play Guilty as Sin? by Taylor Swift". Each of these is a playlist or song that I have been playing a lot lately.
The suggestions I got are from my iPhone. When I tried the same thing on my MacBook Pro I got "Open Playgrounds", "Play the news", and "Play some music". Similarly, on my iPad Pro I got "Play my voicemail", "Play my Audiobook", and "Open Playgrounds".
The different responses make complete sense because the requests are processed locally and the suggestions are contextual to what you do on each device. Because I do not play music on my iPad Pro, Siri did not suggest that as an option. To be honest, I am a bit confused as to why it would suggest "Play my voicemail" when there is no Phone app on the iPad.
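The suggestion behavior resembles contextual prefix completion over recent on-device activity. As a rough illustration (this is an assumption about the general idea, not Apple's actual mechanism):

```python
def suggest(prefix: str, recent_actions: list[str], limit: int = 3) -> list[str]:
    """Return up to `limit` recent actions whose text starts with the typed prefix."""
    prefix = prefix.lower()
    return [a for a in recent_actions if a.lower().startswith(prefix)][:limit]

# Hypothetical per-device activity; a real system would draw on much richer context.
recent = [
    "Play New Music - 2024/09",
    "Play Heavy Rotation playlist",
    "Create a new note",
    "Play Guilty as Sin? by Taylor Swift",
]
print(suggest("play", recent))
```

Because the list of recent actions differs per device, the same typed prefix naturally yields different suggestions on an iPhone, iPad, or Mac.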
Results
Just as when you use your voice with Siri, you can perform more than just the suggested actions. You can type the same requests that you would normally say. My go-to example is asking the tongue twister "How much wood would a woodchuck chuck, if a woodchuck could chuck wood?". Siri naturally responded with:
About as much ground as a groundhog could hog if a groundhog could hog ground.
How about another tongue twister?
These are just a couple of examples of what you can do when you type to Siri. This may not seem like a big deal, but being able to use your keyboard with Siri is a huge shift in how and when you might use Siri. You are no longer required to use your voice, which means that this can be used in almost ANY situation, which is something that many have wanted since Siri was introduced.
Closing Thoughts on Siri
Now you do not need to be self-conscious about using Siri in public, because you do not need to say anything: you can simply type your request and have Siri show you the results. As you type, suggestions will be shown, and once you send your request, Siri will provide the answer.
Siri will be getting even more features later, but this is the current new feature for Siri, at least as of iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1.
This post is just one in a series about Apple Intelligence. There will be more articles in this series, so be sure to check out those articles.
Today Apple unveiled a new iMac, one powered by the M4. While it might seem like a small update from the M3, there are a number of improvements, including the M4 itself, the ports, and the colors, just to name a few.
M4
The 24-inch iMac is powered by the M4 chip, which comes in two processor configurations: an 8-core CPU with an 8-core GPU, and a 10-core CPU with a 10-core GPU. According to Apple, the M4 iMac is up to 1.7x faster for daily productivity and up to 2.1x faster for graphics editing and gaming, at least when compared to the M1 iMac.
Display
The size of the iMac has not changed, but there is a new display option: nano-texture glass, similar to the option on the iPad Pro and the Apple Studio Display. It costs $200 more and is only available on the 10-Core CPU models.
Beyond this, there is a new 12-megapixel Center Stage camera. This should provide even better quality, and the camera is also capable of providing Desk View, the ability to show your desk while in a video call; the previous iMac could not provide this functionality.
Colors
The 24-inch iMac has come in a variety of colors. The available colors have been updated. There are seven options:
Silver
Blue
Purple
Pink
Orange
Yellow
Green
Unlike the previous model, all of the colors are available with any processor choice. There is a difference depending on the model, and that is the ports. To go with the new colors are new color-matched accessories, including the Magic Keyboard with Touch ID, Magic Trackpad, and Magic Mouse. These all now use USB-C cables instead of the previous Lightning. Beyond the port change, the accessories' design and port locations have not changed at all.
Ports and Connectivity
Depending on the processor, you will get either two or four ports. The 8-core CPU model has two Thunderbolt/USB 4 ports, while the 10-core CPU models have four Thunderbolt 4 ports. All of the iMacs have Wi-Fi 6E and Bluetooth 5.3. The four Thunderbolt 4 ports mean that you can connect up to two 6K external displays, an improvement over the M3 model, which only supported one external 6K display.
Pricing
There are four different starting configurations available:
8-Core CPU with 8-Core GPU, 16GB of unified memory, and 256GB of storage - $1299
10-Core CPU with 10-core GPU, 16 GB of unified memory, and 256GB of storage - $1499
10-Core CPU with 10-core GPU, 16 GB of unified memory, and 512GB of storage - $1699
10-Core CPU with 10-core GPU, 24 GB of unified memory, and 256GB of storage - $1899
You can configure the 10-Core models with up to 32GB of unified memory and up to 2TB of storage. The 10-Core models also come with Ethernet, whereas the 8-core model is Wi-Fi only, but you can add Ethernet to that model for $30.
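As an illustration of how the options above combine, here is a hypothetical price calculator using only the figures quoted in this article; the function and its structure are my own invention, not an Apple tool.

```python
BASE_PRICES = {
    "8-core": 1299,   # 8-Core CPU / 8-Core GPU, 16GB memory, 256GB storage
    "10-core": 1499,  # 10-Core CPU / 10-Core GPU, 16GB memory, 256GB storage
}

def imac_price(model: str, nano_texture: bool = False, add_ethernet: bool = False) -> int:
    """Hypothetical calculator combining the add-on prices quoted in the article."""
    price = BASE_PRICES[model]
    if nano_texture:
        price += 200  # nano-texture display option
    if add_ethernet and model == "8-core":
        price += 30   # Ethernet is a $30 add-on only on the base model
    return price

print(imac_price("8-core", add_ethernet=True))   # 1329
print(imac_price("10-core", nano_texture=True))  # 1699
```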
Closing Thoughts
You can pre-order the new iMac today, and it will be available starting on Friday, November 8th. If you are looking for a new iMac, now is the time to upgrade, particularly if you have an Intel machine or want to upgrade from an M1 iMac.
Here is the iPhone 16 and 16 Pro availability for Monday, October 28th, 2024. There are a few changes, and only for the carriers.
Highlight of Changes
For Apple, there are no changes.
For AT&T, there are no changes.
For T-Mobile, the iPhone 16 Plus changes are slips in availability, while there is a mix of changes for the iPhone 16 Pro, including an improvement.
The term "Artificial Intelligence" can garner a number of thoughts, and depending on who you ask, these can range from intrigue, worry, elation, or even skepticism. Humans have long wanted to create a machine that can think like a human, and this has been depicted in media for a long time. Frankenstein is an example where a machine is made into a human and then is able to come to life . Another great example is Rosie from the 1960s cartoon The Jetsons. In case you are not aware, The Jetsons is a fictional animated tv show that depicts the far future where there are flying cars, and one of the characters, Rosie, is an robot that can perform many household tasks, like cleaning and cooking.
We, as a society, have come a long way toward creating modern "artificial intelligence", but we are still nowhere close to building a robot that is anywhere close to human. Today's artificial intelligence falls into a number of categories in terms of its capabilities, but it is still a long way off from the idealistic depiction that many expect artificial intelligence to be.
Artificial intelligence comes in a variety of forms. This includes automated cleaning robots, automated driving, text generation, image generation, and even code completion. There are many companies attempting to create mainstream artificial intelligence, but, as far as anyone knows, nobody has fully succeeded yet.
Apple is one of those companies, but they are taking a different approach with their service called Apple Intelligence. Apple Intelligence is Apple's take on artificial intelligence, and it differs in a number of ways from standard "artificial intelligence". This includes the use of on-device models, Private Cloud Compute, and personal context. Before we delve into each of those, let us look at artificial intelligence, including a bit of its history.
Artificial Intelligence
Artificial intelligence is not a new concept. You may think that it is a modern thing, but in fact, it harkens back to World War II and Alan Turing. Turing is known for creating a machine that could crack the German Enigma codes. In 1950, Turing published the paper "Computing Machinery and Intelligence", which became the basis of what is known as the "Turing Test". In the Turing Test, a machine attempts to exhibit intelligent behavior that is indistinguishable from that of a human.
There have been a number of enhancements to artificial intelligence in recent years, and many of the concepts that have been used for a while have come into more common usage. Before we dive into some aspects of artificial intelligence, let us look at how humans learn.
How Human Brains Operate
In order to attempt to recreate the human brain in a robot, we first need to understand how a human brain works. While we have progressed significantly in this area, we are still extremely far from fully understanding how a human brain functions, let alone being able to recreate one.
Even though we do not know everything about the brain, there is quite a bit of information that we do know. Human brains are great at spotting patterns, and the way that this is done is by taking in large amounts of data, parsing that data, and then identifying a pattern. A great example of this is when people look at clouds. Clouds come in a variety of shapes and sizes, and many people attempt to find recognizable objects within the clouds. Someone is able to accomplish this by taking their existing knowledge, looking at the cloud, determining if there is a pattern, and if there is one, identifying the object.
When a human brain is attempting to identify an object, what it is doing is going through all of the objects (animals, plants, people, shapes, etc.) that it is aware of, quickly filtering them, and seeing if there is a match.
The human brain is a giant set of chemical and electrical synapses that connect to produce consciousness. The brain is commonly called a neural network due to its network of neural pathways. According to researchers, when humans update their knowledge, what is happening in a technical sense is that the weights of the synaptic connections that form our neural network are updated. As we go through life, our previous experiences shape our approach to things. Beyond this, they can also affect how we feel about things in a given moment.
This approach is similar to how artificial intelligence operates. Let us look at that next.
How Artificial Intelligence Works
The current way that artificial intelligence works is by allowing you to specify an input, or prompt, and having the model create an output. The output can be text, images, speech, or even just a decision. Most modern artificial intelligence is based on what is called a Neural Network.
A Neural Network is a machine learning algorithm that is designed to make a decision. The manner in which this is done is by processing data through various nodes. Nodes generally belong to a single layer, and for each neural network, there are at least two layers: an input layer and an output layer.
Each node within a neural network is composed of three different things: weights, thresholds (also called a bias), and an output. Data goes into the node, the weights and thresholds are applied, and an output is created. A node's ability to actually come to a determination is based on training, or what a human might call knowledge.
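To make this concrete, here is a minimal sketch of a single node in Python. The inputs, weights, and bias values are made up purely for illustration.

```python
# A minimal sketch of a single neural-network node: inputs are
# multiplied by weights, a bias (threshold) is added, and an
# activation function turns the sum into an output.

def node_output(inputs, weights, bias):
    # Weighted sum of the inputs plus the bias term.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # A simple step activation: "fire" only above the threshold.
    return 1 if total > 0 else 0

# Example: two inputs with hand-picked weights and bias.
print(node_output([0.5, 0.8], [0.4, 0.6], -0.5))  # 1 (0.68 - 0.5 = 0.18 > 0)
print(node_output([0.1, 0.2], [0.4, 0.6], -0.5))  # 0
```

Real networks chain thousands or millions of such nodes together and use smoother activation functions, but the weight-bias-output structure is the same.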
Training
Humans have a variety of ways of learning something that can include family, friends, media, books, TV shows, audio, and just exploring. Neural Networks cannot be trained this way. Instead, neural networks need to be given a ton of data in order to be able to learn.
Each node within a neural network provides an output, sending it to another node, which provides its own output, and the process continues until a result is determined. Each time a result is produced, it is judged as correct or incorrect. Much like a human, the more correct associations that are made, the better; eventually, the connections that produce correct answers will outweigh those that produce incorrect ones. Once the network has made enough correct associations (gotten the right answer enough times), it is considered trained.
There are actually two types of training: Supervised Learning and Reinforcement Learning.
Supervised learning is the idea of feeding labeled data to a training model so that it can learn the rules and provide the proper output. Typically, this is done using one of two methods: classification or regression. Classification is pretty simple to understand. Let us say that you have 1,000 pictures: 500 of dogs and 500 of cats. You provide the training model with each photo individually, and you tell it the type of pet in each image.
Reinforcement learning is similar, but different. In this scenario, let us say you have the same 1,000 pictures, again 500 dogs and 500 cats, but instead of telling the model what is what, you let it determine the similarities between the items, and as its determinations are confirmed, that feedback reinforces what it already knows. (Strictly speaking, finding structure in unlabeled data like this is usually called unsupervised learning; reinforcement learning is about rewarding a model for making good decisions.)
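As an illustration of the supervised approach, here is a tiny Python sketch that trains a perceptron, one of the simplest neural networks, on labeled examples. The "features" (two numbers standing in for each animal) and the labels are entirely invented for this sketch; a real model would learn from actual image data.

```python
# A toy supervised-learning example: a perceptron learns to separate
# two labeled classes (here "cat" = 0, "dog" = 1) from made-up
# two-number "features" (say, weight in kg and height in cm).

samples = [([4.0, 7.0], 0), ([3.5, 8.0], 0), ([3.0, 6.5], 0),      # cats
           ([20.0, 10.0], 1), ([25.0, 12.0], 1), ([18.0, 9.0], 1)]  # dogs

weights, bias = [0.0, 0.0], 0.0
learning_rate = 0.1

for _ in range(20):                      # repeat over the training data
    for features, label in samples:
        total = sum(x * w for x, w in zip(features, weights)) + bias
        prediction = 1 if total > 0 else 0
        error = label - prediction       # correct guesses leave weights alone
        weights = [w + learning_rate * error * x for w, x in zip(weights, features)]
        bias += learning_rate * error

# Classify a brand-new animal the model has never seen.
new_animal = [22.0, 11.0]
total = sum(x * w for x, w in zip(new_animal, weights)) + bias
print("dog" if total > 0 else "cat")     # prints "dog"
```

Each wrong guess nudges the weights toward the correct label, which is the "positive correlation" process described above in miniature.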
Inference
Inference, in reference to artificial intelligence, is the process of applying a trained model to a set of data. The best way to test a model is to provide it with brand-new data and see whether it can infer the correct result.
Artificial intelligence works by taking the new data as input and applying the weights, also known as parameters, that are stored in the model to that data.
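As a toy illustration, here is a sketch where the "model" is nothing more than a handful of stored word weights, and inference is simply applying those frozen weights to brand-new text. The words and weights here are invented for the example.

```python
# Inference sketch: the "model" is just stored parameters (weights)
# learned earlier; inference applies them to brand-new input without
# changing them.

model = {"great": 1.5, "love": 1.2, "terrible": -1.8, "boring": -1.0}

def infer_sentiment(text):
    # Sum the learned weight of every known word; unknown words score 0.
    score = sum(model.get(word, 0.0) for word in text.lower().split())
    return "positive" if score > 0 else "negative"

print(infer_sentiment("a great film and i love it"))  # positive
print(infer_sentiment("terrible and boring"))         # negative
```

The key point is that the parameters stay fixed at inference time; all of the learning happened during training.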
Inference is not free; it has a cost, particularly when it comes to energy usage. This is where optimizations can be useful. As an example, Apple will utilize the Neural Engine as much as possible for its on-device inference, because the Neural Engine is optimized to perform inference tasks while minimizing the amount of energy needed.
Artificial Intelligence Use Cases
No tool is inherently good or inherently bad; the tool is the tool. It is how it is used that determines whether the usage is positive or negative. Artificial intelligence is no different. It has a wide range of possible use cases. Current artificial intelligence is capable of performing actions related to detecting cancer, synthesizing new drugs, detecting brain signals in amputees, and much more. These are all health-related, because that is where many artificial intelligence models are thriving, at least at the moment, but that is not all that is possible.
Not all artificial intelligence usage is positive. There are many who will want to make what are called "deep fakes". A deep fake is a way of taking someone and either placing them in a situation where they never were, or even making them say something that they never said. This is not new, not by a long shot; since the inception of photography, there has been manipulation designed to influence people into thinking a particular way. As you might guess, this can have detrimental effects because it distorts reality. While there are those who want to use this type of technology for nefarious purposes, there can be some positive use cases for it.
Back in 2013, country music artist Randy Travis had a stroke and, as a result, now suffers from aphasia, which, according to the Mayo Clinic, is "a disorder that affects how you communicate." This effectively left him unable to perform. However, in May of 2024, a brand-new Randy Travis song was released that used two proprietary AI models to help create it. This was done with full permission from Randy Travis himself, so there is no issue there.
Let us look at a couple of different approaches used, including Large Language Models and Image Generators.
Large Language Models
Large language models, or LLMs, are those that are able to generate language that a human would understand. To quote IBM:
"In a nutshell, LLMs are designed to understand and generate text like a human, in addition to other forms of content, based on the vast amount of data used to train them. They have the ability to infer from context, generate coherent and contextually relevant responses, translate to languages other than English, summarize text, answer questions (general conversation and FAQs), and even assist in creative writing or code generation tasks." - Source: IBM.
LLMs can be used for generating, rewriting, or even changing the tone of text. This is possible because most languages have fairly rigid rules, and it is not a complex task to calculate the probability of the next word in a sentence.
The way that an LLM is trained is by consuming vast amounts of text. It then recognizes patterns from this data and then it can generate text based upon what it has learned.
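Here is a drastically simplified sketch of that idea: a bigram model that counts which word follows which in a tiny corpus, then predicts the most likely next word. Real LLMs use large neural networks trained on vast corpora, but the core idea of estimating the next word from observed patterns is the same.

```python
# A toy "language model": count next-word frequencies in a tiny
# training corpus, then predict the most frequent follower.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . "
          "the dog sat on the rug . "
          "the cat ate the fish .").split()

# Count, for each word, how often each other word follows it.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    # Pick the most frequent follower of the given word.
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))  # on
print(predict_next("the"))  # cat
```

A model trained on billions of sentences rather than three, with probabilities computed by a neural network instead of raw counts, starts to produce the fluent text we see from modern LLMs.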
Image Generation
One of the uses of modern artificial intelligence is the ability to create images. Similar to LLMs, there are image generation models that have been trained on a massive number of images. This data has been used to train the models which are used for the actual image generation. Depending on the model, you may be able to generate various types of images, ranging from cartoons to completely realistic ones.
Image generation models often use a technique called Generative Adversarial Networks, or GANs. A GAN works by using two different algorithms, the generator and the discriminator, that work in tandem. The generator will output a bunch of random pixels as an image and then send it over to the discriminator. The discriminator, which has knowledge of millions of pictures of what you are trying to generate, will provide a result, which is basically a "yes" or "no". If it is a "no", then the generator will try again and again.
This back and forth is what is called an "adversarial loop" and this loop continues until the generator is able to generate something that the discriminator will say matches the intended type of image.
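Here is a heavily simplified Python sketch of that adversarial loop. In this toy version, the "discriminator" is a fixed checker that knows the real data's statistics, and the "generator" improves by random search; a real GAN trains both sides as neural networks with gradients, so this only illustrates the shape of the loop, not actual GAN training.

```python
# Toy adversarial loop: the generator keeps tweaking its parameters
# until the discriminator can no longer tell its output from "real"
# data (here, numbers drawn from a known distribution).
import random

random.seed(42)
real_mean, real_std = 5.0, 2.0          # statistics of the "real" data

# Fixed noise inputs; the generator turns these into fake samples.
z = [random.gauss(0, 1) for _ in range(500)]

def generate(mean, std):
    return [mean + std * n for n in z]

def discriminator(samples):
    # Score how "real" a batch looks: higher is more convincing.
    n = len(samples)
    mean = sum(samples) / n
    std = (sum((x - mean) ** 2 for x in samples) / n) ** 0.5
    return -abs(mean - real_mean) - abs(std - real_std)

gen_mean, gen_std = 0.0, 1.0            # generator starts out wrong
best = discriminator(generate(gen_mean, gen_std))

for _ in range(2000):
    # Propose a small random tweak to the generator's parameters.
    m = gen_mean + random.uniform(-0.1, 0.1)
    s = max(0.1, gen_std + random.uniform(-0.1, 0.1))
    score = discriminator(generate(m, s))
    if score > best:                    # keep tweaks that fool the checker better
        gen_mean, gen_std, best = m, s, score

print(round(gen_mean, 1), round(gen_std, 1))  # should land close to 5.0 and 2.0
```

The back-and-forth structure, with the generator improving only when the discriminator's verdict improves, is the essence of the adversarial loop.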
There is a related training technique that is quite interesting, used by diffusion models, another popular approach to image generation. It starts with an image and purposely introduces noise into the image, again, and again, and again, over a large number of steps. The model then learns to reverse this process, turning noisy data back into a coherent image.
All of this is a good base for looking at what Apple has in store for its own artificial intelligence technologies, so let us look at that now.
Apple and Artificial Intelligence
You might think that Apple is late to the artificial intelligence realm, but in fact, Apple has been working with artificial intelligence for many years; it has just been called something else. Some of the areas where Apple has been using artificial intelligence have been with Photos, Siri, Messages, and even auto-correct.
Apple Intelligence
As mentioned above, Apple Intelligence is Apple's take on artificial intelligence. Apple Intelligence differs from standard artificial intelligence in that it is designed to work on YOUR information, not on general knowledge. The primary benefit of working on your data is that your data can remain private. This is done using on-device models.
On-Device Requests
A vast majority of Apple Intelligence requests will be performed on your device. There are a number of examples of this, including things like:
"Find me pictures of [someone] while in London."
"When is Mom's flight landing?"
Apple has been doing a lot of research into machine learning models that can run on-device. This has meant that the models have needed to maintain their quality while being usable on devices with limited amounts of memory. Limited, of course, is relative; we are not talking 1GB of RAM, but more like 8GB.
The reason that Apple wants to do as much of the processing as possible on your device is twofold. The first is response time. By having devices handle requests, responses can be almost instantaneous. This is also quite beneficial for those times when you may not have connectivity. Beyond this, sending all of your requests to the cloud would introduce some delay, even with a direct connection and incredibly fast connection speeds.
The second reason is privacy. Privacy is a big part of Apple's core beliefs. When using your own device and processing the request on the device, that means that nobody else will get access to your data, not even Apple. Instead, only you will have access to your data, which is great for your own peace of mind.
Even though as much as possible will be done on your own devices, there may be instances when your device is not able to handle your request locally. Instead, it may need to be sent to the cloud. This can be needed for larger models that require additional memory or processing to be done. If this is needed, it is handled automatically by sending it to Apple's Private Cloud Compute platform. Let us look at that next.
Private Cloud Compute
Nobody wants their data to get out of their control, yet it does happen from time to time. Apple takes data privacy seriously and has done a lot to help keep people's data private. This is in contrast to other artificial intelligence companies, who have no compunction about taking user data and using it to train their machine learning models.
Apple has been working on reducing the size and memory requirements of many machine learning models. They have accomplished quite a bit, but right now there are some machine learning models that require more parameters, and therefore more memory, than devices are capable of having. In these instances, it may be necessary to use the cloud to handle requests.
Apple has 1.2 billion users, and while not all of the users will utilize Apple Intelligence immediately, Apple still needs to scale up Apple Intelligence to support all users who will be using it. In order to make this happen, Apple could just order as many servers as they want, plug them in, and make it all work. However, that has its own set of tradeoffs. Instead, Apple has opted to utilize their own hardware, create their own servers, and make things as seamless as possible for the end user, all while protecting user data.
Private Cloud Compute is what powers online requests for Apple Intelligence, and it runs in Apple's own data centers. It is powered by a series of nodes, each of which uses Apple Silicon to process requests. These are not just standard Macs; they have been heavily customized.
Nodes
Each Private Cloud Compute node undergoes significant quality checks in order to maintain integrity. Before the node is sealed and its tamper switch activated, each component undergoes a high-resolution scan to make sure that it has not been modified. After the node has been shipped and arrives at an Apple data center, it undergoes another verification to make sure it still remains untouched. This process is handled by multiple teams and overseen by a third party who is not affiliated with Apple. Once verification has been completed, the node is deployed, and a certificate is issued for the keys embedded in the Secure Enclave. Once the certificate has been created, it can be used.
Request Routing
Protecting the node is just the first step in securing user data. To protect user data further, Apple uses what is called "target diffusion". This is a process of making sure that a user's request cannot be routed to a specific node based on the user or the request's content.
Target diffusion begins with the metadata of the request. User-specific data, as well as the identity of the source device, is stripped from this metadata, which is then used by the load balancers to route the request to the appropriate model. In order to limit what is called a "replay attack", each request carries a single-use credential, which is used to authorize requests without tying them to a specific user.
All requests are routed through an Oblivious HTTP, or OHTTP, relay, managed by a third-party provider, which hides the device's source IP address well before it ever reaches the Private Cloud Compute node. This is similar to how Private Relay works, where the actual destination server never knows your true IP address. In order to steer a request based on source IP, both Apple's Load Balancer as well as the HTTP relay would need to be compromised; while possible, it is unlikely.
User Requests
When a user's device makes a request, it is not sent to the entire Private Cloud Compute service as a whole; instead, pieces of the request are routed to different nodes by the load balancer. The response that is sent back to the user's device will specify the individual nodes that should be ready to handle the inference request.
When the load balancer selects which nodes to use, an auditable trail is created. This is to protect against an attack where an attacker compromises a node and manages to obtain complete control of the load balancer.
Transparency
When it comes to privacy, one could say, with confidence, that Apple does what they say they are doing. However, in order to provide some transparency and verification, Apple is giving security researchers the ability to inspect software images. This is beyond what any other cloud company is doing.
In order to make sure there is transparency, each production build of Apple's Private Cloud Compute software will be appended to an append-only log. This will allow verification that the software that is being used is exactly what it claims to be. Apple is taking some additional steps. From Apple's post on Private Cloud Compute:
Our commitment to verifiable transparency includes:
1. Publishing the measurements of all code running on PCC in an append-only and cryptographically tamper-proof transparency log.
2. Making the log and associated binary software images publicly available for inspection and validation by privacy and security experts.
3. Publishing and maintaining an official set of tools for researchers analyzing PCC node software.
4. Rewarding important research findings through the Apple Security Bounty program.
This means that should a flaw be found, Apple can be notified before it becomes a problem, take action to remedy it, and release new software, all in an effort to keep user data private.
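The append-only, tamper-proof log described above is typically built as a hash chain, where each entry's hash covers the previous entry's hash, so altering any past entry breaks every hash after it. Here is a small Python sketch of the general idea; Apple's actual transparency log is, of course, far more sophisticated, and the build names below are invented.

```python
# Sketch of an append-only, tamper-evident log using a hash chain.
import hashlib

def entry_hash(prev_hash, measurement):
    # Each entry's hash covers the previous hash plus the new entry.
    return hashlib.sha256((prev_hash + measurement).encode()).hexdigest()

log = []                     # list of (measurement, hash) pairs
prev = "genesis"
for measurement in ["build-1.0-abc123", "build-1.1-def456", "build-1.2-789fed"]:
    prev = entry_hash(prev, measurement)
    log.append((measurement, prev))

def verify(log):
    # Anyone can re-walk the chain and check nothing was altered.
    prev = "genesis"
    for measurement, h in log:
        prev = entry_hash(prev, measurement)
        if prev != h:
            return False
    return True

print(verify(log))                                  # True
log[0] = ("build-1.0-TAMPERED", log[0][1])          # alter a past entry
print(verify(log))                                  # False
```

Because any change to a past entry invalidates the chain, researchers can detect after the fact if a published software measurement was ever rewritten.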
Privacy
When a request is sent to Apple's Private Cloud Compute, only your device and the server can read the communication. Your data is sent to the server, processed, and returned to you. After the request is complete, the memory on the server is wiped so your data cannot be retrieved. This includes wiping the cryptographic keys on the data volume. Upon reboot, these keys are regenerated and never stored. The result is that no data can be retrieved, because the cryptographic keys are sufficiently random that they could never be recreated.
Apple has gone to extensive lengths to make sure that nobody's data can be compromised. This includes removing remote access features for administration, high-resolution scanning of the Private Cloud Compute node before it is sealed, and making sure that requests cannot be routed to specific nodes, which may allow someone to compromise data. Beyond this, when a Private Cloud Compute node is rebooted, the cryptographic keys that run the server are completely regenerated, so any previous data is no longer readable.
For even more detail, be sure to check out Apple's blog post called "Private Cloud Compute" available at https://security.apple.com/blog/private-cloud-compute.
General World Knowledge
Apple Intelligence is designed to work on your private data, but there may be times when you need to go beyond your own data and use general world knowledge. This could be something like asking for a recipe for some ingredients you have, or it could be a historical fact, or even to confirm some existing data.
Apple Intelligence is not capable of handling these types of requests. Instead, you will be prompted to send these types of requests off to third parties, like OpenAI's ChatGPT. When you are prompted to use one of these, you will need to confirm that you want to send your request and that your private information (for that specific request) will be sent to the third party.
At launch, only OpenAI's ChatGPT will be available. However, there will be more third-party options coming in the future. This type of arrangement is a good escape valve should you need to get some information that is not within your own private data. Now that we have covered what Private Cloud Compute is, let us look at what it will take to run Apple Intelligence.
Minimum Requirements
Apple Intelligence does require a minimum set of requirements in order to be used. Apple Intelligence will work on the following devices:
iPhone 16 Pro/Pro Max (A18 Pro)
iPhone 16/16 Plus (A18)
iPhone 15 Pro/Pro Max (A17 Pro)
iPad mini (A17 Pro)
iPad Pro (M1 and later)
iPad Air (M1 and later)
MacBook Air (M1 and later)
MacBook Pro (M1 and later)
Mac mini (M1 and later)
Mac Studio (M1 Max and later)
Mac Pro (M2 Ultra and later)
There are a couple of reasons why these are the devices that can be used. The first is that they all include a Neural Engine. For the Mac, this was not present until 2020, when the first Macs with Apple Silicon were released. For the iPhone, the first Neural Engine appeared with the A11 Bionic chip in the iPhone 8, 8 Plus, and iPhone X. All iPhones since have included a Neural Engine, but that is just one requirement.
The second requirement is the amount of memory. The minimum amount of memory to run the on-device models is 8 gigabytes. The iPhone 15 Pro and iPhone 15 Pro Max are the first iPhones to come with 8GB of memory. All M1 Macs have had at least 8GB of memory.
Now, this is the minimum amount of memory. Not all features will work with only 8GB of memory. One example is a new feature for developers within Apple's Xcode app. With Xcode 16, developers will have the option of using Apple's Predictive Code Completion Model. When you install Xcode 16, there is an option that allows you to download the Predictive Code completion model, but only if your Mac has 16GB of memory or more. To illustrate this, if you have a Mac mini with 8GB of memory, you will get the following installation screen.
Similarly, let us say you have a MacBook Pro with 32GB of unified memory, you will get this installation screen.
As you can see, the Predictive Code Completion checkbox is not even an option on the Mac mini with 8GB of memory. And Predictive Code Completion covers a relatively limited domain of knowledge; Swift, while being a large programming language, is narrow in scope, yet even that model does not work on 8GB.
It would not be presumptuous to think that this may be the case for various Apple Intelligence models going forward. Now that we have covered the minimum requirements, let us look at how to enable Apple Intelligence.
Enabling Apple Intelligence
As outlined above, Apple Intelligence is available for compatible devices running iOS 18.1, iPadOS 18.1, or macOS Sequoia 15.1. However, Apple Intelligence is not automatically enabled; instead, you will need to enable it. Apple Intelligence is activated on a per-Apple Account basis, and this only needs to be done once. Once activated, it will need to be enabled per device. To activate Apple Intelligence, perform these steps:
Open Settings on iOS or iPadOS, or System Settings on macOS Sequoia.
Scroll down to "Apple Intelligence".
Tap, or click, on "Apple Intelligence" to bring up the settings.
Tap, or click, on "Join Apple Intelligence Waitlist". A popup will appear.
Tap on the "Join Apple Intelligence Waitlist" button to confirm you want to join the waitlist.
Once you do this, you will join the Apple Intelligence waitlist. It may take some time before you are able to access the features. Once your Apple Account has had Apple Intelligence activated on it, you will then get a notification on your device indicating that Apple Intelligence is ready.
At this point, you can click on the "Turn On Apple Intelligence" button, and a popup will appear that will allow you to enable the features. Once you have enabled Apple Intelligence on your device, you will be able to use the features.
Closing Thoughts on Apple Intelligence
Many artificial intelligence tools require sending your private data to a server in the cloud to be able to perform a particular task. Doing this has the potential to not only leak your private data, but also to allow your private data to be used to train additional artificial intelligence models. This is antithetical to the core values of Apple, so Apple has taken a different approach with their own artificial intelligence, which they are calling Apple Intelligence.
Apple Intelligence is designed to work on your private data and maintain that privacy. This is accomplished through a service called Private Cloud Compute. Private Cloud Compute is a set of servers in Apple's own data centers that are built on Apple Silicon, utilizing features like the Secure Enclave to maintain the integrity of the server. Beyond this, each time that a request has been completed, the previous keys are wiped, and the server is completely reset and reinitialized, with no data being retained between reboots.
Apple Intelligence is designed to help you accomplish tasks that you need, like summarizing text, generating new emojis, creating images, and more.
Apple Intelligence will be a beta feature starting in late 2024, with some overall features not coming until 2025, and it will be English only at first. Furthermore, these features will not be available in the European Union, at least not at first.
Apple Intelligence will have some pretty stiff requirements, so it will not work on all devices. In fact, you will need a Mac with Apple Silicon (M1 or newer), an iPad with an M1 or newer or an A17 Pro, or an iPhone with an A17 Pro, A18, or A18 Pro. That means the iPhone 15 Pro/Pro Max, iPhone 16/16 Plus, and iPhone 16 Pro/Pro Max can take advantage of the Apple Intelligence features.
This is merely an introduction to Apple Intelligence. There will be more articles in this series, so be sure to check out those articles.