Apple has gotten into what society is calling "Artificial Intelligence", or "AI", though they have put their own spin on the term. One of Apple's core tenets is protecting user privacy, and they attempt to do so whenever possible. To accomplish this, Apple performs as many requests on the device as possible. However, for advanced requests that may not always be possible. For those requests there is a feature called Private Cloud Compute. Private Cloud Compute is a set of servers run by Apple that can handle your request, provide the result, and then be completely erased. If you need more details, be sure to check out the Introduction to Apple Intelligence post.
Apple is releasing the Apple Intelligence features in batches. The first set of features was released in iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1. This set included "Typing with Siri", "Summarization in Mail", and "Writing Tools". The second set of Apple Intelligence features is being released with iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2. There are four new features in this batch. This article will cover the integration with ChatGPT.
Apple Intelligence is designed to work with your personal data. This is all well and good, but sometimes you need more than just your personal context; in those cases, some broader, more worldly knowledge is required, and this is where ChatGPT can be useful.
iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2 can all integrate Siri with ChatGPT so you can ask about information that Siri does not know about. You will need to enable ChatGPT; more on that in a moment. Before we dive into that, let us talk about privacy.
Privacy
It seems like every company today wants to gather as much information about you as possible. This could range from something somewhat innocuous, like where your mouse is, to something extremely personal, like your search history or even personally identifiable information. With ChatGPT on iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2, you are in full control in two ways.
The first way is that you will need to confirm, each time, that you want to send a request to ChatGPT. This prompt only appears when you make a request that warrants being sent to ChatGPT, and it must be confirmed for each request.
The second way is that you have an option of using a ChatGPT account. However, this is absolutely NOT required. If you do not have an account, all of your requests will be sent anonymously to the ChatGPT servers.
Enabling ChatGPT
If you want to enable ChatGPT, you can do so by using the following steps:
Open Settings.
Scroll down to "Apple Intelligence & Siri".
Tap, or click, on Apple Intelligence & Siri to open up the settings.
Under "Extensions" tap, or click, on ChatGPT.
Tap, or click, on "Setup" next to Use ChatGPT to begin the setup wizard. A popup will appear.
Tap on the "Next" button to continue. A "Privacy" explanation screen will appear.
On the "Privacy" screen, tap on the "Enable ChatGPT" button. The optional account screen will appear.
On the "Using ChatGPT with an Account" page, you can Sign In with your existing ChatGPT account, or you can click on the "Enable ChatGPT without an Account" button.
ChatGPT Requests
When you make a Siri request that Siri cannot handle itself, you will see a prompt similar to this:
When you click on the "Use ChatGPT" button, your request will be sent to ChatGPT, and the result will be shown as a standard Siri result, with the ChatGPT information at the bottom of the screen.
ChatGPT Account
With ChatGPT on iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2, you can use the ChatGPT features without signing in. When you do this, you will have a limited amount of usage of the advanced ChatGPT models. If you make additional requests, those requests will use more basic models.
Many people already have a ChatGPT account, and you can sign into that account. When you sign into an account, all of your request history will be saved. There are two types of ChatGPT accounts: a free account and a paid account.
If you do not have a paid ChatGPT account, you can actually upgrade to ChatGPT Plus right from within settings on iOS or iPadOS. To upgrade to ChatGPT Plus, perform the following steps:
Open Settings.
Scroll down to "Apple Intelligence & Siri".
Under Extensions, tap on "ChatGPT".
Tap on "Upgrade to ChatGPT Plus". A popup will appear.
Tap on the "Subscribe" button. An App Store popup will appear.
Tap on the App Store "Subscribe" button to confirm you want to subscribe.
Once you have subscribed, you can use the advanced features of ChatGPT. It should be noted that if you already have a subscription, it should be reflected as such when you sign into your account.
Closing Thoughts
Being able to use ChatGPT for requests that Siri cannot answer natively is a good addition. The fact that you can decide whether or not to send a request to ChatGPT is a great privacy benefit. Along with this, you can use ChatGPT without an account, which means that you can maintain your privacy or keep your request history, depending on your needs.
ChatGPT is currently the only service integrated with Siri, but it is likely that additional ones will be added in the future.
Be sure to check out all of the other articles in the Apple Intelligence series.
Apple has gotten into what society is calling "Artificial Intelligence", or "AI", though they have put their own spin on the term. One of Apple's core tenets is protecting user privacy, and they attempt to do so whenever possible. To accomplish this, Apple performs as many requests on the device as possible. However, for advanced requests that may not always be possible. For those requests there is a feature called Private Cloud Compute. Private Cloud Compute is a set of servers run by Apple that can handle your request, provide the result, and then be completely erased. If you need more details, be sure to check out the Introduction to Apple Intelligence post.
Apple is releasing the Apple Intelligence features in batches. The first set of features was released in iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1. This set included "Typing with Siri", "Summarization in Mail", and "Writing Tools". The second set of Apple Intelligence features is being released with iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2. There are four new features in this batch. This article will cover "Genmoji".
Everyone has a different set of talents. Some people can sing, others can entertain through comedy, yet others can write, and some can even draw. There are many who wish they could create works of art, and yet they cannot.
When Apple introduced Apple Intelligence, one of the features promoted was the ability to generate images. The way that this would be done is through a dedicated application called Image Playground. Sometimes you have an idea of the type of image that you want, but you have no idea how to get started. Image Playground can help.
Image Playground is designed to allow you to generate an image based upon prompts that you give it. For example, you could describe something like “Show an Alien farming with a sci-fi theme”, or you could say “Corgi and a goat, kayak on a lake”, or even “Cow wearing a blue party hat”, or just about anything else you can think of, and it will be generated.
Communications
There are a variety of ways to communicate, including images, video, and even text. When you are communicating with someone via text, it can be difficult to accurately depict what you are trying to say, particularly if you have a limited number of characters. In those situations, you may want to communicate something in a succinct manner.
For text, it can be useful to add a bit of flair or even clarification with something called emoji. Emoji is a standard managed by the Unicode Consortium. As of this writing, there are 3,790 defined emoji characters. This does not include variations, like skin tones for people. Some of these emojis are things you might expect, like a heart, smiley faces, pizza, burger, book, and a giraffe, just to name a few. There are also some rather obscure ones, like Passport Control, Coral, Non-Potable water, and even a Pager.
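Those skin-tone variations are not separate characters; Unicode defines five Fitzpatrick skin-tone modifiers (U+1F3FB through U+1F3FF) that get appended to a supported base emoji. As a minimal Swift sketch, assuming nothing beyond the standard library, here is how the thumbs-up emoji picks up a skin tone:

```swift
// Base emoji: THUMBS UP SIGN (U+1F44D).
let thumbsUp = "\u{1F44D}"

// The five Fitzpatrick skin-tone modifiers defined by Unicode (U+1F3FB through U+1F3FF).
let skinTones = ["\u{1F3FB}", "\u{1F3FC}", "\u{1F3FD}", "\u{1F3FE}", "\u{1F3FF}"]

for tone in skinTones {
    let variant = thumbsUp + tone
    // Each variant is two Unicode scalars, but Swift treats it as one user-perceived character.
    let scalars = variant.unicodeScalars.map { "U+" + String($0.value, radix: 16, uppercase: true) }
    print(variant, scalars, variant.count) // e.g. 👍🏽 ["U+1F44D", "U+1F3FD"] 1
}
```

Genmoji, as covered below, are a different beast: they are rendered images rather than Unicode characters, which is why they are handled as stickers instead of plain text.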
While it is likely that you will find something that will work in your own situation, you may not be able to find just the right emoji. This is where Custom Emoji can be helpful. Apple calls this Genmoji.
Much like other Emoji-related items, like Memoji, Genmoji are generated emojis.
Generating Emoji
If you are not able to find just the right emoji that you need, you can now create one with Genmoji. To create an emoji, you can use the following steps:
Open the Emoji keyboard.
Start typing the emoji that you are looking for.
When the emoji is not found, tap on “Create new Emoji”. A popup will appear.
A screen similar to that of Image Playground will appear, where four possible versions will be generated. You can swipe between the variations to see which one you want to use. If you do not like the ones that have been generated, you can swipe to the right and additional versions will be created.
Once you have found one that works for what you want, you can then use it in the app of your choosing. The custom emoji will be saved as a sticker that you can use throughout iOS, iPadOS, and macOS Sequoia, and it is automatically synchronized across your devices; however, Genmoji can only be created on iOS 18.2 and iPadOS 18.2.
Genmoji Details
Let us say that you have generated a custom emoji and have used it for a while; now you may want to create something similar, but you may not remember the description that you used. You are in luck: you can view the details of the emoji by using the following steps:
Locate the custom emoji that you want to get details about.
Tap and hold, or right-mouse click, on the emoji. A menu will appear.
Tap on the “Details” menu item.
Once you tap on "Details", a popup will appear that provides the data used to create the emoji. The information is only the prompt used, but this may be enough for you to re-create the custom emoji or create another version of it.
Custom Emoji Examples
Here are a few custom emojis that I have generated, including the text used to generate them.
Sparkle Face
Rainbow Colored Computer
Turquoise Computer
Closing Thoughts
Even though there are nearly 3,800 standard emojis, there is still a plethora of other emojis that one might want to use that are not available. This is where Genmoji can be useful. You can generate any number of new emojis that you want. Much like Image Playground, you can describe the emoji that you are looking for, and one will be generated.
When you do generate an emoji, it will be saved and automatically synchronized across your devices running iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2. This means that you can use all of your emojis within many apps within the operating systems.
Be sure to check out all of the other articles in the Apple Intelligence series.
Apple has gotten into what society is calling "Artificial Intelligence", or "AI", though they have put their own spin on the term. One of Apple's core tenets is protecting user privacy, and they attempt to do so whenever possible. To accomplish this, Apple performs as many requests on the device as possible. However, for advanced requests that may not always be possible. For those requests there is a feature called Private Cloud Compute. Private Cloud Compute is a set of servers run by Apple that can handle your request, provide the result, and then be completely erased. If you need more details, be sure to check out the Introduction to Apple Intelligence post.
Apple is releasing the Apple Intelligence features in batches. The first set of features was released in iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1. This set included "Typing with Siri", "Summarization in Mail", and "Writing Tools". The second set of Apple Intelligence features is being released with iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2. There are four new features in this batch. This article will cover a feature called "Image Playground".
Everyone has a different set of talents. Some people can sing, others can entertain through comedy, yet others can write, and some can even draw. There are many who wish they could create works of art, and yet they cannot.
When Apple introduced Apple Intelligence, one of the features promoted was the ability to generate images. The way that this would be done is through a dedicated application called Image Playground. Sometimes you have an idea of the type of image that you want, but you have no idea how to get started. Image Playground can help.
Image Playground is designed to allow you to generate an image based upon prompts that you give it. For example, you could describe something like "Show an Alien farming with a sci-fi theme", or you could say "Corgi and a goat, kayak on a lake", or even "Cow wearing a blue party hat", or just about anything else you can think of, and it will be generated.
Generating Images
The way that you generate images is quite simple; you can use one of two approaches. The first is to add individual words to help describe what you are looking for. The second option is to use full sentences to indicate what you are looking for.
Once you have added your words or description, an image should be generated. You can add additional words to help refine the image that has been generated. In fact, there will be four variations generated. The images that are generated are square and come in at 1024 pixels on each side. You can cycle through the four generated images, and if you do not see one that is what you are expecting, you can click on the right arrow, and it will generate another new image using the same phrase and words that you used previously.
After an image is generated, you may want to go back and change the description that you have chosen. This can be done by tapping or clicking outside of the image. When you do this, you will be brought back to the "Description" screen with all of your selections or entered words.
Tip
A tip for when you generate an image is to be as descriptive as possible. The more detailed, the better the results will be. Each prompt item can be a maximum of 100 characters, but you can add additional phrases.
Suggestions
Sometimes you may not be able to think of something to generate. For those instances, there are the "Suggestions". Suggestions can be one of a variety of types. These include:
Themes
Costumes
Accessories
Places
For each of these types, there will be twelve options. You can click on any of them and add them to your image.
Using People and Pets
It is possible to generate images that do not include people; however, once you add a word that could even possibly relate to a person, Image Playground will REQUIRE you to add a person. This does not necessarily need to be a photo of a person, although that is the intention. Instead, you can use an "appearance".
Appearances
An "Appearance" is one of five skin tones matched with one of three appearances of the type of person. One thing that did not make sense was why there were three different options.
Using Photos
If you prefer to create an image based on a photo, you can do so. You can use any photo that you would like, and it will generate an image from the photo. The photo could be of a person, a pet, a nature scene, or any other photo. Image Playground will do its best to try and generate an image to your specifications.
Styles
When Image Playground was initially introduced, it was indicated that there would be three different styles: Animation, Illustration, and Sketch. The last one, Sketch, is not currently present, and it is unknown if or when it will be added. However, you can still generate images in either the Animation or Illustration style.
If you have generated an image in one of the available styles, you will be able to switch the style by clicking on the "Style" button and then clicking on another style. The image will be regenerated in the requested style. You might think that if you switch back, the image would be regenerated again, but it will not. Instead, the previously generated image will be shown.
Saving Images
Once you have finalized the image, you have a few options. The first is that you can save it. When you do this, it will appear in your Image Playground library. If you have edited an existing image and attempt to save it, you will be asked whether you want to overwrite the image or save it as a duplicate. The popup will appear like this:
One thing I would recommend is to save as a duplicate; you can always delete an image later if desired.
Sharing Images
If you want to show the generated image to someone, you can share it. To do this, you can use these steps:
Locate the image you want to share.
Click on the "Up Arrow" button to bring up the system Share Sheet.
Alternatively, you can tap and hold, or right-mouse click, on the image and then select the "Share" item to open the Share Sheet.
Regardless of the manner in which you bring up the Share Sheet, you can then perform any action defined within your Share Sheet, including copying, saving, sending, emailing, or even AirDropping the image.
Image Playground within Notes
One of the features of the Notes app is the ability to draw directly within the app. This feature was introduced with iOS 9 in 2015 and was designed for the original iPad Pro, which introduced the Apple Pencil. Since its introduction, the Notes app has gained even more features, including the tool palette. With iPadOS 18.2, there is a new tool called "Image Wand".
The Image Wand tool is integrated directly into Notes and can be used to take a drawn image and generate an image from the drawing. To generate an image from a drawing, perform the following steps:
Select the drawing that you want to use as a basis for the generated image.
Tap and hold to bring up the menu.
Tap on "Add to Playground" to open up the image generation popup.
Enter some text to help describe what you are looking for.
Hit the "up arrow" to generate the image.
Just like Image Playground, four different options will be shown. You can swipe between them to find the right image. Once you have found the proper image, tap on "Done" to add it to the current note.
Generating from Text
Being able to generate an image from a drawing is useful, but sometimes you want to augment a piece of text with an image. If you have a description that you want to use to generate an image, perform the following steps:
Select the text you want to use to generate an image.
Tap and hold to bring up the menu.
Tap on "Add to Playground" to bring up the image generation popup.
Swipe between the generated options until you find a suitable image.
Tap on "Done" to have the picture added to the current Notes document.
I tested using this text: "Imagine, if you will, a dilapidated house with broken windows, shutters hanging off of the sides, on a hill". This was one of the results:
Gallery of Examples
Here is a gallery of examples from Image Playground:
Photo-based
Photo-based
Closing Thoughts
Image Playground is capable of taking your input, whether it be text or an image, and generating an image. Whether you use text only or add an image, you can add additional words or descriptions to create the image that you are looking for. It may take a few tries to get the result you want. One tip is to be as descriptive as possible.
For your images, you have a couple of different options for styles, including Animation and Illustration, and the image will be 1024 x 1024. After you have generated your image, you can share it with others, including emailing, AirDropping, or even messaging, depending on what you need.
It is hoped that Apple increases the size of the images, and maybe even allows different dimensions, because some sizes would work better in certain situations than others.
Be sure to check out all of the other articles in the Apple Intelligence series.
Apple has gotten into what society is calling "Artificial Intelligence", or "AI", though they have put their own spin on the term. One of Apple's core tenets is protecting user privacy, and they attempt to do so whenever possible. To accomplish this, Apple performs as many requests on the device as possible. However, for advanced requests that may not always be possible. For those requests there is a feature called Private Cloud Compute. Private Cloud Compute is a set of servers run by Apple that can handle your request, provide the result, and then be completely erased. If you need more details, be sure to check out the Introduction to Apple Intelligence post.
Apple is releasing the Apple Intelligence features in batches. The first set of features was released in iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1. The second set of Apple Intelligence features is being released with iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2. There are four new features in this batch. This article will cover "Mail Categorization".
If you are anything like me, you have a bunch of different email accounts. You may have one account that you set up a long time ago that has become nothing but spam, another that you use every day, and maybe another that is for a specific purpose. No matter how many email accounts you have, it is likely that you get a lot of email messages. The emails may be order notifications, newsletters, or even promotions.
What would be useful is being able to put emails into various groups or categories. Apple Intelligence on iOS 18.2 can do just that. Mail Categorization will put your emails into one of four different categories. These categories are:
Transactions
Updates
Promotions
Primary
Let us look at each in turn, starting with Transactions.
Transactions
The “Transactions” category will have any orders that you have made, including things like delivery notifications. These are grouped by sender. What this means is that all of the messages from a specific vendor will be grouped together so you can see them all in one place. Here is an example of what that might look like:
Promotions
The "Promotions" category is meant for advertisements. Just like the other categories, the messages here are grouped by sender.
Mail App Badge
When you have Mail Categorization enabled, as well as badges turned on for the Mail app, by default only the emails in the "Primary" category will badge the app icon. However, if you would like, you can change this to be "All" mail. To do this, perform the following steps:
Open Settings.
Scroll down to “Apps”.
Tap on “Apps” to open the app list.
Scroll down to “Mail”.
Tap on “Mail” to open up Mail’s app preferences.
Tap on “Notifications” to bring up the Notifications section.
Tap on “Customize Notifications” to bring up notifications options.
Under Badges tap on “All Unread Messages”.
Once you select this option, the badge for the Mail app will show whenever there is an unread message in one of your email accounts. This is the same behavior that Mail exhibited in iOS 18.1. Anything that is in a folder or in Junk Mail will not be counted.
Disabling Categorization
If you enable Apple Intelligence and find that Mail Categorization is not for you, there is an option to disable it. This can be done by performing the following steps:
Open the Mail app.
Tap on “All Inboxes”, or any individual inbox.
Tap on the “…” button in the upper corner. A popup will appear.
Tap on “List View”.
Once you enable list view, Mail will behave just like it did under iOS 18.1, which means no categorization. It should be noted that this can be done on an account-by-account basis, so you can enable categorization for one email account but not for another, customizing the Mail app in a manner that works best for you.
My Thoughts
After having used mail categorization for a while, I am on the verge of disabling it. The primary reason for this is not because of any categorization missteps, but because I prefer to take care of all of my emails, and the badging does not always show when I have a new email. Because of this, I end up not seeing mail until I open the app again. This is my personal preference for how I handle email. It is likely because I do not get a lot of email, at least not email that is not already caught by spam filters. Therefore, I do not necessarily need to automatically categorize emails as others might.
To a much lesser extent, the fact that Mail Categorization is only available on the iPhone is a misstep from Apple. I understand that the iPhone is the biggest platform, but having feature parity, particularly when it comes to experiences that are cross-platform, is imperative for some users. Having some mail categorized and some not may cause some users to disable the feature and never re-enable it.
Closing Thoughts
The ability to have email automatically categorized is a good addition to iOS. Unfortunately, it is only available on the iPhone at this time. It may expand to iPadOS and macOS at some point in the future, but Apple has not indicated any plans for that to occur.
While Apple Intelligence tries to properly categorize your incoming email, it may not always get things correct. In those instances, you can recategorize any incoming message, and it should be properly categorized in the future.
It should be reiterated that Mail Categorization is only available on iOS. It is not available on iPadOS nor on macOS. Therefore, on iPadOS and macOS, those apps will behave as they did previously.
Be sure to check out all of the other articles in the Apple Intelligence series.
One thing you may have been able to ascertain, if you have read the site for any length of time, is that I have been a nerd for a long, long time. Part of that nerdom is constantly being online. I have had a connection to the internet since 1995. At first, as with anybody at the time, it was dial-up with a couple of different providers, primarily MSN and AOL. Because we were always online and had a big family, we ended up getting a second phone line that was just for the internet. Given the pace of change during the 1990s, we would upgrade our computer every few years, and with that would come an update to the modem speeds. Over that time, we had a variety of speeds: 14.4k, 28.8k, 33.6k, and 56k. It may be difficult to remember what it was like, but it was a different time indeed.
Over the 1990s and early 2000s, my family had a number of different computers. We needed them because we had a big family, and many people needed to use a computer simultaneously. I think it was 1998 when my parents got me a hub for either my birthday or Christmas. Yes, it was a hub, not a switch. With the hub, I was able to connect the multiple computers we had together. This worked well because I could install the single printer we had on all of the computers if we needed to print something. Plus, setting this up at the time served us well going forward.
In the early 2000s, our cable provider started providing cable internet, and as you might expect, that was a game changer. I honestly do not recall the exact speeds we got, but it was definitely way faster than dial-up, and it was always on. If I did the calculations correctly, it was a whopping 10 megabits per second, which works out to roughly 350 times faster than a 28.8k modem (10,000 ÷ 28.8 ≈ 347). It was a complete change from dial-up internet.
Since then, I have mostly had cable internet, except for a few years when I ended up getting DSL, but for the last 15 years, it has been cable internet. Along with this, I have purchased and used my own cable modems, primarily Arris/Motorola Surfboard models. I have also had my own wireless routers, including multiple Apple AirPort base stations and even a couple of different Eero models, but recently, things have changed.
Most nerds want the fastest speeds that they can get, but at some point, it comes down to the limits of the remote server that you are connecting to. Along with this, many nerds, including myself, transfer a lot of data. For many years, Comcast did not have any data caps, but in 2016, they did end up implementing them in my area. It was not until 2020 that I ended up getting unlimited data at $30 per month. As an example, going back 12 months, the maximum shown by Xfinity, I am averaging 1.3 terabytes of data transfer per month, and with a cap of 1.2 terabytes, paying for unlimited data is cheaper overall.
In 2017, Comcast updated my speeds from 75Mbps to 100Mbps. In 2019, I had 150Mbps. Again in 2021, my speeds went from 200Mbps to 300Mbps. In October of 2022, my speeds went from 600Mbps to 800Mbps, where they have remained since. For me, download speeds of 800Mbps are plenty; the thing that was really irksome was the upload speeds. The maximum upload speed I could get on my Arris SB8200, which I just bought in October of 2023, has been 20Mbps, with a possible burst of up to 24Mbps. In December of 2023, I received an email indicating that I could get 5x to 10x faster upload speeds if I replaced my cable modem. Having just replaced it, I was a bit irritated because I bought the modem on the recommendation of Xfinity, thinking it would provide the faster upload speeds, but unfortunately, it does not.
Even though I had just replaced it, I have been debating replacing my cable modem again, just to get the faster speeds, but it seems like a waste to do so. Regardless, some solution would be needed to get faster upload speeds. Here is what I ended up doing.
Faster Speeds
A couple of weeks ago, I went to talk to a rep at my local Xfinity store, and they indicated that I could only get 20Mbps upload, regardless of the plan. I left that day and did not do anything at that point. I ended up calling Xfinity to verify, and they could not verify anything because the plan that I had was no longer offered. What seemed a bit off is that the limit of 20Mbps was in direct contradiction to the broadband labels on the Xfinity website, which indicated that I could get 100Mbps upload, with an average of 169.62Mbps. Due to this discrepancy, I ended up going back to the store because something was amiss.
I walked in, and a representative asked how they could help. I informed him that I wanted to verify the internet speeds shown on the website. We got to talking, and he indicated that the 20Mbps was the minimum I would get, which was not what the previous rep, nor the rep on the phone, had indicated. I explained to him that I was looking to get faster upload speeds.
As mentioned, one way to get these faster speeds was to just buy a new cable modem, and that should have worked. But I was paying more than $130 per month. Because of this, when I walked in, I had already planned on getting the Xfinity XFi gateway, since it would have been $5 per month cheaper than paying for unlimited data with my own cable modem. I was also prepared to upgrade to the 1Gbps plan, since they technically no longer offer the 800Mbps plan I was on. While talking to the representative, he indicated that they actually had a promo for the 2Gbps plan, as long as I opted for a 2-year contract. Now, I typically try to avoid any sort of contract whenever possible, but it was a pretty good deal.
Ultimately, I ended up saving $21 per month, which is not an insignificant amount of money; that's just over $250 a year. I would prefer to have AT&T, because its fiber service is completely symmetrical for downloads and uploads, but it is not likely that AT&T will have fiber in my area anytime soon. While I was in the Xfinity store, one of the other reps mentioned that even higher upload speeds may be arriving in the coming months, and that symmetrical speeds are ultimately not that far away. Now, let us look at the XFi gateway.
Xfinity XFi Gateway
The rep that I talked to at the Xfinity store indicated he was providing me with their latest XFi gateway, the XB8. The XB8 is capable of handling up to 2.5Gbps, and it includes built-in Wi-Fi 6E. Beyond the wireless, there are a number of ports on the device. The XB8 has six ports on it. Two of these are for voice, which I do not have, and the other four are Ethernet connections. There is a single 2.5Gbps jack on the back, in the lower right corner. The remaining three Ethernet jacks are 1Gbps each.
When I initially set up the XFi gateway, I did not worry about the Wi-Fi because I was planning on using my existing Eero Pro 6, but after I found out that the XB8 has Wi-Fi 6E, I opted to use that.
Configuration
The initial setup of the XFi was straightforward. I used the Xfinity app to activate it, and once it did its initial update and was activated, everything just worked. It really was a simple and easy setup. Afterward, I did a lot of testing to make sure that I was indeed getting more than 20Mbps upload speeds, and of course I was.
In order to get onto the internet, every device on your network has to have an IP address. This is usually handled by some sort of router, including ISP-provided devices.
There are a number of devices within my network that I would prefer to have a static IP address, including, but not limited to, my printer. The XFi gateway defaults to using the 10.0.0.0/24 range, which provides enough addresses for roughly 250 devices. You have a couple of other options as well. You can use a /16, which allows more than 65,000 devices, a /25, which allows 126 usable addresses, or even a /8, which allows for more than 16 million hosts. The default is usually enough for most people.
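For reference, those device counts come straight from the prefix length: a subnet with prefix /n has 2^(32-n) addresses, two of which are reserved for the network and broadcast addresses (and the gateway itself takes one more from what remains). Here is a quick Swift sketch of that math; the actual DHCP pool the XFi hands out may be slightly smaller than the theoretical maximum.

```swift
// Usable hosts in an IPv4 subnet of a given prefix length:
// total addresses minus the network and broadcast addresses.
func usableHosts(prefix: Int) -> Int {
    let total = 1 << (32 - prefix)
    return max(total - 2, 0)
}

for prefix in [8, 16, 24, 25] {
    print("/\(prefix): \(usableHosts(prefix: prefix)) usable hosts")
}
// /8: 16777214   /16: 65534   /24: 254   /25: 126
```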
When you configure a Dynamic Host Configuration Protocol, or DHCP, address range, you will also need to provide a lease time. By default, the XFi has a super short setting. I prefer a bit longer time, and for me, I thought an infinite, or forever, lease would make the most sense.
There are a couple of different approaches for setting a static IP on the XFi. You can connect the device, and then switch it to a static, or “Reserved” IP. The other option is to assign the IP address ahead of time. When I initially started trying to reassign devices, it was not working. The IP address would not actually change.
Eventually, I figured out that it was the amount of time that I had set on the DHCP lease. Once I changed this from “forever” to “2 minutes”, everything started being able to be re-assigned. This is not documented anywhere that I could find, but it was what I experienced.
Now, let us look at some speedtest results.
Speedtests
Of course, one of the first things that I did after getting the XFi gateway activated was to run a speed test from my MacBook Pro. My first couple of speed tests were showing around 340Mbps down and 100Mbps up. I was glad to see the upload speeds be that much higher; however, the 340Mbps download was a bit odd. That was until I remembered that I was using iCloud Private Relay.
After disabling iCloud Private Relay, I tested again, and it was much better: 600Mbps down and 212Mbps up. One feature of the Eero is the ability to run a speed test, and the results from that were 947Mbps download and 347Mbps upload. I wanted to verify this, and the best way for me to do so was to run a speed test using a wired connection. I used my Mac Studio for this test, and it provided around the same results, 927Mbps down and 341Mbps up. This is an absolute upgrade for my connectivity. However, this is far from utilizing the full 2Gbps.
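If you would rather not rely on a browser-based test, macOS (Monterey and later) includes a built-in networkQuality command-line tool that measures download and upload capacity along with responsiveness. Here is a minimal Swift sketch that simply launches it, assuming the standard install location of /usr/bin/networkQuality; running the command directly in Terminal works just as well.

```swift
import Foundation

// Launch Apple's built-in network test tool (ships with macOS 12 and later).
let test = Process()
test.executableURL = URL(fileURLWithPath: "/usr/bin/networkQuality")

do {
    try test.run()
    test.waitUntilExit()   // the tool prints its own progress and results
} catch {
    print("Could not launch networkQuality: \(error)")
}
```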
Utilizing the Faster Speeds
Now that I have the 2 gigabit per second plan, I would need to find a way of utilizing as much of it as possible. The 2Gbps connection will allow me to download many things simultaneously on many of the devices connected to my network, all without needing to worry too much about how long it will take. This will be particularly useful when there is a new iOS or macOS update, because I tend to download them on multiple devices at once. But being able to use 2Gbps on some devices would be a nice benefit from time to time. One particular situation that I can think of is if I have a catastrophic failure of one of my drives and I need to download the backup from my online backup provider, which, sadly, I have had to do in the past.
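To put the difference in perspective, here is a rough back-of-the-envelope calculation for how long such a restore would take at various link speeds. The 1 TB backup size is purely an assumption for illustration, and real-world times will be longer once protocol overhead and the provider's own limits come into play.

```swift
import Foundation

// Rough transfer time: size in terabytes, link speed in megabits per second.
// Ignores protocol overhead, server-side limits, and Wi-Fi variability.
// The 1 TB figure below is a hypothetical backup size, not a real measurement.
func hoursToTransfer(terabytes: Double, megabitsPerSecond: Double) -> Double {
    let bits = terabytes * 8_000_000_000_000      // 1 TB = 8 trillion bits (decimal units)
    let seconds = bits / (megabitsPerSecond * 1_000_000)
    return seconds / 3_600
}

for speed in [100.0, 800.0, 2_000.0] {
    let hours = hoursToTransfer(terabytes: 1.0, megabitsPerSecond: speed)
    print(String(format: "%5.0f Mbps: about %.1f hours", speed, hours))
}
// 100 Mbps ≈ 22.2 hours, 800 Mbps ≈ 2.8 hours, 2000 Mbps ≈ 1.1 hours
```

The same math applies in reverse for uploading a backup in the first place, which is where going from 20Mbps up to roughly 200Mbps up makes the biggest practical difference.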
One of the limitations of the Eero is that it only has two ports, one for local connections and another for connecting to the cable modem. Therefore, that has required me to use switches. Beyond this, the Eero Pro 6 is limited to 1Gbps ethernet, so I could not have utilized the faster speeds even when connected via ethernet, which was one of the reasons I opted to use the XFi gateway.
The big issue with trying to utilize a 2Gbps connection, for me, is that almost every single one of my devices with an ethernet connection only has a 1Gbps port. This includes my Mac mini, Apple TVs, and even the multiple Raspberry Pis that I have. The single exception to this is my Mac Studio, which has a 10Gbps ethernet jack. This is all well and good, but all of the switches in my house are 1Gbps switches. The reason for this is that I did not need anything faster than 1Gbps because my internet connection was only 800Mbps. I could stick with just what I have, and in 99.9% of cases this would be sufficient, yet I prefer to use as much of my internet connection as I can. All of this led to me needing to buy a new switch for my Mac Studio to connect to.
I did a bit of research to see what options were available, and as you might expect, there are a lot of options. I immediately opted not to get a 10Gbps switch, not just because I have no real use for it but also because they are expensive; even a 5-port 10Gbps switch is between $225 and $300. I know I do not need any sort of management. I just need it to connect to my Mac Studio.
I have three switches in my house, and all of these are TP-Link switches; two have 8 ports, and one is a 5 port. I could have just purchased a 5-port switch with even just two 2.5Gbps ports, but I did not want another device that would need to be plugged in all of the time. Instead, I opted to replace the 8-port switch that is currently connected to my Mac Studio, as well as the other items near it, so I needed another 8-port switch. I do not use anything with Power over Ethernet, or POE, so that was not something I needed to worry about.
Initially, I thought I might need two switches: one to go from the area where my Mac Studio is down to another switch and then to the cable modem. However, given that the XFi Gateway has the four Ethernet ports, this was completely unnecessary. I only needed a single 2.5Gbps switch.
Given how well my current TP-Link switches have functioned, I opted to buy another TP-Link switch. This time it is the TP-Link TL-SG108S-M2. I chose this one because all 8 ports are 2.5Gbps ports, it is a small switch, and it was less than $100. Given that I was just swapping out the switches I had, it took less than 5 minutes to fully swap out the two devices. It was a simple process: unplug each of the existing cables and plug it into the new switch.
Once it was connected, it was time for another speed test. Here are those results:
As you can see, the 2.5Gbps switch absolutely allows me to utilize the full download speeds that I am subscribed to. One of the things that I realized as I was installing the 8-port switch is that I could have just stuck with a 5-port switch and been fine, but having more ports is always useful, and the price difference was not enough for me to only get the 5 ports. If the 8-port had been twice the price of the 5-port, it might have been different, but it was not.
Here is another from Akamai's server in Chicago:
Closing Thoughts
Initially, the gateways provided by many internet service providers were subpar and had just enough features to say that they provided what everyone needed. However, these days many end users expect more, and companies have stepped up to provide some of the latest features. In the past, you may have wanted to just get your own wireless router, but even as a nerd, it may be worth using the gateway supplied by your ISP. I long resisted using Xfinity's cable modems, but switching ended up reducing my monthly cost, eliminating a piece of network equipment, and even upgrading my Wi-Fi, making it the most cost-effective solution overall.
With that said, there may be a day when I end up buying a Wi-Fi 7 router just for devices that can utilize it; however, as of this writing, I only have a single Wi-Fi 7 device, my iPhone 16 Pro Max, and it is absolutely not worth buying a separate wireless router just for that one device.
Today, Pixelmator has announced that it has agreed to be acquired by Apple. From the brief posting:
Today we have some important news to share: the Pixelmator Team plans to join Apple.
We’ve been inspired by Apple since day one, crafting our products with the same razor-sharp focus on design, ease of use, and performance. And looking back, it’s crazy what a small group of dedicated people have been able to achieve over the years from all the way in Vilnius, Lithuania. Now, we’ll have the ability to reach an even wider audience and make an even bigger impact on the lives of creative people around the world.
Regarding any immediate changes, the post states:
Pixelmator has signed an agreement to be acquired by Apple, subject to regulatory approval. There will be no material changes to the Pixelmator Pro, Pixelmator for iOS, and Photomator apps at this time. Stay tuned for exciting updates to come.
My Thoughts
This could be huge in many respects. I suspect there are two possible outcomes. The first is that once the deal closes, many of Pixelmator's features could be incorporated into Apple's own Photos app. Furthermore, I could see Apple utilizing Pixelmator as a means of testing out early Apple Intelligence features, particularly within the Photomator app, given that the purpose of that app is to allow you to edit your photos in a non-destructive manner. By using this approach, they could test out new AI features faster before incorporating them into the main Photos app.
The second outcome is a bit different. There are other companies, particularly Adobe, which have artificial intelligence photo enhancement tools already incorporated into their products. Apple likely needs something that can compete. While Apple could absolutely build something, it would take some time. It would be faster to acquire an existing product, and Pixelmator is likely that product.
I can honestly see Pixelmator and Photomator quickly becoming the new "Image Playground" apps. It is undoubtedly an undertaking to incorporate Apple's image generation tools into Pixelmator and/or Photomator, but that would be much less of an expense than building out an entirely new app. I could then easily see Apple providing these two apps for free with basic features, but keeping the subscriptions for Pixelmator and/or Photomator as the basis for more advanced photo features powered by Apple Intelligence.
Undoubtedly, it will be interesting to see how Apple incorporates the apps into their own product suite, or what they end up doing with Pixelmator in the long run.
Technology is consistently entertaining new crazes. Some examples include blockchain, subscription juicers, netbooks, 3D televisions, hyperloop, and "hoverboards", just to name a handful of examples. All of these were going to be "the next big thing", but none of these have panned out as the inventors intended.
There has been a term bandied about that people think may be the end-all for computers. Said term is "Artificial Intelligence", or "AI". The term "AI" can mean a variety of different things, depending on whom you ask. However, when most use the term AI, what they are expecting is a fully conscious and sentient entity that can think, act, and rationalize as a human would. This is called "Artificial General Intelligence". Today's technology is nowhere close to making this a reality. It is not yet known whether or not Artificial Intelligence will actually live up to its ultimate expectations.
Apple is not known for jumping on bandwagons or being the first to create new categories of technology; they typically leave that to others. However, if there is a technology that they can put their own spin on, they might do so. At their World Wide Developer Conference 24, they introduced one of these types of technologies, called "Apple Intelligence".
Apple Intelligence is not a single item; in fact, it goes against the grain of other AI assistants and only works on your data. Apple Intelligence consists of a variety of tools to help you accomplish a specific task. When introduced, Apple indicated that the initial features of Apple Intelligence would be released over the course of the iOS/iPad 18 and macOS Sequoia releases.
The items that comprise Apple Intelligence include: Writing Tools, Image Generation, and Personalized Requests. Initially, Apple wanted to have the first items available with iOS 18; however, during the beta, Apple realized that the features would not be far enough along for an initial iOS/iPadOS 18.0 and macOS Sequoia (15.0) release, so they were pushed to iOS/iPadOS 18.1 and macOS Sequoia 15.1.
Not every device that can run iOS 18.1, iPadOS 18.1, or macOS Sequoia 15.1 is able to support Apple Intelligence. To be able to run Apple Intelligence you need to have one of the following devices:
iPhone 16/Plus (A18)
iPhone 16 Pro/Pro Max (A18 Pro)
iPhone 15 Pro/Pro Max (A17 Pro)
iPad mini (A17 Pro or later)
iPad Air (M1 or later)
iPad Pro (M1 or later)
Apple Silicon Mac (M1 or later)
The reason that these devices are the minimum is a combination of needing 8GB of memory, as well as a neural engine.
This article is part of an ongoing series that covers the features of Apple Intelligence as they become available. This article focuses on the Apple Intelligence feature called "Hide Distracting Items" within Safari.
The Modern Web
It is hard to imagine today's modern world without the internet; it is entirely plausible that modern society would look incredibly different without it. When the internet began, it was used merely as a means of sharing information, mostly by the U.S. government and universities. Of course, this would not last, and not long after the internet was created, regular users began joining.
When non-academics and non-government people joined the internet, they began communicating over bulletin-board systems and creating their own webpages and sites. If you were online in the 1990s, it was a common refrain to hear "do not put your credit card into a site on the internet". Today, though, it is commonplace to do just that.
Running a website is not free; it must be paid for in some manner. There are a variety of ways of supporting a website. Sometimes it is with a direct payment, and other times sites are supported with donations. However, the most common method of websites generating revenue is through ads. But ads are not the only items you will encounter while on the web.
Distracting Items
There are those sites that care for their visitors and actually attempt to minimize the distractions that their visitors encounter. But there are an increasing number of sites that will absolutely bombard you with a variety of items. These can include:
Ads
Autoplay videos
Sign up for a newsletter
Cookie popups
Third-Party Sign in
Use the app
And these are just a few. It is quite possible that you might encounter one, or all, of these on a site, and they can be quite distracting. With iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, there is a new feature that can help, at least within Safari.
Hiding Distracting Items
With Safari in iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, you can now use a feature called "Hide Distracting Items". The "Hide Distracting Items" feature is designed to, as the name indicates, hide distracting items on various webpages. This is not the same as a Content Blocker, but it can work in a similar manner.
Hide Distracting Items requires that you indicate which items are distracting, but this is a pretty straightforward process. To enable Hide Distracting Items, perform the following steps:
Open Safari.
Navigate to the website where you want to hide items.
Tap, or click, on the Square and three lines in the URL bar. This should bring up a menu.
Tap, or click, on "Hide Distracting Items".
When you tap on Hide Distracting Items, an overlay will be shown. This overlay will highlight various elements on the page. You can click on "Hide" to confirm that you want to hide an element. You can click on any number of items that you would like to hide, and they should be hidden.
Once you have completed selecting the items that you want to hide, be sure to click, or tap, on the "Done" button to save your changes. You can also tap, or click, on "Cancel" to not save your changes.
Showing All Hidden Items
In the event that you accidentally end up hiding too many items and you have saved the changes, you can show all of the previously hidden items by using the following steps:
Open Safari.
Navigate to the page you want to show the hidden items on.
Tap, or click, on the Square and three lines in the URL bar. This should bring up a menu.
Tap, or click, on the "Show Hidden Items" button.
Once you click on "Show Hidden Items", all previously hidden items will be shown. It should be noted that this will show ALL previously hidden items, not just from the latest session, but any element you hid. It is not an ideal situation to have to show all hidden items, but it is quite useful should you accidentally hide too many items.
Caveats
The Hide Distracting Items feature is pretty simple to use, but it is not always 100% correct. As an example, you could be attempting to hide a rather egregious ad on a webpage, only to have another ad appear in its place. This happens because of the nature of the Hide Distracting Items feature. It will do its best to consistently hide the items, but if the element's identifier changes between page loads, it might not always hide the element.
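Conceptually, hiding an element means remembering a selector for it and re-applying the hide every time the page loads; if the site renames or regenerates that identifier, the rule silently stops matching. Apple has not documented how Hide Distracting Items tracks elements internally, but WebKit's content-blocker API works on the same selector-based principle, so here is a minimal Swift sketch using that API with a hypothetical #sidebar-ad selector, purely to illustrate the failure mode:

```swift
import WebKit

// A single css-display-none rule keyed to a hypothetical element id.
// If the site changes that id between page loads, the rule no longer matches,
// which is the same failure mode described above.
let rulesJSON = """
[
  {
    "trigger": { "url-filter": ".*" },
    "action": { "type": "css-display-none", "selector": "#sidebar-ad" }
  }
]
"""

WKContentRuleListStore.default().compileContentRuleList(
    forIdentifier: "HideSidebarAd",
    encodedContentRuleList: rulesJSON
) { ruleList, error in
    guard let ruleList else {
        print("Failed to compile rules: \(String(describing: error))")
        return
    }
    // Attach the compiled rules to a web view configuration before loading pages.
    let configuration = WKWebViewConfiguration()
    configuration.userContentController.add(ruleList)
}
```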
Closing Thoughts on Hide Distracting Items
The modern web is chock-full of ads, popups, and just general distractions. It has not always been this way, but many are reluctant to pay for content, and instead of paying with money, you pay with attention and data. Apple has added a new feature to help reclaim some of that attention; this is called Hide Distracting Items.
With "Hide Distracting Items" you can hide any element on a website. This could be an ad, a popup, or any other distracting item. This works in most situations, but it is foolproof and sometimes items that you have hidden will appear again. If you do manage to accidentally hide some elements on a webpage, you can undo all of them in one fell swoop.
Even though the feature does not work 100% of the time, it does work a majority of the time, so it may be worth exploring for those sites that are egregious with their ads and popups.
Be sure to check out all of the other articles in the Apple Intelligence series.
Technology is consistently entertaining new crazes. Some examples include blockchain, subscription juicers, netbooks, 3D televisions, hyperloop, and "hoverboards", just to name a handful of examples. All of these were going to be "the next big thing", but none of these have panned out as the inventors intended.
There has been a term bandied about that people think may be the end-all for computers. Said term is "Artificial Intelligence", or "AI". The term "AI" can mean a variety of different things, depending on whom you ask. However, when most use the term AI, what they are expecting is a fully conscious and sentient entity that can think, act, and rationalize as a human would. This is called "Artificial General Intelligence". Today's technology is nowhere close to making this a reality. It is not yet known whether or not Artificial Intelligence will actually live up to its ultimate expectations.
Apple is not known for jumping on bandwagons or being the first to create new categories of technology; they typically leave that to others. However, if there is a technology that they can put their own spin on, they might do so. At their World Wide Developer Conference 24, they introduced one of these types of technologies, called "Apple Intelligence".
Apple Intelligence is not a single item; in fact, it goes against the grain of other AI assistants and only works on your data. Apple Intelligence consists of a variety of tools to help you accomplish a specific task. When introduced, Apple indicated that the initial features of Apple Intelligence would be released over the course of the iOS/iPad 18 and macOS Sequoia releases.
The items that comprise Apple Intelligence include: Writing Tools, Image Generation, and Personalized Requests. Initially, Apple wanted to have the first items available with iOS 18; however, during the beta, Apple realized that the features would not be far enough along for an initial iOS/iPadOS 18.0 and macOS Sequoia (15.0) release, so they were pushed to iOS/iPadOS 18.1 and macOS Sequoia 15.1.
Not every device that can run iOS 18.1, iPadOS 18.1, or macOS Sequoia 15.1 is able to support Apple Intelligence. To be able to run Apple Intelligence you need to have one of the following devices:
iPhone 16/Plus (A18)
iPhone 16 Pro/Pro Max (A18 Pro)
iPhone 15 Pro/Pro Max (A17 Pro)
iPad mini (A17 Pro or later)
iPad Air (M1 or later)
iPad Pro (M1 or later)
Apple Silicon Mac (M1 or later)
The reason that these devices are the minimum is a combination of needing 8GB of memory, as well as a neural engine.
This article is part of an ongoing series that covers the features of Apple Intelligence as they become available. This article focuses on the Apple Intelligence feature called "Clean Up".
Photo Editing History
There is an old adage that goes "a picture is worth 1000 words"; however, the original quote is from newspaper editor Arthur Brisbane, who said "Use a picture. It's worth a thousand words". The quote is from 1911, back when newspapers were the prime method of obtaining news, and with the limited amount of space available, you could easily put a picture in place of 1000 words. The sentiment of either quote is that it would take about a thousand words to adequately describe a scene, when a single photo can convey the same thing.
Fast forward to today and everything has changed. Written text is still important, but it has been supplanted not only by photos, but also by video. We are nearly two centuries from when the first photograph, "View from the Window at Le Gras" was taken. We have come a long, long way from then. Today's technology can easily take multiple pictures per second when you are using burst mode on a camera.
When film cameras became popular, you would take a photo in the hopes that you would get a usable photo. You would not know right away, because you would need to send your film off to be developed and processed. Once the film was processed, there was typically not a lot that you could do with the photo. That is not to say that some people did not manipulate photos, because of course they did, but it was a skill and not something easily accomplished.
Nearly 35 years ago, a new piece of software was released. That software is called Photoshop. It is quite likely that you have heard of Photoshop, but in case you have not, Photoshop is software created by Adobe that allows you not only to create images, but also to edit photos. It is this latter functionality that many use the software for. Photoshop is not an easy piece of software to use, at least not for the average user. There are millions who are quite proficient with the software (the author of this post is absolutely not one of them).
While it is no longer necessary to hope that you got a good photo, there may still be instances when you want to make some modifications to a photo but do not have the skills to use an app like Photoshop. For these situations, you can use a feature within Photos called "Clean Up".
Clean Up
Clean Up is a new tool that can be used to remove various items from a photo. The Clean Up tool can be found within the editing functions of the Photos app. To access the Clean Up tools, perform the following steps:
Open the Photos app.
Locate the photo that you want to use Clean Up on.
Click on the "Edit" button.
Click on the "Clean Up" button to bring up the Clean Up tools.
Once you bring up a photo, you will have a sidebar that says "Clean Up". Here you will have a single option: the size of the brush. You can adjust the size of the brush by clicking and dragging along the slider. The further right you go, the bigger the brush.
When you bring up a photo for editing, you may notice some items flashing. These flashing objects indicate what Photos thinks you may want to remove. Sometimes, it is correct; other times, it may not be. Let us look at an example.
In the photo below, you will see that it contains some garbage cans, a white car, and a folded chair. In the screenshot, you will see that the garbage cans and car are automatically selected.
If you double-click on any of the highlighted items, they will be removed and their background will be replaced. Here is an example of what that might look like.
-- INSERT SCREENSHOT OF GARBAGE CANS "CLEANED UP" --
Now, you may initially think "Oh, that's pretty good", and at first blush it might be. However, if you look at it closer, it does not work all that well. As an example, the grass has been expanded onto the street. At the same time, the street has been expanded onto the grass. This is not accurate at all.
The thing that I find most aggravating is that you can clearly see a curb curving around behind the garbage cans, yet it has been completely removed from the area that is plainly visible. It is somewhat understandable that the hidden area behind the garbage cans is filled in improperly, but the area that is shown should not really be touched.
Let us look at another example.
In this second photo, you can see a squirrel just chilling on the railing of a deck. Let us say that you want to remove the backing of the chair in the lower portion of the photo. It is the area that is highlighted.
Now, if you remove the chair, you will get something like this:
This is an infinitely better photo. The stiles of the railing on the deck are correct, and it does look very close to what you might expect. The only item that I noticed was that the filled-in area along the far right of the photo is not correct. However, it does make sense given that it does not have any information to fill in that area, besides the dirt at the top of the railing.
Closing Thoughts on Clean Up
Clean Up is a good idea, but it is a tool that can provide mixed results. In some cases, the results are good and acceptable. However, there are also instances where it does not work all that well. Ultimately, whether the proper item(s) will be removed depends on the image and what you are trying to clean up. Hopefully, Apple is able to improve the way that this functionality works and have it function as expected.
Be sure to check out all of the other articles in the series:
Technology is consistently entertaining new crazes. Some examples include blockchain, subscription juicers, netbooks, 3D televisions, hyperloop, and "hoverboards", just to name a handful of examples. All of these were going to be "the next big thing", but none of these have panned out as the inventors intended.
There has been a term bandied about that people think may be the end-all for computers. Said term is "Artificial Intelligence", or "AI". The term "AI" can mean a variety of different things, depending on whom you ask. However, when most people use the term AI, what they are expecting is a fully conscious and sentient entity that can think, act, and rationalize as a human would. This is called "Artificial General Intelligence". Today's technology is nowhere close to making this a reality. It is not yet known whether or not Artificial Intelligence will actually live up to its ultimate expectations.
Apple is not known for jumping on bandwagons or being the first to create new categories of technology; they typically leave that to others. However, if there is a technology that they can put their own spin on, they might do so. At their Worldwide Developers Conference in 2024, they introduced one of these types of technologies, called "Apple Intelligence".
Apple Intelligence is not a single item; in fact, it goes against the grain of other AI assistants in that it works on your own data. Apple Intelligence consists of a variety of tools to help you accomplish a specific task. When introduced, Apple indicated that the initial features of Apple Intelligence would be released over the course of the iOS/iPadOS 18 and macOS Sequoia releases.
This article is part of an on-going series that covers the features of Apple Intelligence, as they become available. This article focuses on the Apple Intelligence feature called "Writing Tools".
Writing Tools
As you might have been able to surmise, the written word is one of the most common forms of communication. This may have started out as handwritten, but now most of today's writing is in electronic form. Often, this is via a messaging service, like SMS, iMessage, WhatsApp, or countless other messaging services. These work well for shorter messages, but for longer forms of work, there are other applications. One example is a word processor. Word processing applications have been around since the mid-1970s and have come a long way since then.
When modern computers first came about, they were quite limited and truly for hobbyists. However, as they gained traction within enterprises, their utility became more apparent. The first word processing software was called "Electric Pencil" and first went on sale in 1976. The first popular word processing application was "WordStar", created by MicroPro International.
WordStar became the market leader but was not the only word processing application available. In the mid-1980s, WordPerfect started gaining traction and became quite popular during the 1980s and 90s. Of course, as you might have surmised, WordPerfect had challengers as well, specifically one that still dominates the market today: Microsoft Word.
If you were to attempt to create a word processor today, you would have a lot of work ahead of you. This is not just because it would be a difficult task, which it would be, but also because of the sheer number of features that one would expect. Some of these features you might be able to get right from the operating system, like printing, formatting (bold, italics, underline, strikethrough, etc.), and open/save dialog boxes. However, the remaining features would still need to be built. One of those features would be spelling and grammar checking, which are staple features of any word processing application.
Spelling correction, along with autocorrect and grammar checking, has been integrated into word processors since 1992, when Microsoft added it to Microsoft Word. While Microsoft Word is the prominent word processing app on the Mac, it is not the only one. Apple introduced its own word processor as part of the iWork suite. This app is called Pages.
Pages has become an ever-present application that works across Apple's platforms, including macOS, iOS, iPadOS, and visionOS. As you might expect, Pages does include the ability to perform spelling and grammar checking. These work quite well, but this may not cover all situations. For other situations, the new "Writing Tools" may become useful. Let us look at those next.
Writing Tools is a set of functions that allows you to perform a number of actions. These actions include:
Proofreading
Rewriting
Summarization
Key Points
List Creation
Table Creation
Writing Tools is available system-wide in any application that supports Apple's standard controls. This is a boon in that the features are available across the operating systems. This means that you can easily use the features not only in Apple's own apps, but also in third-party apps. Before we dive into each function, let us look at how to access Writing Tools.
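For developers, little or no work should be required to pick up Writing Tools, since it rides along with the system text controls. Below is a minimal sketch of how a third-party iOS app might opt into (or limit) the feature; it assumes the writingToolsBehavior property added to UITextView in the iOS 18 SDK, so treat it as illustrative rather than definitive and check the current documentation before relying on it.

```swift
import UIKit

// A sketch of opting a standard text control into Writing Tools.
// Assumes the iOS 18 SDK, where UITextView exposes a
// `writingToolsBehavior` property on its text-input traits.
final class EditorViewController: UIViewController {
    private let textView = UITextView()

    override func viewDidLoad() {
        super.viewDidLoad()
        textView.frame = view.bounds
        textView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        textView.font = .preferredFont(forTextStyle: .body)

        if #available(iOS 18.0, *) {
            // .complete allows the full experience (proofread, rewrite,
            // summarize, and so on); .limited or .none would restrict it.
            textView.writingToolsBehavior = .complete
        }

        view.addSubview(textView)
    }
}
```

In most cases you should not even need this much; any app that simply uses the system text views should get Writing Tools by default.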
Invoking Writing Tools
The way that you invoke Writing Tools is quite straightforward. Simply perform the following steps:
Select the block of text you want to use Writing Tools on.
Right-click on the text.
Hover over the "Writing Tools" menu option.
Select the tool that you want to use.
Let us look at each of the tools in turn, starting with Proofreading.
Proofreading
When you select "Proofread", the highlighted text will be checked for both spelling and grammar. When the check is complete, there will be a popup that will show you the changes that have been made, with said changes underlined in red. The popup toolbar will also have a button with three lines and a left arrow. This button will allow you to easily switch between the original text and the replaced text.
The total number of changes will be shown in a toolbar, so you know whether or not anything has been changed. Along with this, you can also switch between the individual changes, which will allow you to review each change individually. If you like the changes, you can click on the "Done" button; however, if you do not like the changes, you can click on the "Revert" button, and the changed text will be reverted.
Writing Styles
There may be occasions when you want to adjust the tone of some text. This could be because your writing style is a bit relaxed and you need something a bit more professional, or it could be that you think the text needs to be a bit more user-friendly. There is a feature designed just for this type of situation. You can convert text into three different styles: Friendly, Professional, or Concise.
The manner in which this is accomplished is similar to using Writing Tools; you perform the following steps:
Select the block of text that you want to convert.
Right-click on the text.
Select the "Writing Tools" menu item.
Select the writing style you want to use.
Just like Proofreading, you will be able to see the changes made and flip back and forth between the versions. Writing Tools is able to perform a few more actions, like List Creation.
Create a List
Being able to proofread and change the writing style of the text is quite useful. Yet, there may be times when you wish to be able to change some text around. As an example, you may have some steps that you initially thought might be concise enough to have in a paragraph, but then realize it would be better to have it as a numbered list. Let us say that you have the following text as instructions:
Select the text you want to convert, right-click on the text to bring up the menus, click on the "Writing Tools" menu item, select the "Make List" option.
This would be easy enough to follow, but it would look better as a numbered list. To accomplish this, you can actually use the above steps and it should result in something like this:
Select the text
Right-click on the text
Bring up the menus
Click on "Writing Tools"
Select "Make List"
Now, this is not exactly what was intended. Therefore, you would need to convert it to a numbered list. If you use Notes, this is easy enough to accomplish by going to "Format" -> "Numbered List", and it will be converted for you. This is currently a limitation of Apple Intelligence: it can only make bulleted lists. I hope that there will be a future option to select the type of list to create.
Summarization
When you create a large body of text you may also want to be able to quickly provide a brief overview. You can easily write out a brief summary. This approach might work well for a couple of pages, but if you have a 10-page item, it might be nicer to have it summarized for you. This is entirely possible to do with Writing Tools. To summarize some text, perform the following steps:
Select the text you want to summarize.
Right-click on the text to bring up the menu.
Select the "Writing Tools" menu item.
Select "Summarize".
I performed a test using my introduction article about Apple Intelligence. That article is just over 4700 words and 228 paragraphs. Apple Intelligence reduced the entire article down to the following:
Artificial Intelligence (AI) aims to create machines that can think and act like humans, but current technology is far from achieving this. AI systems use neural networks to process data and make decisions, with training methods like supervised and reinforcement learning helping them learn and improve. Despite its potential, AI has yet to meet the idealistic depiction of fully conscious machines, and its use cases vary from automated cleaning to image generation.
Artificial Intelligence (AI) is a tool that can be used for both positive and negative purposes. Large Language Models (LLMs) and Image Generators are two examples of AI technologies that can be used for various tasks, including generating text and images. Apple has been working on its own AI technologies, known as Apple Intelligence, which prioritizes privacy by processing requests on-device or on Apple’s Private Cloud Compute platform.
Apple’s Private Cloud Compute service protects user data through target diffusion, which anonymizes requests and prevents replay attacks. Apple Intelligence, powered by Private Cloud Compute, will be available on select devices starting in late 2024, with some features not available until 2025.
Apple Intelligence requires Apple Silicon Macs, iPads with M1 or newer, and iPhones 15 Pro or Pro Max or newer.
Given everything that I wrote in that article, I do not think that the summary is all that good. It is missing some key information, but then again, maybe I would simply have chosen different points to highlight in a summary.
Table Creation
From time to time, you may have some data in a format that would look better in a table. Here is an example of some data that was used within my iPhone 16 Pro Max review.
| Device | Chip | CPU Single-Core | CPU Multi-Core | GPU (Metal) |
| --- | --- | --- | --- | --- |
| iPhone 16 Pro Max (2024) | A18 Pro | 3497 | 8581 | 32822 |
| 12.9-inch iPad Pro (2024) | M4 | 3585 | 12603 | 55769 |
| iPhone 15 Pro Max (2023) | A17 Pro | 2749 | 6713 | 27661 |
| 14-inch MacBook Pro (2023) | M2 Max | 2707 | 15148 | 127761 |
| Mac Studio (2022) | M1 Max | 2439 | 12825 | 103224 |
| 6th generation iPad (2021) | A15 Bionic | 2157 | 5285 | 20183 |
| Mac mini (2020) | M1 | 2394 | 8810 | 34575 |
If I attempted to create a table from the data, this is what was previewed:
As you can see, Apple Intelligence completely missed the mark. It added a column that was not present, the header row seemed to be duplicated, and the first row of data was ignored. When it was not formatted properly, I thought that maybe replacing the tabs with commas might allow it to be formatted properly, but it was the same result.
I then thought that maybe there were too many rows, so I opted to only use three rows of data. When I did that, I got the following popup:
The fact that the table could not be created properly, and that it does not seem to understand that the text I have is in English, means that, at least as of this writing, the "Make Table" functionality is not helpful or useful in any way.
Closing Thoughts on Writing Tools
The new Apple Intelligence Writing Tools can be useful in some situations, but not all. If you need to proofread a block of text, Writing Tools will accomplish the task. The same goes for making a list, provided that you want a bulleted list, and not a numbered one.
Writing Tools is able to rewrite a block of text using one of three styles (Friendly, Professional, or Concise), depending on your needs.
Writing Tools is available in any application that uses Apple's standard controls, like Pages, Notes, and even Xcode. However, it is not limited to Apple's own apps; any third-party app that uses a text field should also have access to Writing Tools.
Apple Intelligence should be available on iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, on any device that has an M1 or newer, as well as the iPhone 15 Pro/Pro Max and iPhone 16/Plus/Pro/Pro Max.
Be sure to check out all of the other articles in the series:
Today, Apple unveiled the final new release related to the Mac, this time the MacBook Pro. As expected, the new MacBook Pros have the M4, M4 Pro, and the newly unveiled M4 Max.
Display and Camera
At the top of the display is the notch and within the notch is the camera. There is a new 12 Megapixel Center Stage camera. Center Stage is intended to keep you and everyone else around you in frame as much as possible. This camera also supports Desk View, so you can display what is happening on your physical desktop while in a FaceTime call.
The display on the MacBook Pro is a Liquid Retina XDR display. It has always come with a glossy finish, but that now changes. There is now a Nano Texture option. Much like the other Nano Texture displays, this is designed to reduce glare in bright light situations. This will cost an extra $150, but if you are frequently in areas with bright light, it might be worth looking at.
M4, M4 Pro, and M4 Max
The MacBook Pros are powered by Apple Silicon and can be configured with three different processors, the M4, the M4 Pro, and the M4 Max. There are a few configuration options for each model.
M4
The M4 comes in a 10-Core CPU and 10-Core GPU model. This can be configured with 16GB, 24GB, or 32GB of memory. The base model comes with 512GB of storage, and this can be configured with either 1TB or 2TB of storage. The maximum memory bandwidth for the M4 is 120 gigabytes per second.
According to Apple, the MacBook Pro with M4 delivers:
- Up to 7x faster image processing in Affinity Photo when compared to the 13‑inch MacBook Pro with Core i7, and up to 1.8x faster when compared to the 13-inch MacBook Pro with M1.
- Up to 10.9x faster 3D rendering in Blender when compared to the 13‑inch MacBook Pro with Core i7, and up to 3.4x faster when compared to the 13‑inch MacBook Pro with M1.
- Up to 9.8x faster scene edit detection in Adobe Premiere Pro when compared to the 13‑inch MacBook Pro with Core i7, and up to 1.7x faster when compared to the 13‑inch MacBook Pro with M1.
M4 Pro
The M4 Pro comes in two variants. The first is a 12-Core CPU with a 16-Core GPU; the second is a 14-Core CPU with a 20-Core GPU. Both models come with 24GB of unified memory and can be configured with 48GB. The M4 Pro models come with 512GB of storage and can be configured with 1TB, 2TB, or 4TB of storage. The maximum memory bandwidth for the M4 Pro is 273 gigabytes per second.
According to Apple, the MacBook Pro with M4 Pro delivers:
- Up to 4x faster scene rendering performance with Maxon Redshift when compared to the 16-inch MacBook Pro with Core i9, and up to 3x faster when compared to the 16-inch MacBook Pro with M1 Pro.
- Up to 5x faster simulation of dynamical systems in MathWorks MATLAB when compared to the 16-inch MacBook Pro with Core i9, and up to 2.2x faster when compared to the 16-inch MacBook Pro with M1 Pro.
- Up to 23.8x faster basecalling for DNA sequencing in Oxford Nanopore MinKNOW when compared to the 16-inch MacBook Pro with Core i9, and up to 1.8x faster when compared to the 16-inch MacBook Pro with M1 Pro.
M4 Max
The M4 Max is a new chip, released for the first time today. Much like the M4 Pro, the M4 Max comes in two variants. The first is a 14-Core CPU with a 32-Core GPU. This can only be configured with 36GB of unified memory. This memory has a maximum bandwidth of 410 gigabytes per second, which is nearly 3.5x the memory bandwidth of the M4, and 1.5x that of the M4 Pro.
The second variant is a 16-Core CPU with a 40-Core GPU. This starts at 48GB of unified memory, but can be configured with 96GB or 128GB. The memory bandwidth in this model is 546 gigabytes per second, which is about 4.5x that of the M4, 2x that of the M4 Pro, and 1.33x that of the 14-Core M4 Max variant.
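To make those comparisons concrete, here is a quick sanity check of the multipliers, using the bandwidth figures quoted above. This is just a small sketch in Swift; the numbers are the ones cited in this article, and the rounding explains the "nearly 3.5x" wording.

```swift
// Quick check of the memory-bandwidth multipliers quoted above (GB/s).
let m4 = 120.0        // M4
let m4Pro = 273.0     // M4 Pro
let m4Max14 = 410.0   // 14-Core M4 Max
let m4Max16 = 546.0   // 16-Core M4 Max

print(m4Max14 / m4)       // ≈ 3.42x the M4 ("nearly 3.5x")
print(m4Max14 / m4Pro)    // ≈ 1.50x the M4 Pro
print(m4Max16 / m4)       // ≈ 4.55x the M4
print(m4Max16 / m4Pro)    // = 2.0x the M4 Pro
print(m4Max16 / m4Max14)  // ≈ 1.33x the 14-Core M4 Max
```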
Both M4 Max variants come with 1TB of storage, but can be configured for 2TB, 4TB, or even 8TB of storage, depending on needs.
And the MacBook Pro with M4 Max enables:
- Up to 7.8x faster scene rendering performance with Maxon Redshift when compared to the 16-inch MacBook Pro with Intel Core i9, and up to 3.5x faster when compared to the 16-inch MacBook Pro with M1 Max.
- Up to 4.6x faster build performance when compiling code in Xcode when compared to the 16‑inch MacBook Pro with Intel Core i9, and up to 2.2x faster when compared to the 16‑inch MacBook Pro with M1 Max.
- Up to 30.8x faster video processing performance in Topaz Video AI when compared to the 16‑inch MacBook Pro with Intel Core i9, and up to 1.6x faster when compared to the 16-inch MacBook Pro with M1 Max.
Connectivity and Ports
Similar to the M4 Mac mini, there is a difference in ports between the M4 and the M4 Pro, not in the number, but in the capability of the USB-C ports. With the M4, you get three Thunderbolt 4 ports (up to 40 gigabits per second), while the M4 Pro and M4 Max devices come equipped with three Thunderbolt 5 ports (up to 120 gigabits per second). This is the same setup as the Mac mini with M4 and M4 Pro.
The number of displays supported varies depending on the M4 version. The M4 and M4 Pro can support up to two external displays at up to 6K resolution at 60Hz over Thunderbolt, or one display up to 6K at 60Hz over Thunderbolt and one display up to 4K at 144Hz over HDMI. The HDMI port is also capable of supporting one display at 8K resolution at 60Hz, or one display at 4K at 240Hz.
The M4 Max can drive up to four external displays: three displays up to 6K at 60Hz over Thunderbolt, and one up to 4K at 144Hz over HDMI. Alternatively, you can have two external displays up to 6K resolution at 60Hz over Thunderbolt, and either one external display up to 8K resolution at 60Hz, or one display up to 4K at 240Hz, on the HDMI port.
Along with the Thunderbolt ports, you also get an SDXC card reader, a dedicated HDMI port, and a 3.5mm headphone jack.
The Wi-Fi in all models is Wi-Fi 6E and support for Bluetooth 5.3 is also included.
Pricing and Availability
The M4 MacBook Pro comes in the same two sizes of 14-inch and 16-inch. The pricing differs for each model and chip. For the 14-inch you can get an M4 model starting at $1599. The M4 Pro model starts at $1999, and the M4 Max starts at $3199.
The 16-inch starts at $2499 for the M4 Pro with 14-Core CPU, 20-Core GPU, 24GB of unified memory, and 512GB of storage. The 16-inch M4 Max version starts at $3499 for a 14-core CPU with a 32-Core GPU, 36GB of unified memory, and 1TB of storage.
All of the M4 line of MacBook Pros are available to order today and will be available starting November 8th.
Closing Thoughts
The MacBook Pros continue to be the workhorses of the Apple laptop line. Many users do a ton of work on these devices, and now, with M4 processors, they should be able to accomplish even more than before. The new M4 Max adds even more horsepower to the laptops, and these are welcome upgrades. The lineup is a bit strange, but for today's modern Apple, it makes sense because it is not too dissimilar to the iPhone Pro line of devices. If you have an Intel-based MacBook Pro, now would be a great time to update your MacBook Pro.