Apple event for Nov 10 - ARM Macs

TOS
i'm seeing conflicting reports on the actual power of these m1 machines, some saying they're way more powerful than their predecessors, some saying they're totally meh

so what's the deal?
ukimalefu want, but shouldn't, may anyway
TOS posted:
i'm seeing conflicting reports on the actual power of these m1 machines, some saying they're way more powerful than their predecessors, some saying they're totally meh

so what's the deal?


The report I saw about "meh" was from an AMD forum, and the "more powerful" report I saw was from a Mac rumors site, so consider the source.

And these are "early" "reviews". I don't know how they're getting their info on the new Macs.

Apple said at the event "order today, available next week". So no one has an officially shipping unit yet.
maurvir Steamed meat popsicle
I have no doubt the M1 is a powerful CPU, and that it compares well with Intel's offerings. However, Apple was incredibly vague about performance measurement, and I can easily see how it would be faster than 98% of Windows laptops if you consider that these range from the $150 Walmart specials all the way up to the $4k gaming systems from Dell/Alienware.

I wish Apple would have been a bit more specific, but my guess is that they weren't because the M1 slots in right about where the current-gen Intel processors are today. It would have been silly to actually downgrade performance with the new architecture, particularly since they are going to have to break out emulation again, but the fact that Apple said "98%" and not "every" tells me that they haven't dethroned x86 just yet.

That said, I suspect there is more performance to be wrung from these things, so it will be interesting to see how this plays out.
ukimalefu want, but shouldn't, may anyway
What 98% could mean, IMHO

98% of all laptops? no stick fiddling way

It's either all laptops at the same price, in which case it's not 98% but maybe better than "most".

Or better than most "entry-level, super-thin laptops" like the MacBook Air, though the PC laptops would cost significantly less.

There was a report that said "better than most Macs". I think maybe that's closer to the truth.

Everybody still needs to wait for actual reviews of shipping units, running Big Sur, and apps coded for M1 chips.
ukimalefu posted:
What 98% could mean, IMHO

98% of all laptops? no stick fiddling way

It's either all laptops at the same price, in which case it's not 98% but maybe better than "most".

Or better than most "entry-level, super-thin laptops" like the MacBook Air, though the PC laptops would cost significantly less.

There was a report that said "better than most Macs". I think maybe that's closer to the truth.

Everybody still needs to wait for actual reviews of shipping units, running Big Sur, and apps coded for M1 chips.

It can't be 'all laptops of the same price'. Apple's pricing has, for decades now, given consumers less performance than you could get for the same amount on the PC side; I highly doubt that'd change all of a sudden now.

Regardless, it's a moot point, as you can't, and never will be able to, compare the M1 to the other big-name competitors, since the chip is locked down in the Apple ecosystem. It's not like you can throw it in a Linux test bench, test it, swap the bench out for team blue or red, and retest. The ONLY tests and PR claims they should be allowed to make are vs. their own ecosystem-locked machines (which, coincidentally, are the only tests they report the hardware and testing procedure for; doing so for the non-Apple tests would ream their unicorn marketing BS hard).

Last edited by Aaron_R on Fri Nov 13, 2020 4:44 pm.

Aaron_R
ukimalefu posted:
TOS posted:
i'm seeing conflicting reports on the actual power of these m1 machines, some saying they're way more powerful than their predecessors, some saying they're totally meh

so what's the deal?


The report I saw about "meh" was from an AMD forum, and the "more powerful" report I saw was from a Mac rumors site, so consider the source.

And these are "early" "reviews". I don't know how they're getting their info on the new Macs.

Apple said at the event "order today, available next week". So no one has an officially shipping unit yet.

I agree with the AMD forum, but probably not for what they are whining about. My beef, like theirs, regards Apple's graphics claims.
Quote:
The world’s fastest integrated graphics in a personal computer.

Testing conducted by Apple in October 2020 using preproduction 13‑inch MacBook Pro systems with Apple M1 chip and 16GB of RAM using select industry-standard benchmarks. Comparison made against the highest-performing integrated GPUs for notebooks and DESKTOPS commercially available at the time of testing.

What total BS. Like I said, Intel has spent over a decade now trying to overcome AMD's APUs with their baked-in Radeon graphics, to no avail (with graphics performance up to 2x that of Intel's similarly priced APUs). The idea that Apple, with no history or experience in that field, would somehow magically beat the field leader in integrated graphics (esp. claiming they did so against the desktop APU variants)? Yeah, okay, and everything that comes out of the White House is true facts.
Robert B. Dandy Highwayman
So integrated graphics are good now?
ukimalefu want, but shouldn't, may anyway
Robert B. posted:
So integrated graphics are good now?


I guess... it depends?

what's in the PS5 and Xbox... whatever the latest one is?
Aaron_R
Apple's claims make sense to me now and I was all wrong!

The key is their twisted definition of what an "Integrated Graphics System" is. Traditionally, that term referred to having both the CPU & GPU dies on the same socket. Apple took that definition and redefined it to include not only the CPU & GPU die but also the machine's memory subsystem, thus rebranding what would traditionally be referred to as an "Embedded System" into what they are mistakenly calling an "Integrated Graphics System".

When you realize this definition shenanigan and then compare current-market embedded systems to the M1, their marketing claims make perfect sense. Shame on them for twisting definitions to make things appear more than they are. No wonder they refused to release the test rigs used; it's all there in the small print after you decipher it. :mad:

Quote:
6. Testing conducted by Apple in October 2020 using preproduction 13‑inch MacBook Pro systems with Apple M1 chip and 16GB of RAM using select industry-standard benchmarks. Comparison made against the highest-performing integrated GPUs for notebooks and desktops commercially available at the time of testing. Integrated GPU is [wrongfully] defined as a GPU located on a monolithic silicon die along with a CPU and memory controller, behind a unified memory subsystem. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Pro.

ukimalefu want, but shouldn't, may anyway
Aaron_R posted:
Apple's claims make sense to me now and I was all wrong!

The key is their twisted definition of what an "Integrated Graphics System" is. Traditionally, that term referred to having both the CPU & GPU dies on the same socket. Apple took that definition and redefined it to include not only the CPU & GPU die but also the machine's memory subsystem, thus rebranding what would traditionally be referred to as an "Embedded System" into what they are mistakenly calling an "Integrated Graphics System".

When you realize this definition shenanigan and then compare current-market embedded systems to the M1, their marketing claims make perfect sense. Shame on them for twisting definitions to make things appear more than they are. No wonder they refused to release the test rigs used; it's all there in the small print after you decipher it. :mad:

Quote:
6. Testing conducted by Apple in October 2020 using preproduction 13‑inch MacBook Pro systems with Apple M1 chip and 16GB of RAM using select industry-standard benchmarks. Comparison made against the highest-performing integrated GPUs for notebooks and desktops commercially available at the time of testing. Integrated GPU is [wrongfully] defined as a GPU located on a monolithic silicon die along with a CPU and memory controller, behind a unified memory subsystem. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Pro.



;) :p

Again, wait for actual reviews, from your favorite website and a couple of others; if 2 of 3 agree, or 3 out of 5, that's what you should believe. Or that's what I'd like to do.
Pariah Know Your Enemy
I would not be surprised if the new Apple chips were faster than the overwhelming majority of laptops on the market. Most laptops sold are slow as human waste. Heck, my 8-year-old 3770 is faster than most consumer-oriented hardware being sold today.
Aaron_R
I'm not surprised if that is the case on the CPU side as well; it's just their outlandish claim about the graphics side that set me off.
maurvir Steamed meat popsicle
Is anyone honestly surprised that an Apple performance claim turned out to be blatherskite on closer inspection?

It's really a shame, too, because I suspect the M1 will actually be an impressive chip for this first generation. That it won't absolutely stomp an equivalent Intel-based machine with a discrete GPU isn't a problem, it's just a mild disappointment - since Par is right, most laptops use embedded graphics and suck performance-wise.
Mr. T Dude extraordinaire
TOS posted:
i'm seeing conflicting reports on the actual power of these m1 machines, some saying they're way more powerful than their predecessors, some saying they're totally meh

so what's the deal?

The M1 is essentially a beefed-up A14. The A14 (a phone chip) has already been extensively reviewed and benchmarked. It is faster per-core than anything Intel has to offer, even on the desktop, and trails AMD by a hair. Apple claims the M1 will be faster than even that, and they're probably right.

This is significant because single-core performance is precisely the area where x86 has been stagnant for years. Apple has done it in a phone chip --seemingly by taking advantage of the inherent advantages of RISC to ratchet up ILP (and in turn, IPC) into the stratosphere. In theory, this approach won't clock as well, but that's largely irrelevant as clock speeds have pretty much hit a wall anyway --die shrinks be damned.

The future of improved single-core performance lies squarely with IPC improvements which are generally made through more complex logic (deeper branch prediction, better out-of-order execution to resolve data hazards in-flight, etc) and a bunch more transistors (larger cache, single-cycle multiplication, etc). In the future, as Moore's law approaches its theoretical limit, 3D architectures (multi-layer silicon, not graphics) will come into play as well.
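
To make the IPC point concrete, here's a toy C snippet (my own illustration, nothing Apple-specific --the numbers are made up for the demo): the same number of multiplies arranged as one long dependency chain vs. four independent chains. A wide out-of-order core can overlap the independent chains, which is exactly the kind of parallelism a deep/wide design is built to extract.
Code:
#include <stdio.h>
#include <time.h>

#define N 200000000L

int main(void) {
    volatile double seed = 1.0000000001; /* volatile keeps the loops honest */
    clock_t t0, t1;

    /* One serial dependency chain: every multiply must wait on the
       previous result, so useful instructions per cycle stay low. */
    double a = 1.0;
    t0 = clock();
    for (long i = 0; i < N; i++)
        a *= seed;
    t1 = clock();
    printf("1 chain:  %.2fs (a=%f)\n", (double)(t1 - t0) / CLOCKS_PER_SEC, a);

    /* Four independent chains: an out-of-order core can keep several
       multipliers in flight at once, so the same total work usually
       finishes several times faster. */
    double b0 = 1.0, b1 = 1.0, b2 = 1.0, b3 = 1.0;
    t0 = clock();
    for (long i = 0; i < N; i += 4) {
        b0 *= seed; b1 *= seed; b2 *= seed; b3 *= seed;
    }
    t1 = clock();
    printf("4 chains: %.2fs (b=%f)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC, b0 * b1 * b2 * b3);
    return 0;
}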

The thing is, Apple has the upper hand for precisely the same reasons the PPC architecture was so strong when it was first unveiled. RISC architectures are far better suited for the kind of stuff Apple's been doing and they are easier to work with from an engineering standpoint. And even if none of that were true, the simple fact is that ARM is the dominant architecture overall in the industry, and that's where all the R&D capital is flowing.

Maybe Intel should've taken a page out of ARM's playbook and offered a similar licensing model --maybe they should've played ball and built the phone chip Apple asked them for in the early 2000's. Hindsight, it seems, is 20/20. Regardless, Intel missed the boat, and I predict x86 as a platform will be mostly replaced within a decade from now --not just on Macs. Intel will probably remain as a TSMC competitor, but they really need to start getting better as a fab. 14nm in 2020? Seriously?

EDIT: I also wanted to add that Apple's impressive single-core performance, large cache, and the bi-endian nature of ARM will all help in terms of x86 emulation. I suspect that x86-on-ARM will probably be somewhat more efficient than Rosetta v1 --and that wasn't really all that bad in my experience. Meanwhile, the bi-endianness will help a lot in the transition to native apps.
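
And since "endianness" can sound abstract, here's a tiny C sketch (my own toy --Rosetta's internals aren't public) of the byte-order fix-up an emulator would need on every guest memory access if guest and host disagreed. x86 is little-endian and ARM can run little-endian too, which is exactly why a translator on Apple's hardware gets to skip this entirely:
Code:
#include <stdint.h>
#include <stdio.h>

/* Swap a 32-bit value between big- and little-endian byte order. */
static uint32_t bswap32(uint32_t x) {
    return (x >> 24) | ((x >> 8) & 0x0000FF00u)
         | ((x << 8) & 0x00FF0000u) | (x << 24);
}

int main(void) {
    uint32_t guest_word = 0x12345678u; /* a word as the guest stored it */

    /* Detect host byte order at runtime. */
    uint16_t probe = 1;
    int host_is_little = (*(uint8_t *)&probe == 1);

    /* If the byte orders differed, every guest load/store would pay for
       a swap like this; matching orders make it a no-op. */
    uint32_t fixed = host_is_little ? guest_word : bswap32(guest_word);
    printf("host is %s-endian, word = 0x%08X\n",
           host_is_little ? "little" : "big", fixed);
    return 0;
}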

Last edited by Mr. T on Sat Nov 14, 2020 2:15 am.

maurvir Steamed meat popsicle
Strictly speaking, the CISC vs RISC war ended a long time ago, and it ended in favor of wide superscalar machines running a CISC or RISC front-end decoder that cracks everything into internal micro-ops. The stuff that you feed the CPU has almost no relationship to what actually goes on inside the chip compared to yesteryear - the instructions are almost pseudoinstructions.

So, what we are seeing now is the difference an architecture can make, not an instruction set.
Mr. T Dude extraordinaire
maurvir posted:
Strictly speaking, the CISC vs RISC war ended a long time ago, and it ended in favor of wide superscalar machines running a CISC or RISC front-end decoder that cracks everything into internal micro-ops. The stuff that you feed the CPU has almost no relationship to what actually goes on inside the chip compared to yesteryear - the instructions are almost pseudoinstructions.

So, what we are seeing now is the difference an architecture can make, not an instruction set.

That is true. However, I think it's becoming relevant again.

What you describe did indeed take place, and it's what helped x86 catch up to PPC by the late 90's --along with clock speeds, die shrinks (extra transistors for that precious decode logic) and the tried-and-true method of throwing cash at the problem.

Apples-to-apples, though, I would argue that RISC architectures are inherently easier to work with, especially when you're building the 16-billion-transistor chips Apple's making. There's a more direct relationship between the software instruction and the hardware implementation of that instruction.

The thing is, there used to be other ways to improve performance, so having an optimal ISA was not a big deal. I would argue that it is becoming relevant again. Still not the most important thing... The most important thing is ARM's licensing model, and the boatload of cash that's going to ARM architectures as a whole.
ukimalefu want, but shouldn't, may anyway
maurvir posted:
Is anyone honestly surprised that an Apple performance claim turned out to be blatherskite on closer inspection?


Is it? Where do you get this? Links please? I mean, I'd like to know the exact, scientifically proven size of said blatherskite.

Don't ALL electronic device makers say the same kind of human waste?

-
maurvir Steamed meat popsicle
ukimalefu posted:
maurvir posted:
Is anyone honestly surprised that an Apple performance claim turned out to be blatherskite on closer inspection?


Is it? Where do you get this? Links please? I mean, I'd like to know the exact, scientifically proven size of said blatherskite.

Don't ALL electronic device makers say the same kind of human waste?

-


My comment was not meant to imply Apple was, in any way, unique in this regard. :squint:
Robert B. Dandy Highwayman
Maybe wait for the M2.
I am pretty sure everyone during the entire livestream was careful to only compare the M1's GPU to integrated graphics. They never once said it would compete against discrete GPUs.
TOS
and apparently adobe apps aren't supported yet?

kind of makes them fairly useless for a ton of their potential buyers, no?
ukimalefu want, but shouldn't, may anyway
TOS posted:
and apparently adobe apps aren't supported yet?

kind of makes them fairly useless for a ton of their potential buyers, no?


I believe they said they're coming, and Photoshop will come out early next year...

... if you plan to buy them... I mean... the M1-native apps. "Universal" is what they're calling them, right?

They also say your current apps will work just fine. I mean, they said that about all Mac apps, not Adobe in particular.

Last edited by ukimalefu on Mon Nov 16, 2020 7:58 pm.

dv
TOS posted:
and apparently adobe apps aren't supported yet?

kind of makes them fairly useless for a ton of their potential buyers, no?

Same story as last time. Give it 8-12 months.
Pariah Know Your Enemy
TOS posted:
and apparently adobe apps aren't supported yet?

kind of makes them fairly useless for a ton of their potential buyers, no?

Professionals are not early adopters.
Mr. T Dude extraordinaire
This is pretty amazing, if true.

In my experience, the PPC to x86 transition wasn't all that bad back in 2006. In my perception, that's about when low-end computing power became sufficient for 85% of typical workloads. It also helped that Rosetta 1 was surprisingly efficient. I recall around 50% of native, which was surprisingly tolerable in actual use.

By contrast, the 68k to PPC transition was often brutal. Not only were typical workloads heavily CPU-bound back then, but Apple's emulation engine was utter crap. It was so bad, Connectix sold a replacement implementation called "Speed Doubler", which thankfully worked as advertised. Even so, fat binaries were all the rage back then...
ukimalefu want, but shouldn't, may anyway
Adobe releases Arm beta version of Photoshop for Windows and macOS

Quote:
Adobe is releasing Arm versions of Photoshop for Windows and macOS today. The beta releases will allow owners of a Surface Pro X or Apple’s new M1-powered MacBook Pro, MacBook Air, and Mac mini to run Photoshop natively on their devices. Currently, Photoshop runs emulated on Windows on ARM, or through Apple’s Rosetta translation on macOS.

Native versions of Photoshop for both Windows and macOS should greatly improve performance, just in time for Apple to release its first Arm-powered Macs. While performance might be improved, as the app is in beta there are a lot of tools missing. Features like content-aware fill, patch tool, healing brush, and many more are not available in the beta versions currently.

Pariah Know Your Enemy
Mr. T posted:
This is pretty amazing, if true.

In my experience, the PPC to x86 transition wasn't all that bad back in 2006. In my perception, that's about when low-end computing power became sufficient for 85% of typical workloads. It also helped that Rosetta 1 was surprisingly efficient. I recall around 50% of native, which was surprisingly tolerable in actual use.

By contrast, the 68k to PPC transition was often brutal. Not only were typical workloads heavily CPU-bound back then, but Apple's emulation engine was utter crap. It was so bad, Connectix sold a replacement implementation called "Speed Doubler", which thankfully worked as advertised. Even so, fat binaries were all the rage back then...

If memory serves, the Connectix product that helped early PPC Macs was RAM Doubler. The big difference was that with RD you could turn on file mapping without creating a hard-drive-sucking virtual memory file. File mapping was an important but little-understood advancement that PPC brought to the table and that 68k lacked.
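
For the younger folks, "file mapping" here means backing code/data pages directly with the application file instead of a separate swap file. A rough modern-POSIX sketch of the idea (my own illustration --the classic Mac OS APIs were different, mmap is just the closest present-day analogue):
Code:
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the file read-only: pages fault in from the file on demand,
       and the kernel can drop clean pages and re-read them later, so
       no dedicated virtual-memory file is needed to back them. */
    const char *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    printf("mapped %lld bytes; first byte: 0x%02x\n",
           (long long)st.st_size, (unsigned char)p[0]);
    munmap((void *)p, (size_t)st.st_size);
    close(fd);
    return 0;
}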
obvs Socialist isn't an epithet; it's a badge.
RAM Doubler and Speed Doubler were separate products.
Pariah Know Your Enemy
obvs posted:
RAM Doubler and Speed Doubler were separate products.

Yep. A product called RAM Charger was also available.
Mr. T Dude extraordinaire
We're both right, in a way :)
Quote:
RAM Doubler: The first product to combine compression with virtual memory ... RAM Doubler was something of a case study for porting Macintosh products to the PowerPC processor

Quote:
Speed Doubler: ... combines an enhanced disk cache, better Finder copy utility, and a dynamically recompiling 68K-to-PowerPC emulator, which is faster than both the interpretive emulator that shipped in the original PowerPCs and the dynamically recompiling emulator that Apple shipped in later machines.

Interpreter emulators are a worst-case-scenario architecture. The entire CPU cycle must be emulated step-by-step per instruction. Figure 1/12th native performance if implemented in assembly. Dynamic recompilers maintain an emitted code cache as they execute, speeding things up quite a bit. Taking such a relatively naive interpreter implementation and reworking it as a dynamic recompiler will give you probably 1/6th native performance. How is Apple able to achieve 75% native? Magic, plus a bunch of very well-paid engineers.
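
To put some code behind those numbers, here's a bare-bones interpreter loop for a made-up three-instruction guest ISA (my own toy --not how Apple's 68K emulator or Rosetta actually work). Every guest instruction pays for a host-side fetch, a decode, and a dispatch branch; a dynamic recompiler pays those costs once per block of guest code, caches the emitted host code, and jumps straight into it on later executions --hence the big speedup:
Code:
#include <stdint.h>
#include <stdio.h>

enum { OP_LOADI, OP_ADD, OP_HALT };  /* toy guest opcodes */

typedef struct { uint8_t op, dst, src; int32_t imm; } Insn;

int main(void) {
    /* Guest program: r0 = 2; r1 = 40; r1 += r0; halt. */
    Insn prog[] = {
        { OP_LOADI, 0, 0, 2 },
        { OP_LOADI, 1, 0, 40 },
        { OP_ADD,   1, 0, 0 },
        { OP_HALT,  0, 0, 0 },
    };
    int32_t reg[4] = { 0 };
    size_t pc = 0;

    for (;;) {
        Insn i = prog[pc++];          /* fetch -- paid per instruction */
        switch (i.op) {               /* decode + dispatch -- ditto */
        case OP_LOADI: reg[i.dst] = i.imm;          break;
        case OP_ADD:   reg[i.dst] += reg[i.src];    break;
        case OP_HALT:  printf("r1 = %d\n", reg[1]); return 0;
        }
    }
}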
Pariah Know Your Enemy
Mr. T posted:
We're both right, in a way :)
Quote:
RAM Doubler: The first product to combine compression with virtual memory ... RAM Doubler was something of a case study for porting Macintosh products to the PowerPC processor

Quote:
Speed Doubler: ... combines an enhanced disk cache, better Finder copy utility, and a dynamically recompiling 68K-to-PowerPC emulator, which is faster than both the interpretive emulator that shipped in the original PowerPCs and the dynamically recompiling emulator that Apple shipped in later machines.

Interpreter emulators are a worst-case-scenario architecture. The entire CPU cycle must be emulated step-by-step per instruction. Figure 1/12th native performance if implemented in assembly. Dynamic recompilers maintain an emitted code cache as they execute, speeding things up quite a bit. Taking such a relatively naive interpreter implementation and reworking it as a dynamic recompiler will give you probably 1/6th native performance. How is Apple able to achieve 75% native? Magic, plus a bunch of very well-paid engineers.

I seem to recall that Apple acquired a company that had some code morphing technology they used in their emulation.
avkills
So far the benchmarks I have seen paint the M1 in a very good light. Single-core performance is really good. And the integrated GPU looks to be crushing Intel's and AMD's on-die offerings. At this point I am hoping Apple throws a kink into Nvidia's and/or AMD's plans and releases an MPX GPU card for the Mac Pro. Face it, Metal performance is all anyone should care about on macOS; Apple would be wise to release a discrete Metal GPU that just hammers AMD into the dirt.
maurvir Steamed meat popsicle
The M1 is a solid chip, especially for what is (sorta) a first-generation device. The fact that it is more or less neck and neck with the current x86-64 champs is a real win for Apple.

However, let's be realistic - the graphics performance is being compared to integrated graphics processors, which everyone acknowledges are trash. Let's see how it does against discrete GPUs before we all start going nuts and dancing naked in the streets.
TOS
avkills posted:
So far the benchmarks I have seen paint the M1 in a very good light. Single-core performance is really good. And the integrated GPU looks to be crushing Intel's and AMD's on-die offerings. At this point I am hoping Apple throws a kink into Nvidia's and/or AMD's plans and releases an MPX GPU card for the Mac Pro. Face it, Metal performance is all anyone should care about on macOS; Apple would be wise to release a discrete Metal GPU that just hammers AMD into the dirt.


would it be worth it for apple?
ukimalefu want, but shouldn't, may anyway
Get your new MacBook for $50 off!

https://www.theverge.com/good-deals/202 ... unt-50-off

WHAT A DEAL!

That's sarcasm, of course. But maybe use that $50 to get the USB-C to USB-A hub you'll need, or some Bluetooth earphones.
avkills
maurvir posted:
The M1 is a solid chip, especially for what is (sorta) a first-generation device. The fact that it is more or less neck and neck with the current x86-64 champs is a real win for Apple.

However, let's be realistic - the graphics performance is being compared to integrated graphics processors, which everyone acknowledges are trash. Let's see how it does against discrete GPUs before we all start going nuts and dancing naked in the streets.


That does not make sense. The GPU on the M1 is integrated. There isn't a chance in hell it is going to beat any of "today's" discrete GPU offerings. It does, however, go neck and neck with some older discrete GPUs.
TOS posted:
avkills posted:
So far the benchmarks I have seen paint the M1 in a very good light. Single-core performance is really good. And the integrated GPU looks to be crushing Intel's and AMD's on-die offerings. At this point I am hoping Apple throws a kink into Nvidia's and/or AMD's plans and releases an MPX GPU card for the Mac Pro. Face it, Metal performance is all anyone should care about on macOS; Apple would be wise to release a discrete Metal GPU that just hammers AMD into the dirt.


would it be worth it for apple?


Yes, I think it would be. Was the Afterburner card "worth it" for Apple? I bet hardly anyone has bought it, as it only serves a niche market.
macnuke Afar
if they make it drop into a 16-lane PCIe slot, they will keep the cheese graters around for years more.

i realize the bus is slower, and the XEON quads are long in the tooth as well, but for many users it's more than enough speed and power.

the Afterburner card only worked with Apple ProRes or ProRes RAW video codecs

so yeah... that was about as niche as you can get.