
Linux-audio-user: Re: [ot-ish]: Live Distro For Mac

I’ve been screwing around with the elusive “linux low latency” for so long I forgot how to play music. I’m beginning to think it’s a myth.

Okay, enough complaining. Thanks to dcsimon for taking the time with the posted guide. Everything in this guide corresponds with my setup, save the following. Kernel config: how important is setting the default IO scheduler to CFQ? I'm running a dual processor.
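For reference, the active IO scheduler can be checked per device through /sys/block/&lt;dev&gt;/queue/scheduler, where the bracketed entry is the one in use. A minimal sketch (the sample string below mimics that file's format on kernels of this era, since the actual device name and scheduler list vary per machine):

```shell
# The scheduler file lists all compiled-in schedulers, with the active
# one in brackets, e.g. "noop anticipatory deadline [cfq]".
# On a real system: cat /sys/block/hda/queue/scheduler
# Extract the active scheduler from a sample of that format:
echo 'noop anticipatory deadline [cfq]' | grep -o '\[[a-z]*\]' | tr -d '[]'
```

To switch schedulers at runtime, root can echo a scheduler name into the same file (e.g. `echo cfq > /sys/block/hda/queue/scheduler`).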

Could this be hurting things? My audio card is sharing an IRQ with an unused USB port and ahci. Is that killing me? I did not recompile my own ALSA stuff, but I'm pretty sure it's been compiled with the correct flags (got it from CCRMA). I'm running Planet CCRMA's RT kernel on Fedora 7 (2.6.22.6-1.rt9.3.fc7.ccrmart), 1 GB RAM, dual Intel 2.1 GHz. Different guides have recommended setting priority for IRQs and timers in different ways, and I feel like I've tried them all. Even with setting high priority for chosen IRQs, jack, and timer threads, if I configure jack to get down to sub-20 msec, I get choppy audio, xruns, the whole crap. And mousing around with the UI clearly affects the xruns directly, so it's as if setting priorities hasn't changed anything. Any suggestions on how to determine what it is that is going wrong?


Thanks, -cmaren.

Question is: do you need sub-10 ms latency? Unless you're doing software monitoring with plugins active when recording on top of things, I don't think this is really necessary. I managed to get down to a nominal 1.3 ms but I realized in the end that it is useless to me. Because it makes the system a bit more unstable, I tried the default #frames, i.e. 1024 (still with realtime priority of course), and it is just flawless. Thanks to h/w monitoring, I have no practical latency... no, that's not true in fact.
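For reference, JACK's nominal buffering latency follows directly from the period settings: frames_per_period * periods / sample_rate. A quick sketch; note the 2-periods/48 kHz figures are my assumptions, chosen because 32 frames then reproduces the 1.3 ms quoted above (the post doesn't state the exact settings):

```shell
# Nominal JACK latency = frames_per_period * periods / sample_rate.
# 32 frames at 2 periods / 48 kHz gives the ~1.3 ms figure;
# the default 1024 frames lands around 42.7 ms.
for frames in 32 1024; do
  awk -v f="$frames" 'BEGIN { printf "%4d frames: %4.1f ms\n", f, f * 2 / 48000 * 1000 }'
done
```

This is why dropping from 1024 frames to very small buffers is such a dramatic (and destabilizing) change: the period size scales the latency linearly.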

I have some issue when I forget to bypass jamin (I have jamin plugged into ardour most of the time) and I record a new track on top of my session. The track waveform is delayed by a few tens of milliseconds. Maybe there's a way to compensate for jamin's latency? But pressing the jamin bypass button is enough. The only xruns I get now are when a jack client crashes miserably, and my system is up for days; I haven't rebooted for a long time since I switched back to kernel 2.6.22, I think (2.6.23 turned out not so good for me).

By the way, I have an Intel Core 2 Duo (E6600, i.e. 2x2.4 GHz). No issue with this.

I took a shower while considering this carefully. With hardware monitoring, I'll be able to record comfortably. But regardless of jamin, doesn't this mean the track always hits disk 30-40 msec behind?

I gathered from a product review for the M-Box, for example, that part of the recording workflow involves a hot key that nudges your track back 30 msec. Maybe this is standard practice for anyone who doesn't have a Mac and doesn't want to grind on the low latency thing? I'm interested, though, that you got it down so low and still decided you would rather just roll with hardware monitoring. But I'm curious how most people do. Thanks for posting.

But regardless of jamin, doesn't this mean the track always hits disk 30-40 msec behind?

Not to my knowledge.

I would have noticed that from the start. No, everything is on the beat. Of course, when I record say a bass line, I get random errors (I am human:)), but the error distribution is not systematically delayed by a few tens of milliseconds.

When I record a software like HG sync'ed with jack transport, it's right on. Really, when I came to think of it, h/w monitoring was all I needed. In fact, the ardour option I chose is "external monitoring"; I don't use the h/w monitoring option that comes with jack or ardour, the hdspmixer app doing all the work (my system is based on the RME Hammerfall DSP).

And now that I have found near-perfect stability, I won't change my setup for a long while, I think, except for bug fixes. About the 1.3 ms latency I reached: yeah, that was a great achievement, since I've been fiddling around with linux-rt, jack and others for years. But I got some instabilities (mainly triggered by the GUI, or so it seemed) and they were too frequent for my taste. But after all, it could be that ardour automatically compensates. An ardour dev could confirm.

If so, it works great because I never noticed any delay.

I'm not an ardour dev, but I have talked through this issue with them, and yes, Ardour does compensate. It compensates for latency both from your live recordings and from plugins (if they report their latency, of course). I wish someone had put this as a big note somewhere; maybe a dev could write a brief and clear explanation and make it a sticky in the forums! So many people spend hours trying to get their latency down, when in reality they will never be performing live and have absolutely no need for it. I have the same setup as you, Thorgal: the RME HDSP 9652 and the hdspmixer to route my monitoring. It's correct to ignore the H/W monitoring options in JACK.

Still don’t know the difference between HW and external monitoring option in ardourI can’t hear any difference personallyThe only time where I have found software monitoring helpful is when I am playing something like a guitar clean into my pc and want to add a VST preamp and cabinet emulator, other than that I always use hardware monitoring. That was it nice job, I couldn’t care less about latency in the way I do things in my studio Solv, I can’t either tell you the exact difference in philosophy between h/w monitoring and external monitoring but I can describe the diff empirically: when I choose option “h/w monitoring” in jack, each time I start or stop jack, my hdspmixer setting is affected. For example, I have a few mono sources that I pan dead-center for monitoring in hdspmixer. This setting is lost when I start or stop jack. It’s like an internal reset of the HDSP setting (default is pan-left or pan-right, depending on the input number). Clicking on the preset buttons of hdspmixer brings back my normal setups. Not using this jack option has no affect whatsover on my hdsp system (I realized this after a few days toying around with my system!

What do you know! Thanks for writing this.

It is helpful. I'm not following the section on using chrt on threads 100%. For example, on my system, with ps -e I see:

  # ps -e
    PID TTY          TIME CMD
      1 ?        00:00:00 init
      2 ?        00:00:00 kthreadd
      3 ?        00:00:00 migration/0
      4 ?        00:00:00 ksoftirqd/0

so I would apply: chrt -f -p 99 4. But I'm not following the section on cat /proc/interrupts. On mine, I get the following. The sound card I care about is the EMU10K1.
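The chrt step can be sketched like this. Note this is a minimal example: the PID 4 from the ps listing is machine-specific, and changing priorities needs root, so the actual calls are left commented out.

```shell
# Show the scheduling policies and priority ranges chrt supports on
# this system; SCHED_FIFO should report a max priority of 99.
chrt -m
# Then, as root, pin the thread found with ps to FIFO priority 99:
# chrt -f -p 99 4      # PID 4 = ksoftirqd/0 in the listing above
# chrt -p 4            # verify: prints the thread's new policy/priority
```

The -f flag selects SCHED_FIFO; -p means operate on an existing PID rather than launching a new command.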

I do not see how to apply the chrt command to this. Thanks!

For what it's worth, to follow up on my previous efforts: this weekend I built a recording setup on a 7-year-old 1.4 GHz machine. Same setup as on my current super hotshot gaming box: Fedora 8 with the Planet CCRMA low-latency kernel/rtprio RPMs. I wasn't even really trying to get the RT stuff working, I just wanted another recording box. Right after booting with the RT kernel, I was running jackd at 1.3 msec easy, no xruns. Couldn't believe it.

All I’m saying is when the low latency kernel works, it just works. I have a feeling my sweet gaming box is not working nicely because the audio is sharing IRQ with like a billion other things (USBs, sata shtuff, etc). Alllll that said, to thorgal’s original point, practical recording scenarios don’t necessarily call for low latency at all. Thorgal’s point is a good one, but I need low latency. I’m running VST instruments under dssi. They actually run very well but without low latency they are of course unuseable.

I've got mine down to 10 ms, which is low enough, but I always want more.

So I read this excellent and helpful document (I wish I'd read it a couple of months ago :-)), but it left me with some questions. I'm running a 2.6.22 kernel which I've built myself using a Mandriva source RPM. As far as I can tell this already has the realtime patch applied: my 'make xconfig' allows me to set the preemption type to 'Preemptible Kernel (Low Latency Desktop)'. I made the changes to limits.conf and I am able to start jack in realtime mode. So I think I have low latency and realtime working and I don't need to apply the patch.
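As a sanity check on the limits.conf step: assuming the usual @audio-group convention (the group name varies by distro, so treat it as an example), the limits that actually reached your login session can be verified from a shell:

```shell
# /etc/security/limits.conf lines typically used for JACK realtime mode
# (shown as comments; "@audio" is a common convention, not universal):
#   @audio  -  rtprio   99
#   @audio  -  memlock  unlimited
# Verify the limits took effect in the current session:
ulimit -r   # max realtime priority; 0 here means jackd -R will be refused
ulimit -l   # max locked memory; "unlimited" is typical for audio work
```

Note that limits.conf changes only apply to sessions started after the edit, so log out and back in before checking.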

My confusion comes from the fact that I have no hrtimer or softirq-timer threads or anything like that when I run ps -e | grep timer. The only recommended kernel option I've not set is 'Enhanced Real-Time Clock Support', because I didn't understand the instructions (it talked about having to create something in /dev). Finally, I'm running a firewire sound 'card'. /proc/interrupts is slightly confusing. Am I right in assuming that the IRQ thread for my card is the one running ohci1394?

             CPU0
    0:     433050   XT-PIC-XT  timer
    1:       3300   XT-PIC-XT  i8042
    2:          0   XT-PIC-XT  cascade
    5:      31525   XT-PIC-XT  uhci_hcd:usb3, VIA8233
    6:          5   XT-PIC-XT  floppy
    9:          0   XT-PIC-XT  acpi
   10:       8983   XT-PIC-XT  ohci1394, uhci_hcd:usb2
   11:     189967   XT-PIC-XT  uhci_hcd:usb1, eth0, nvidia
   12:       1771   XT-PIC-XT  ehci_hcd:usb4
   14:      12194   XT-PIC-XT  ide0
   15:      35629   XT-PIC-XT  ide1
  NMI:          0
  LOC:          0
  ERR:          8
  MIS:          0

OK thanks, that clears up my confusion. It's weird: I have all the options listed except Complete Preemption, and I'm able to start jack in realtime mode (although top never shows it as having a priority higher than 20). Unfortunately I get errors when I try to apply the realtime patch: things like some files don't exist, and others I didn't understand at all. The Mandriva kernel source I'm using already contains some patches, so perhaps there is a conflict between the Mandriva patches and the realtime patch.
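On reading /proc/interrupts: the first column is the IRQ number, then the per-CPU count, then the interrupt controller, and everything after that is the list of devices sharing the line. A small sketch for pulling out the devices sharing a given IRQ, using line 10 from the output quoted earlier as sample data:

```shell
# Devices sharing IRQ 10, from the /proc/interrupts output above.
line=' 10:       8983   XT-PIC-XT  ohci1394, uhci_hcd:usb2'
# Blank out the IRQ number, counter and controller columns; what's
# left is the comma-separated device list on that interrupt line.
echo "$line" | awk '{ $1 = $2 = $3 = ""; sub(/^ +/, ""); print }'
```

On a live system you would replace the sample `line` with `grep '^ *10:' /proc/interrupts`. Anything sharing the sound card's line (here ohci1394 shares with a USB controller) is a candidate for the choppy-audio problems described earlier.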

Mandriva actually supply a kernel with the rt patch already applied, but this doesn't have any of their other patches, or third-party drivers, and doesn't work at all on my machine. I think I'll stick with my 10 ms; it's low enough.

I think I’m getting too deeply into things I don’t understand, and when that happens it usually results in me re-installing my machine.

Apple users are used to transitions, having moved from 68k-based Macs to Power PC processors, and from the classic Mac OS 9 to Mac OS X. Now it's time for the third and most shocking transition of all: the move to Macs with Intel processors. There's one word on the lips of most Mac users at the moment, and that word is Intel. After more rumours than usual over the weekend preceding Steve Jobs' keynote at this year's Apple Worldwide Developers Conference (WWDC), this time concerning the company moving away from the Power PC processor architecture to Intel's x86, the Apple CEO confirmed that the rumours were, indeed, true. What this means is that, starting from 2006, Apple will ship Macintosh computers powered by x86 Intel processors, the same processors used by computers running Windows today. Intel will not be manufacturing any Power PC variant for Apple. Jobs introduced the topic of Macs with Intel processors by saying 'let's talk about transitions'.

He went on to describe the two main transitions of the Mac's 21-year history. Firstly, the move from 68k processors to Power PC-based designs during 1994-96, which the Apple CEO described as having 'set Apple up for the next decade', calling it 'a good move' and, secondly, the more recent 'brain transplant' from Mac OS 9 to X during 2001-03. The change from Power PC to Intel marks the third major transition for the Mac, and while Jobs commented that Apple have 'great products right now' and 'some great Power PC products in the pipeline', he conceded that the company didn't know how to make the products they were envisaging with the current Power PC 'road map'.


Acknowledging that two years ago he stood on the same stage when introducing the Power Mac G5 and promised a 3GHz G5 within a year, Jobs was admirably candid about the fact that Apple hadn't been able to deliver either a 3GHz Power Mac G5 or a Power Book G5. According to Jobs, Intel will be able to help Apple in both of these departments: great performance is assured by utilising Intel's Pentium D dual-core desktop processors or a couple of dual-core Xeon processors for future desktop and server machines, but where Intel have really succeeded in recent years is in the mobile market. The Pentium M has been a huge success for Intel, and in his keynote Jobs mentioned that Intel's projected performance per watt for mid-2006 was over four times higher than that of the Power PC.

Towards the end of the keynote, Jobs invited Intel President and CEO Paul Otellini onto the stage. The latter said of the Apple/Intel arrangement: 'The world's most innovative computer company and the world's most innovative chip company have finally teamed up.' Building a computer with Intel's technology shouldn't prove too difficult for Apple's engineers, but one of the most important factors in the transition to Intel-based Macs will be, as Jobs himself put it, 'making Mac OS X sing on Intel processors'.

And here's where more rumours that have been floating around for a while turn out to be true, as the Apple CEO confirmed that 'Mac OS X has been leading a secret double life for the past five years', and that 'every release of Mac OS X has been compiled for both Power PC and Intel'. This should really be no surprise, since OS X's heritage is Nextstep, the operating system for which Apple effectively acquired Next, and which ran on Intel processors.

Jobs mentioned an Apple internal guideline stating that 'designs must be processor independent and projects must be built for both Power PC and Intel processors', before revealing that the machine he'd previously used in the keynote to demonstrate Dashboard widgets had, in fact, been running Mac OS 10.4 on an Intel processor. It seemed to be working pretty well — although, in many ways, there's no reason it shouldn't. Most modern operating systems, including UNIX, Linux and Windows NT, were either designed or have evolved to run on multiple architectures through modular designs and hardware abstraction.

So getting OS X to 'sing' on Intel processors turns out not to be such a big deal, since Apple always had the 'just in case' scenario in mind. What is a big deal is the way in which third-party developers will deal with the Power PC-to-Intel transition, especially since they've only just got through the move to OS X. This third transition finds Mac users and developers in pretty much the same situation they were in 10 years ago, during the move from 68k to Power PC, which many reading this column will remember. The biggest problem, in my opinion, isn't just getting the developers to port their code to the new platform: it's leaving them in a situation where they have to support two different architectures for the same operating system. In the first transition, Apple created what was termed a 'fat binary' that bound together 68k and Power PC binaries into a single package, so that developers could deliver one application to any Mac user. This time Apple have the same idea, except that the package containing both Power PC and Intel versions will be known as a Universal Binary.

Developers may remember that porting 68k code to Power PC wasn't always straightforward, but (coming back to the present) Apple have released a new 2.1 update to the company's own Xcode developer tools, to make it simple to both port and maintain Mac OS X applications under two architectures. The Universal Binary concept will really be important to you if you've just purchased a new Mac and want to be confident that it will be supported by Apple and third-party developers. Analysts and news reporters initially questioned whether the Mac platform could deal with another major shift; to counter this doubt, Apple are doing their best to convince everyone that it won't be too difficult for applications to be ported. During the keynote, Jobs invited Wolfram Research co-founder Theo Gray on to the stage to describe how it had taken one of Wolfram's engineers only around two hours, a couple of days before the keynote, to make the Power PC-based Mac OS X version of Mathematica into a Universal Binary that could run under Mac OS X on the Intel platform. And since the keynote many other developers have commented on the speed with which they've got their applications running on Apple's Intel-based development systems.



One such developer is Luxology, who managed to port their flagship surface-modelling application, Modo, in just 20 minutes. Apple are, of course, to be praised for making it easy for developers, but it's also worth remembering that, with the majority of applications being cross-platform, the source code should already be highly portable. No matter how easy Apple makes the process of creating Universal Binaries (see main text), it's unlikely that every application you run will be available with Intel-native code by the time Intel-based Macs are shipping, especially if one app you rely on is no longer supported or developed, for example. When Apple moved from 68k processors to the Power PC, the Power PC-based Macs included an emulator that could enable 68k applications to run if a 'fat binary' (again, see main text) wasn't available.

This worked well for general-purpose software. For the Power PC-to-Intel transition, Jobs introduced a technology called 'Rosetta', to bridge the gap between Power PC and Intel-based Macs. It allows Power PC binaries to be translated at 'runtime' and be executed on Intel-based Macs. An application running via Rosetta will never be quite as fast as if it were running natively, since the translation process itself entails some processing overhead. This means that those requiring high performance from music and audio software aren't going to find Rosetta too useful, but Jobs showed Adobe's Photoshop and Microsoft's Word running pretty successfully with Rosetta during his demonstration. According to Apple's Universal Binaries guidelines, available publicly at, Rosetta is capable of translating applications that can run on a G3 Mac with OS X, and the major restrictions are that it will not run OS 8 or 9 applications, or any code with Altivec or any other G4 or G5-specific instructions.

So what does all of this mean for those running audio and music software on the Mac? Actually, it's probably mostly good news. It's no secret that, in terms of performance and battery life, Apple's current line of Power Books lags behind their Intel-based counterparts, so finally we should get Power Books that can once again live up to their name.

And while the current high-end Power Mac offers good performance, as Intel and AMD-based machines move to faster and multiple cores it will be necessary for Apple to keep up with performance, since the hardware will now largely be the same. Intel's CEO and President Paul Otellini: 'We are thrilled to have the world's most innovative personal computer company as a customer'.


In terms of music and audio software companies releasing Universal Binaries, this shouldn't be quite as bad as the process of 'carbonisation' required to port OS 9 applications to OS X. The general application code, such as the user interface and so on, isn't likely to pose a problem, but performance and optimisation are likely to be bigger tasks in some cases, as optimisations for the Power PC — and specifically the Altivec instructions — will require rewriting for Intel's SSE (the Altivec equivalent) instruction set. Fortunately this isn't so difficult, as Apple provide information regarding SSE equivalents for Altivec instructions in the freely available Universal Binary guidelines.

Many of the major music and audio applications are already cross-platform, so it's likely that optimisations and other processor-specific instructions can simply be adjusted from code that already exists. This should definitely help companies like Steinberg, Ableton, Propellerhead and Digidesign. And Logic's developers at Apple have plenty of experience in developing Intel-based code! The bottom line is that, with most software being developed on portable cross-platform frameworks these days, Apple are perhaps right in claiming that this transition will be a relatively painless one.

In the short term, Apple's move to Intel processors will not have a major effect on Mac users. Analysts have speculated that it might slow Mac sales until the newer Intel models appear, but Steve Jobs made it clear that 'this is not going to be a transition that happens overnight'. And that's probably a good thing. If you buy a Mac now, you're probably going to have a few good years of use from it before needing to upgrade to an Intel-based Mac. Jobs said that Intel-based Macs would be shipping a year from now, and they're likely to be Macs that can benefit from the Pentium M chip, such as the Mac Mini, Power Book and iMac. But by the end of 2007 Apple expect the transition to be complete, and the thought of a new Power Mac based on multiple cores using x86 processors is pretty intriguing.

All contents copyright © SOS Publications Group and/or its licensors, 1985-2018. All rights reserved. The contents of this article are subject to worldwide copyright protection and reproduction in whole or part, whether mechanical or electronic, is expressly forbidden without the prior written consent of the Publishers. Great care has been taken to ensure accuracy in the preparation of this article but neither Sound On Sound Limited nor the publishers can be held responsible for its contents. The views expressed are those of the contributors and not necessarily those of the publishers. Web site designed & maintained by PB Associates & SOS.