Adding a toolbar to UIImagePickerController

For an iPad app I was working on recently, I wanted to be able to show a toolbar on the bottom of a UIPopoverController which was being used to present a UIImagePickerController. It turned out this was a non-trivial problem to solve.

On iPad you are meant to display the image picker inside a popover. My first attempt was to ignore this and create a new view controller that had a main view and a toolbar. I added the image picker's view to the main view and presented the whole thing in a popover:

UIImagePickerController *imagePicker = [[UIImagePickerController alloc] init];
ToolbarPickerController *toolbarPicker = [[ToolbarPickerController alloc] init];
[toolbarPicker.mainView addSubview:imagePicker.view];

UIPopoverController *popover = [[UIPopoverController alloc] initWithContentViewController:toolbarPicker];

This worked but was unsatisfactory for a few reasons:

  • The popover controller normally makes UINavigationController content look like it's part of the popover itself (it styles the navigation bar, and any toolbars, to blend perfectly into the popover border). Because my new view controller was not a UINavigationController, but hosted one instead (the UIImagePickerController), the popover did not apply the correct appearance.
  • The image picker thought it was modal and displayed a cancel button. It doesn’t do this normally when displayed in a popover. It was impossible to get rid of the cancel button and it was jarring. I also got a few warnings about a missing style while debugging.

In fact, this had not been my first attempt. Initially I had planned to simply add some toolbar items to the UIImagePickerController but gave up very quickly when this didn’t work. I decided to try again…but a bit harder.

My initial attempt had been to do something like this prior to displaying the popover:

imagePicker.toolbarHidden = NO;
imagePicker.toolbarItems = [NSArray arrayWithObjects:…];

This doesn't work, and it's flawed anyway. Firstly, the image picker forcibly hides the toolbar as soon as it presents anything. Secondly, the image picker is a navigation controller, so it takes its toolbar items from whichever view controller it is currently presenting. I wanted a consistent toolbar that would be visible all the time.

It turns out the solution is to show the toolbar and set the toolbar items on the top view controller every time the image picker presents a view controller. This can be achieved by implementing a couple of UINavigationControllerDelegate methods:

- (void)navigationController:(UINavigationController *)navigationController willShowViewController:(UIViewController *)viewController animated:(BOOL)animated
{
  if (navigationController == self.imagePicker)
  {
    [self.imagePicker setToolbarHidden:NO animated:NO];
    [self.imagePicker.topViewController setToolbarItems:self.toolbarItems animated:NO];
  }
}

- (void)navigationController:(UINavigationController *)navigationController didShowViewController:(UIViewController *)viewController animated:(BOOL)animated
{
  if (navigationController == self.imagePicker)
  {
    [self.imagePicker setToolbarHidden:NO animated:NO];
    [self.imagePicker.topViewController setToolbarItems:self.toolbarItems animated:NO];
  }
}

You have to put the code in both methods. If you don’t, some transitions will hide the toolbar and it might not appear when initially displayed.
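
For reference, the rest of the wiring looks something like this. It's a minimal sketch: self is assumed to keep strong references to the picker and popover, to build its own toolbarItems array, and photoButton is a hypothetical UIBarButtonItem that triggers the popover. Note that UIImagePickerController's delegate property covers the UINavigationControllerDelegate callbacks above as well.

// Sketch only: self.imagePicker, self.popover, self.toolbarItems and
// self.photoButton are assumed properties of the presenting controller.
UIImagePickerController *imagePicker = [[UIImagePickerController alloc] init];
imagePicker.delegate = self; // delivers the willShow/didShow callbacks above

self.imagePicker = imagePicker;
self.popover = [[UIPopoverController alloc] initWithContentViewController:imagePicker];
[self.popover presentPopoverFromBarButtonItem:self.photoButton
                     permittedArrowDirections:UIPopoverArrowDirectionAny
                                     animated:YES];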

PJ’s Guitar Tuner

My father-in-law owns a guitar shop. He also plays in several bands. He likes 'the Quo'. He could be 'the Quo'. He spends a lot of his time helping to look after my lovely but sometimes difficult son. He never thinks he is difficult. Only lovely. He never asks for anything and never wants anything. He is impossible to buy birthday presents for.

This is a kind of backstory.

While doing some client work, I became wrapped up in some audio processing stuff, analysing frequencies and microphone input. I was telling ‘grandad’ all about this when it occurred to me I could probably get it to work out the pitch as well.

A pet project was born.

It turned out I was wrong. Well, kinda right and kinda wrong. I did eventually get the pitch stuff nailed, but not at all how I thought it would work initially. I also enlisted the help of a rather good designer friend-of-a-friend who is known for his work here. He did me a nice icon and one of the themes.

So, for fame, fortune and my father-in-law I bring you…

PJ’s Guitar Tuner!

Coming to an app store near you soon. Available on iPhone and iPod touch.

I’m releasing this app under our new “Powered by Dootrix” brand/thing. Dootrix do serious stuff for a growing number of pretty serious clients…but we are also known to do the odd side project in our ‘spare time’. This is mine. For now.

Converting 8.24 bit samples in CoreAudio on iOS

When working with CoreAudio on iOS, many of the sample applications use the iPhone's canonical audio format, which is 32-bit 8.24 fixed-point audio. This is because it is the hardware's 'native' format.

You end up with a buffer of fixed point data, which is a bit of a pain to deal with.

Other libraries and source code tend to work with floating point samples between -1.0 and +1.0, or signed 16-bit integer samples… so this fixed-point stuff is a bit of a pain. You could force CoreAudio to give you 16-bit integer samples to start with (which means it does the conversion for you before giving you the audio buffer), or you could do the conversion yourself, as and when you need to. This can be a more efficient way of doing things, depending on your needs.
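
For the record, the first option looks roughly like the following sketch of a 16-bit stream description; the sample rate and mono channel count here are my own assumptions, not requirements:

// Ask CoreAudio for signed 16-bit integer LPCM rather than 8.24 fixed point.
AudioStreamBasicDescription fmt = {0};
fmt.mSampleRate       = 44100.0; // assumed rate
fmt.mFormatID         = kAudioFormatLinearPCM;
fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
fmt.mBitsPerChannel   = 16;
fmt.mChannelsPerFrame = 1;       // assumed mono
fmt.mFramesPerPacket  = 1;
fmt.mBytesPerFrame    = fmt.mChannelsPerFrame * sizeof(SInt16);
fmt.mBytesPerPacket   = fmt.mBytesPerFrame * fmt.mFramesPerPacket;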

In this post I want to show you how you can convert the native 8.24 fixed point sample data into 16 bit integer and/or floating point sample data…and give you an explanation of how it works. But first, I need to de-mystify some stuff to do with bits and bytes.

Bit Order != Byte Order

In Objective-C you can think of the bits of a binary number going from left to right. Just as in base 10, the most significant digit is the left-most digit:

128| 64| 32| 16| 8 | 4 | 2 | 1
-------------------------------
 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0
-------------------------------

The above binary number may represent the integer 66. We can apply bit shift operations to binary numbers, such that if I shifted all the bits right (>>) by 1 place I would have:

128| 64| 32| 16| 8 | 4 | 2 | 1
-------------------------------
 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1
-------------------------------

This might represent an integer value of 33. The left-most bit has been newly introduced and padded with 0.
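
In code, using an unsigned byte, that looks like this:

uint8_t value   = 0x42;       // 01000010 == 66
uint8_t shifted = value >> 1; // 00100001 == 33 (left-most bit padded with 0)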

So. When you are thinking about bits, and bit shifting operations, think left to right in terms of significance. Got that? Right, now let's move on to bytes.

The above examples dealt with a single byte (8 bits). When a multi-byte number is represented in a byte array, it can be either little endian or big endian. On Intel, and in terms of CoreAudio, little endian is used. This means the BYTE with the most significance has the highest memory address and the BYTE with the least significance has the lowest memory address (little-end-first = little-endian).
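
You can see this for yourself by examining a multi-byte value through a byte pointer:

SInt32 value = 0x11223344;
UInt8 *bytes = (UInt8 *)&value;
// On a little-endian machine (Intel, and the ARM chips in iOS devices):
//   bytes[0] == 0x44  (least significant byte, lowest address)
//   bytes[1] == 0x33
//   bytes[2] == 0x22
//   bytes[3] == 0x11  (most significant byte, highest address)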

See this post for why this is important when dealing with raw sample data in CoreAudio, and this post on codeproject for a more in-depth explanation. The most important thing to realise is that bit order and byte order significance are different beasts. Don't get confused.

For the rest of this post, we are dealing with the representation of the binary digits from the perspective of the language, not the architecture, i.e. think in terms of bit order and not byte order.

Converting 8.24 bit samples to 16 bit integer samples

What does this mean? It means we are going to:

  • Preserve the sign of the sample data (+/- bit)
  • Throw away 8 bits of the 24-bit sample. We assume these bits contain extra precision that we just don't need or are not interested in.
  • Be left with a signed 16 bit sample. A signed 16 bit integer can range from -32,768 to 32,767. This will be the resulting range of our sample.

Remember, we are thinking in terms of bit order; the most significant bit (or the ‘high order’ bit) is the left-most bit. Here is an example of a 32 bit (4 byte), 8.24 fixed point sample:

  8 bits  |         24 bit sample
----------------------------------------------
 11111111 | 01101010 | 00011101 | 11001011
----------------------------------------------

In 8.24 fixed point samples, the first 8 bits represent the sign. They are either all 0 or all 1. The next 24 bits represent the sample data. We want to preserve the sign, but chuck away 8 bits of the sample data to give us a signed 16 bit integer sample.

The trick is to shift the bits 9 places to the right. It's a crafty move. This is what happens to our 32 bits of data if we shift them right 9 places: 9 bits fall off the end, the sign bits get shunted along, and the new bits get padded, leaving us with the following (shown with zero padding; strictly speaking, a signed right shift in C pads with copies of the sign bit, but those top bits are about to be thrown away, so it makes no difference):

 new bits | sign bits |                          gone
------------------------------------------------
 00000000 | 01111111 | 10110101 | 00001110    111001011
------------------------------------------------
                      |   first 16 bits    |

We still have 32 bits of data, with the bits shunted along. We are only interested in the first 16 bits of data (the right-most bits), which now contain the most significant bits of the 24-bit sample data. A brilliant side effect is that the first (left-most) bit of those 16 bits represents the sign!

By casting the resulting 32 bits to a 16-bit signed integer we take just those 16 bits, which are the bits we want, and we have a signed 16-bit sample that ranges from -32,768 to 32,767. If we want this as a floating point value between -1.0 and 1.0 we can now simply divide by 32,768. Voilà.

The code is thus:

SInt16 sampleInt16 = (SInt16)(originalSample >> 9); // shift right 9, keep the low 16 bits
float sampleFloat = sampleInt16 / 32768.0;          // scale into the range [-1.0, 1.0)
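
In practice you will want to do this across a whole buffer. Here is a minimal sketch, assuming a mono, interleaved stream of 8.24 samples arriving in a render callback, and a pre-allocated floatBuffer (a hypothetical array of the same length):

SInt32 *samples = (SInt32 *)ioData->mBuffers[0].mData;
UInt32 count = ioData->mBuffers[0].mDataByteSize / sizeof(SInt32);

for (UInt32 i = 0; i < count; i++)
{
    SInt16 sampleInt16 = (SInt16)(samples[i] >> 9);
    floatBuffer[i] = sampleInt16 / 32768.0f; // each sample now lies in [-1.0, 1.0)
}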

Simple when you know how. And why!


In Praise of ARC

It’s not all the fault of the garbage collector…but I’m growing to love ARC

When I started developing on Windows in the 1990s, software was fast even though computers were pretty slow. The word 'Native' was used to describe an indigenous person and Java was still just a type of coffee.

Somehow this all changed. You could call it progress I guess.

Managed languages, virtual machines, JIT and byte code took over. This seemed to go hand in hand with garbage collection. "Hooray!" we all shouted. No more memory leaks.

Wrong. They just became harder to find.

There were lots of other advantages though… weren't there? Well, maybe. .Net allowed us to write in a slew of different languages that could talk to each other without having to use shared C libraries. A managed environment protected all the applications from one another and looked after resources more intelligently. So it was all still good. Right?

Frustration. Managed

I'm writing this post to vent a bit of frustration with the promises of byte code VMs and garbage collection; I've fallen out of love with 'managed'. iOS and Objective-C have shown me another way.

Android's latest version, Jelly Bean, has put the emphasis on delivering a 'buttery smooth' user experience. You know, the kind of experience the iPhone has enjoyed for 5 years! Well, now the Java-based Android (running on the Dalvik VM) has achieved the same thing. 5 years on. Thanks in no small part to huge leaps in its graphics acceleration hardware and a quad-core processor!

On Windows, .Net and WPF are slow, hardware-hungry beasts. If you want speed, you have to drop down to the native DirectX APIs… and until recently you could not combine these different graphics technologies very easily; Windows suffers from severe 'air-space' issues.

When I started developing for iOS, I was pleasantly surprised by several things:

  • All the different APIs in the graphics stack played nice together.
  • Apps were lightning fast and beautifully smooth with low memory overhead.
  • I found the lack of garbage collection liberating.

[Garbage release]

I did not, and do not, miss the managed environment. Before ARC we had to do reference counting in Objective-C on iOS. I was used to this from my days with COM on Windows but reference counting on iOS made more sense somehow. The rules seemed clearer.

And then the compiler got clever. The compiler. Not a VM.

With the introduction of ARC we don't have to do reference counting. The compiler analyses the code and does it all for us. In the main, it does a fantastic job. The compiler and tools for developing on iOS manage to produce native code, make it easy to consume C and C++, make reference counting almost invisible, produce sandboxed apps that can't crash other apps, and let me use pointers where I see fit without having to declare my code "unsafe" (most of the time, anyway).

I still love the Microsoft C# language and the BCL. But as for the whole managed thing? I am happy to leave it behind.


Understanding AurioTouch

I have been playing around with CoreAudio on iOS of late. The trouble with media APIs is that they are necessarily complex, and CoreAudio is no exception.

While trying to figure out how to read data coming from the microphone and visually render the samples to the screen, I came across the aurioTouch example provided by Apple. It looked great… until I tried to work out what the code was doing!

There are so many aspects of the code that I struggled to make sense of, from arbitrary scaling factors to the odd bit of inline assembly, but here I will mention just one. In hindsight, it doesn’t seem so obscure now. But hindsight is a wonderful thing.

After having obtained a buffer full of PCM audio data, the following code is used to fill an array of values that is used to draw the data:

SInt8 *data_ptr = (SInt8 *)(ioData->mBuffers[0].mData);
for (int i = 0; i < numFrames; i++)
{
    if ((i + drawBufferIdx) >= drawBufferLen)
    {
        cycleOscilloscopeLines();
        drawBufferIdx = -i;
    }

    drawBuffers[0][i + drawBufferIdx] = data_ptr[2]; // take a single byte from the sample
    data_ptr += 4;                                   // jump to the start of the next 32-bit sample
}

ioData->mBuffers[0].mData contains an array of SInt32 values. These are PCM samples in 8.24 fixed-point format. This means that, nominally, 8 bits of the 32 are used to contain the whole number part, and the remaining 24 bits are used to contain the fractional part.

I could not understand why the code was iterating through it using an SInt8 pointer and why, when the actual value was extracted, it was using data_ptr[2], i.e. using the third byte of the 32-bit (4-byte) 8.24 fixed-point sample and chucking away the rest. I was so confused that I turned to stackoverflow for help. The answer given is spot on… but perhaps not all that clear if you are an idiot like me.

After printing out the binary contents of each sample I finally understood.

The code is using an SInt8 pointer because, at the end of the day, it is only interested in a single byte of data in each sample. Once this byte of data has been extracted, data_ptr is advanced by 4 bytes to move it to the beginning of the next complete sample (32-bit, 8.24 fixed-point format).

The reason it extracts data_ptr[2] becomes apparent when you look at the binary. What I was failing to appreciate (a schoolboy error on my part) was that the samples are in little-endian format. This is what a typical sample might look like in memory:

data_ptr[0]     data_ptr[1]     data_ptr[2]    data_ptr[3]
----------------------------------------------------------
 01001100    |   01000000    |   11001111    |  11111111
----------------------------------------------------------

The data is little-endian, meaning the least significant byte has the lowest memory address and, conversely, the most significant byte has the highest memory address. In CoreAudio 8.24 fixed-point LPCM data, the first (most significant) 8 bits are used to indicate the sign. They are either all set to zero or all set to one. The sample code ignores this and looks at the most significant byte of the remaining 24 bits… which is data_ptr[2].

It is safe to throw the rest away as it is of little consequence to the display of the signal; throwing the rest of the data away still gives you a ‘good enough’ representation of the sample.

Later on in the sample code (not shown above), this value is divided by 128 to give a value between -1 and 1. It is divided by 128 because an SInt8 can hold a value ranging from -128 to +127.
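
Condensed down, the extraction and the later scaling amount to something like this (my own paraphrase, not the sample's verbatim code):

SInt8 msb = data_ptr[2];      // most significant byte of the 24-bit sample data
Float32 value = msb / 128.0f; // scaled into the range [-1.0, 1.0)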

Like I said, this is just one of many confusing bits of code in the sample app. CoreAudio is not for the faint-hearted. If you are a novice, like me, then perhaps the aurioTouch sample is not the best place to start!


Guided Access in iOS 6

Improving fine motor control and having a blast

My son is autistic. Trying to explain what this means is very difficult. His autism means that, at the age of four, among many many other things, he has no language, behaves very unpredictably and struggles with his fine motor skills. He is also the world's biggest fan of Sound Blaster on the iPad.

A typical session finds him constantly distracted by three things:

1. The home button. It's tactile and tempting and he just can't resist its constant allure.

2. Inadvertent use of multi-touch gestures or "fat fingers". This always ends up throwing him out of the app.

3. On the iPhone, the ad banner gets mistakenly hit. Again, taking him away from the action.

It is remarkable that a boy who could not even understand his own name a few months ago is still able to navigate back to the app from any of these eventualities. This is because he knows that the home button will get him back to a known location, from which he can navigate back to the app.

One button.

Genius.

My wife is a teacher. She teaches small children of about 6 years of age and is just starting to use the iPad in the classroom. Trying to keep kids engaged on one task, and ensuring they don't change all your settings when your back is turned, is an issue all teachers and parents have to contend with.

With Apple's latest update to iOS, these are problems that can now be addressed. iOS 6, launching in just a couple of months, has a new feature called Guided Access. This lets you disable physical buttons, such as the home button, lock out certain areas of the screen (disabling touch input there), and even prohibit activation of the device's motion sensors.

Such features ensure that teachers, parents and supervisors have a way to keep a kid's focus inside an app… or even on just a specific part of an app. For my 8-bit fanboy son, this means that we can prevent him being thrown out of his favorite app in all three of the situations outlined above.

No doubt this will also prove to be a fantastic feature for restaurants (menus), kiosks (tourist information), display-only setups (showrooms) and various other task-centric scenarios.

For me, this means the iPad will continue to be a useful learning device for my son, and hopefully my home button might last that little bit longer before giving up the ghost.

DTRichTextEditor Project Setup

I am using a component for rich text editing in an iPad app we did for one of our clients. We started using it while it was still in early beta and just pulled the code straight into the project.

Now it's a bit more mature, I figured it was time to update to the latest and greatest. Having taken a look at the cocoanetics video to figure out how it should be included as a project reference, instead of just a dump of the source code, I thought I'd write down what I did. It may save someone a bit of time if you don't want to spend an hour watching the video.

  1. Grab the latest source from the DTRichTextEditor repository. I then exported a copy to my preferred location in my own source tree. This gives me a clean copy of the code without all the .svn folders lurking around. In my case this was something like ../ThirdParty/DTRichTextEditor
  2. Open up your project in Xcode and create a Dependencies folder. Right click it and choose Add Files…
  3. Browse to DTRichTextEditor.xcodeproj and click add. Make sure the ‘copy item into destination folder’ option is not checked.
  4. Do the same again but this time browse to DTLoupe.xcodeproj and add this. It can be found in DTRichTextEditor/Core/Externals/DTLoupeView
  5. Click on your build target and go to the build phases tab.
  6. Add the DTRichTextEditor static library and DTLoupe resource bundle to the target dependencies section.
  7. In the link binary with libraries section, add libDTRichTextEditor.a, libxml2.dylib and CoreText.framework. There may be some others that you need here as well. This project has been kicking around a while so I can't remember what I had to add initially!
  8. Expand DTLoupe.xcodeproj in the Dependencies folder (or wherever you put it) and drill down into the products folder. You should see a DTLoupe.bundle item. Drag and drop that into the copy bundle resources section.
  9. Nearly there! Now open the build settings tab and find the search paths section. Double click the header search paths item and add in the path to the source code for DTRichTextEditor. In my case this was ../../../ThirdParty/DTRichTextEditor/Core. Ensure you check the box to the left of the path to make the search recursive.
  10. Write your code, build and run. (A quick smoke test is sketched below.)
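
For step 10, something like this is enough to prove the project reference and header search paths are wired up. This is a minimal sketch, assuming the component's main class is DTRichTextEditorView (check the headers if the name differs in your version):

#import "DTRichTextEditorView.h"

// In a view controller's viewDidLoad, for example:
DTRichTextEditorView *editor = [[DTRichTextEditorView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:editor];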

Disclaimer: I don't know if this is the correct way to do it. It may be that you need to do things slightly differently if you are using some features that I'm not. And, as I was adding it to an existing project, there may well have been some additional setup that I have not covered here.

In any case, this guide will prove useful to me when I forget all of this in a few weeks time. Maybe it will help someone else as well?


Origami Slippers

I was at a meeting with the guys from Simpl the other day which ended up with a curry and a beer. A pretty good meeting if you ask me!

Having been cajoled into folding a flapping bird from an After Eight wrapper, I made the mistake of bragging that I could fold them a pair of slippers from a newspaper if they had one… at which point one of the guys produced said newspaper.

Well, I had to back down and admit I had forgotten how to do it. But I did promise to make him a pair for ‘next time’.

I dug out the instructions over the weekend. The results are below.

[Photos of the finished slippers]

HTML Canvas Physics with Box2DWeb

I've been playing around with the canvas element recently, as I'm interested in the possibility of using some advanced HTML5 stuff in my current pet project.


Having also had some previous experience with physics engines and knowing how awesome the results can look, I started to search around for a webby equivalent of either Box2D or Bullet.

I eventually came across Box2DWeb. While not super up to date, or even particularly optimised, it seems to do the job pretty well. Having tried it out on the iPad, the speed is none too shabby.

If you are using a modern web browser you should be able to see the results for yourself! Hit the refresh button to start the simulation again.

The Floppy Disk Must Die

My wife works as a primary school teacher and, as such, has to make sure kids are learning how to use computers and software, and are able to grasp the basics of the internet etc.

She pointed out something the other day which should have been obvious, but I honestly don’t think I had considered it before.

When explaining how to save their work, she directed the class to use the picture of the floppy disk (that little icon in Word that we all know and love). The response? "What's a floppy disk? You mean the thing that looks like an old Nintendo?"

It struck me as odd that software, of all things, should be so stuck in the past. The floppy disk is meaningless to the current generation.

Icons should convey meaning; the save button, used in many of today's applications, conveys next to none! Surely, now more than ever, it's time for the floppy disk to die!