
This is my web log, where I record all sorts of random thoughts for posterity. I've covered ideas on a wide range of topics, so I hope you find something interesting, and maybe even useful!


Apple, AD 8

2019-11-25. Computers. Rating 1. ID 2015.

As I write this post it has been 8 years since the death of Apple co-founder, Steve Jobs. You could say that in Apple's history, it is the year AD 8. There has been a lot of speculation over that time about whether Apple can survive the loss of its visionary leader, but most of the negativity has been proved wrong, and Apple continue to succeed in all the major areas they are involved in.

At one point Apple was the biggest company in the world by market capitalisation (and the first to be worth a trillion dollars), ahead of competitors such as Alphabet (Google) and Microsoft. Although that number varies over time, thanks to the vagaries of the stock market, Apple is in no danger of failing financially. But maybe it never was, because if you search for "Apple death knell counter" you will find a series of ridiculous predictions of the company's demise, along with suggestions for how to avoid it. Apple did the opposite of practically all of them, and became one of the most successful companies of all time!

So what is the current status of Apple products? Well, the iPhone is the premium smartphone in the world, and while some companies sell more units, that is only because they have extremely cheap offerings; the iPad is the clear leader in tablets; the Apple Watch is the world's biggest selling watch of any kind, and by far the best selling smartwatch; and the Mac, while still a long way behind PCs in total sales, is still significant in desktop and laptop sales. Plus Apple's main future growth area, services, is doing well. For example, Apple Music is the second most popular music service in the world, but only because Spotify offers a free alternative.

So where others have taken the lead in sales, it is usually because of a cheap option. I wonder if it would be a good idea for Apple to offer a cheap model in each of those product categories too. They might then take the number one spot with a low-cost, mass-market version, while their more expensive premium products continue to bring in a higher profit margin.

You might say they are already doing that, because there is a "cheap" option in most categories. Of course, what Apple call "cheap" many other companies would call mid-priced or even expensive, so I guess it is all relative. Also, Apple do have a reputation for producing high-quality, premium products, and a truly cheap option might damage that reputation. Note that I think that reputation is mostly well deserved, although there is also room for improvement.

It might be a bit like Pagani, Ferrari, or Lamborghini offering a Toyota Corolla look-alike. Not only would that drag their reputation down, but they would probably also create something nowhere near as good as the Toyota they were trying to imitate. Of course, the leadership of those companies doesn't even intend to take on Toyota in sales, and rightly so.

Recently Jony Ive left his CDO (chief design officer) role at Apple. There is no doubt that his designs were beautiful aesthetically, but there were questions around the usability of his creations, and it is fair to say that not enough emphasis was put on practicality with him in charge. Since his departure Apple products have become a bit more practical, with the latest phones and laptops being slightly thicker to accommodate a bigger battery, for example. It's important that the visual design of Apple products is maintained as a major objective, but I welcome the small compromises to make the ergonomics and overall usability better. Design is more than just what something looks like.

As I said above, when Tim Cook took over from Steve Jobs as CEO, there were predictions of doom for the company. That wasn't totally unreasonable, because on the previous occasion someone else was CEO - when Jobs was forced out of the company in the mid 80s - things did go badly. But all of that doom was averted when Jobs returned to the company in the late 90s and was mainly responsible for the creation of the iMac, iPod, iTunes, iPhone, iPad, and Apple Store - all incredibly successful and innovative.

I think Cook is very aware of what happened on that previous occasion and he seems to be very mindful that the Jobs style of leadership shouldn't be compromised too much. I'm no huge fan of Cook - he's a bit too politically correct and he just seems a bit disingenuous to me - but he is doing a fairly good job so far, maybe because he is not trying to change the overall culture at Apple too much.

The world of technology is a difficult one to predict, because of the constant change, unexpected successes and failures, and new products which seem to appear from nowhere. So I think making long term predictions about where the future will lead is unwise - therefore I won't do it. But I will say that Apple do need to be careful where they go in future, but not too careful, because audacious new moves are what has given them success so far.

So, for Apple, AD 8 is looking pretty good, but who knows what the future will bring.


Simple Advice for Apple

2019-10-31. Computers. Rating 2. ID 2009.

I've got to be honest with you: sometimes I'm not proud to say that I am a computer consultant and programmer. Why? Because so many of the products and services which people use and are created by IT geeks (my informal generic term for several different specialties in information technology) aren't that great, and could easily be made better with the simple application of a bit of common sense.

To be fair to my colleagues, there are often good reasons - some of which might not be obvious - for some piece of computer technology not working as well as it should, but there are also a lot of what I suspect are really bad reasons for things not being optimal (such as policies enforced in large organisations, taking shortcuts to reduce costs, and being more focussed on making money than making good products).

Generally my blog posts are triggered by some experience in the real world which gets me thinking about a particular subject, and when I start down that path I often feel the need to share my thoughts in a post. So what has started this one?

Well, several things actually, but most immediately and annoyingly, Apple's failure to adequately warn users about the consequences of their actions, specifically in this case involving installing operating system updates.

Recently I have had to rescue several users who have upgraded to Apple's latest operating system, macOS 10.15 Catalina. A great thing about Apple is that all users get all the new systems for free indefinitely. In contrast, Microsoft charges users for new major releases of their OS. Apple can avoid this because only people who have bought Apple hardware can use Apple operating systems, so users have already bought expensive hardware which could be thought of as subsidising the development of new systems. In contrast, most computers running Windows aren't made by Microsoft.

Getting new systems for free as they are released is good in many ways, because computers are a relatively new consumer product and their possible uses in the real world change quite quickly. So new system updates can give the user access to these new features, and getting those extra capabilities for free is good. The operating system producers - and there are really only a few systems to worry about: macOS, iOS (and variants), Windows, Linux, and Android - are constantly improving their technology, often by eliminating poorly designed features and bugs from the past.

Again, this is good, but it also creates significant possibilities for incompatibilities between older applications a user might have and the latest system they might have been tempted to install. Additionally, OS producers don't want to support a large number of older versions, so they try really hard to get users to update as soon as they can, and this is often sooner than they should!

Specifically in Catalina, Apple have eliminated support for 32 bit applications. You don't need to worry too much about what that means. I'll just say that Mac hardware has been designed around a 64 bit architecture for many years now, and there is a significant overhead in supporting older programs, which might have originated back in the day before 64 bit hardware was common.
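As an aside, it is quite easy to see the word size of a program environment for yourself. Here is a quick Python sketch (assuming a standard CPython build) which reports whether the interpreter itself is a 32 or 64 bit program:

```python
import struct

# struct.calcsize("P") is the size of a pointer in bytes:
# 4 on a 32-bit build, 8 on a 64-bit build.
word_bits = struct.calcsize("P") * 8
print(f"This interpreter is {word_bits}-bit")
```

On any Mac from the last decade or so this will report 64-bit, which is exactly why Apple felt able to drop the 32 bit support.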

One way to think of an operating system is as a series of functions which programs can use to get things done. So instead of every software developer having to write code to read and write to disks formatted with Apple's file system, for example, Apple can provide libraries of functions to do those operations. This makes writing programs easier and it forces programs to interact with the hardware "correctly". In the example above, Apple could change the file system and as long as they updated the libraries (which they would) the program writers (and users) don't have to care. At least, that's the theory in an ideal world!
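To make that idea concrete, here is a tiny Python sketch of the same principle (the file name is made up for the example). The program never touches the disk format directly; it just calls the library, and the OS does the rest:

```python
import os
import tempfile

# The OS presents the file system as a handful of library calls; this
# same code works whether the disk underneath is APFS, HFS+, ext4, or NTFS.
path = os.path.join(tempfile.gettempdir(), "ojb_demo.txt")
with open(path, "w") as f:   # library call -> system call -> driver
    f.write("hello")
with open(path) as f:
    contents = f.read()      # the program neither knows nor cares how
os.remove(path)              # the bytes were actually laid out on disk
```

If Apple changed the file system tomorrow, this program would keep working unmodified - which is the whole point of the abstraction.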

There is always conflict between allowing old programs to run, and keeping the system clean, simple, and reliable. The more old stuff the OS supports, the more possibilities there are for bugs, slowness, and security issues. Apple have always been "enthusiastic" about dropping old stuff early and forcing updates, where Microsoft have generally been better at continuing to support old stuff - at the expense of reliability, speed, and ease of use, unfortunately.

Both approaches - trying to move ahead and leave old stuff behind, and trying to maintain the usability of old stuff - have good and bad points, and I'm not judging here: I'm just saying that those two approaches exist.

Apple have a feedback page where anyone can leave bug reports, suggestions on improving products, and ideas for new features. I have used it several times, and left a comment once which applies to the issue I have been describing. I said: when the user is about to install a major update, and not just a minor (but possibly important) security update or bug fix, why not give them an extra warning by showing a message like "You are about to install a major system update. This might cause compatibility issues with applications you already use. If you are unsure about this please talk to your IT support person about it before proceeding".
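Just to show how easy the check would be, here is a hypothetical sketch in Python. The version-parsing rule is my assumption, based on macOS 10.x numbering, where the "major" jump is in the second component (10.14 to 10.15):

```python
def is_major_update(old: str, new: str) -> bool:
    """Hypothetical check: compare the first two version components,
    since macOS 10.x 'major' releases differ in the second field."""
    parse = lambda v: tuple(int(p) for p in v.split(".")[:2])
    return parse(new) > parse(old)

WARNING = ("You are about to install a major system update. This might "
           "cause compatibility issues with applications you already use. "
           "If you are unsure about this please talk to your IT support "
           "person before proceeding.")

def update_prompt(old: str, new: str) -> str:
    # Show the extra warning only for major updates, not security patches.
    return WARNING if is_major_update(old, new) else "Install update?"
```

A few lines of code, which is why the absence of anything like it is so frustrating.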

I even said, facetiously, that Apple could use my brilliant idea for free and I looked forward to seeing it in future system updates! Well, that was a few years back now, and two or three major updates later, I still don't see it being done. That's a bit unfortunate, because while I understand that Apple want to encourage as many people as they can to move to the newest possible system, I would prefer those people didn't move to a system that isn't possible, at least for them!

So that was my simple advice for Apple, which seems to have been ignored, and the lack of that easy to implement warning has caused a lot of frustration for some users. But I'm not really complaining, because fixing all the ensuing issues keeps people like me in work!


Et Tu Porsche?

2019-08-05. Computers. Rating 2. ID 1994.

I recently updated my web server, and despite my careful planning, there was a "glitch" which meant my site was off-line for about 20 minutes. This may not sound like a big deal, because I don't really host anything of great importance, but it is a matter of professional pride to me to try to keep my down-time to a minimum.

The problem turned out to be the different way my fibre router interprets port forwarding, which meant my old notes, based on a previous router, didn't work. So I spent about 10 minutes changing settings a second time, rebooting, etc, unnecessarily. A second problem didn't help either: Apple's web server software defaulted to re-routing all incoming traffic to a secure port rather than the default HTTP port.

So what should have been 30 seconds down time, which is the time between disconnecting the old server and connecting the new server, turned out to be closer to 30 minutes, which was pretty annoying. But I do feel a lot better about my rare errors when I see far worse problems in commercial software and on (so-called) professionally run corporate web sites.

In fact, to make myself feel better after I make a mistake, and just for general entertainment at other times, I keep a folder of screenshots of errors I come across in my daily computer use. And I thought it might be fun to share some in this post, so here are some examples...

First, there are the generic web site failure errors, like this one from New Zealand Post: "NZ Post is currently experiencing technical difficulties." Well, that's helpful. Any idea about what sort of error that might be, and what I should do about it? Also, why do these "technical difficulties" happen so often?

Apple aren't immune from problems, and their error and progress messages can often be entertaining. For example, during a system update the progress dialog told me I had "NaN remaining". If you don't know, "NaN" is a special code meaning "not a number" and is often created by a divide by zero or a similar operation. But errors concerning zero are tricky: this one could have easily got an error message too: "Currently displaying page 0 of 0". I guess I was NaN percent of the way through the document!
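For the curious, here is how easily a NaN can arise, sketched in Python. The formatted string at the end is exactly the sort of thing that ends up in a progress dialog when nobody checks for it:

```python
import math

nan = float("inf") - float("inf")   # inf - inf is indeterminate -> NaN
assert math.isnan(nan)              # NaN isn't even equal to itself
remaining = f"{nan} remaining"      # the string my update dialog showed
```

A progress calculation that divides elapsed work by a total of zero produces the same thing, so my guess is something like that happened during the update.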

Even a rock solid operating system like Unix can give some interesting messages. I was recently testing a network using the "ping" command, which sends a very simple message to another device and waits for a response to say "I'm here". The time taken to respond can reveal information about the performance of the network and computers involved. So you can imagine how impressed I was when I got a ping time of -372 ms. The remote computer responded about a third of a second before I even sent the request. Now, that's a fast network!
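My guess - and it is only a guess - is that a clock adjustment happened mid-measurement: wall-clock time can be stepped backwards by NTP, which is why interval timing should use a monotonic clock instead. A quick Python illustration:

```python
import time

# time.time() follows the wall clock, which NTP can step backwards in
# the middle of a measurement; time.monotonic() never runs backwards,
# so it is the safe choice for intervals like a ping round trip.
start = time.monotonic()
time.sleep(0.01)                    # stand-in for waiting on a reply
elapsed = time.monotonic() - start  # can never be negative
```

Measure the same interval with the wall clock while the system time is being corrected, and a -372 ms "ping" is entirely possible.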

My ISP, Orcon, has had a few interesting glitches in the past. For example, on one occasion their network status page was blank. The corporate colour scheme was there, so there was some network activity happening, but there was no text. How should I interpret this? Are there problems or not? But this was a few days after they sent me a bill requesting payment for $0, so it shouldn't have been a surprise. I have to be fair and say other ISPs are just as bad. I got this from Spark, for example: "Oops this embarrassing. Our system failed to complete your request". At least they were embarrassed!

Then there are errors which are even more confusing. I rarely use Microsoft software or services, but my clients do, so I occasionally let my standards slip, and on one occasion I tried to upload a JPEG to set a photo for my profile there. I got an error message to the effect of "A JPEG file is not a format Microsoft supports, please try a different format, such as JPEG, instead". I know the file was OK because I had used it on many other sites, so I'm still mystified about what was happening there.

Having health data on the internet is great, because it avoids confusing and time consuming calls to health professionals, but it's unfortunate that many of these services are so poor. A while back I received "Manage My Health, service call failed: 0 error." So zero error is bad, or is that no error? And what should I do? But other sites try to give you an error message where the programmer has messed up the code so that you are left more confused than you were before their attempts. I got the following message on LinkedIn: {"debugmessage":""}. Well, that's a nice try, but not helpful to the average person.

Sometimes the error message sort of makes sense but the options the user has are not helpful. For example, iOS told me that "there are no upgrades available", then gave me the choice of two buttons: "Yes" or "No". I chose "No" but there were still no upgrades available! Another iOS app asked me if I would like to "Mark all articles as read?", with the options "Yes" and "Yes". Nice to have the choice!

Then there are the generic errors which really mean nothing. Here are a few: Microsoft telling me "there's a temporary problem with this service, please try again later." New Zealand's IRD generates many errors, such as: "page not available", "system error", and "unable to process your request at this time". Even Internet New Zealand's site creates random errors, such as "PDO error". And I've lost count of the number of errors the NZ Herald site generates, like "VIDEO CLOUD ERR NOT...", "502 bad gateway", "Baaah! page not found", and "Error from backend server 503". Then there's this very frequent message from Facebook: "sorry something went wrong" (so specific). And on various other sites: "Oops something went wrong", "The website has encountered an error which cannot be recovered from", "BBC backend not available", "MYOB, error 404 page not found", "Apple: 502 bad gateway", and "TradeMe: Oops there was a slip up" (with a picture of a cute kiwi which has fallen over).

Web sites often like to provide dynamic data, such as greeting you based on your actual name, to give that personal touch, but when I got "Hello $(notification. message. discussion. topicMessage. author, login)" on a Cisco site, I didn't really feel much better than if I had been greeted as something generic, like "user". Another example of this happened on the Act (a New Zealand political party) web site where I saw "With your help we were able to deliver onclick="window.open(...".
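That Cisco greeting looks like template residue: a placeholder that was never substituted with real data before being shown to the user. Python's standard library shows how easily this failure mode can be handled more gracefully (the variable names here are mine, not Cisco's):

```python
from string import Template

t = Template("Hello $name")
data = {}                                   # the lookup that came back empty
# substitute() raises KeyError on missing data; safe_substitute() at least
# fails visibly, and a fallback value hides the failure from the user.
broken = t.safe_substitute(data)            # -> "Hello $name"
fixed = t.substitute(name=data.get("name", "user"))  # -> "Hello user"
```

Greeting someone as "user" is boring, but it beats greeting them with your internal object graph.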

I sometimes use Apple's dictionary screensaver which displays random words and definitions when the screen sleeps. But sometimes things go wrong. I recently got the message "(entry not found)"; ironically, the word it was trying to find was "fraudulent"!

Sometimes there is no actual error, but in some ways you wish there was. I recently made the mistake of doing some Windows support and, when trying to download a user's email, I was informed the data transfer speed was "19 bytes/s." OK, I'll come back in a decade or two, and see if that finished OK!

But the ultimate failure is the old system crash on a public display. This almost always involves Windows, because that is almost certainly the least reliable OS around, but also because, for some odd reason, it is also widely used. So PowerPoint presentations often crash with error messages such as "the instruction at 0x6018cde1 referenced memory at 0x00000000 the memory couldn't be written". Not a good look, especially when the error stays there for several days.

Finally, it can happen to anybody, even a company which represents the epitome of fine engineering! I recently got this when visiting Porsche's site: "Dokument nicht gefunden." Even with my limited German the message was obvious! Et Tu Porsche?


Software Problems

2019-07-26. Computers. Rating 2. ID 1992.

How can we fix some of the problems we have with technology today? Specifically, the subject of this blog post is how the well known problems with computer software and internet services can be fixed, or at least improved.

So to start with, let me list the issues as I see them. First, most software is unreliable, unintuitive, and overpriced. Why is this? Well, it depends on the exact circumstances, but maybe the most common reason is the commercial and management pressures applied to the software developers and teams. To be fair, there are undoubtedly some simply incompetent programmers as well, but I'm fairly confident that's a lesser problem and not the one I am going to concentrate on here.

I talked about this general issue in several past posts, especially in relation to New Zealand's school payroll system "Novopay" which is still problematic many years after its initial deployment. For older posts on this topic check out "The 'E' Word" from 2014-02-27, "Another IT Debacle" from 2013-06-27, "Corporate Newspeak" from 2013-03-21, "Doomed to Failure" from 2012-12-20, and "Talentless Too, No Pay" from 2012-11-24. Yeah, this is obviously a favourite topic of mine!

So it's generally the corporate culture which is to blame for these disasters. In Novopay's case it was the idea that they could produce something more easily, and therefore make a much higher profit, by hacking a few extra layers on top of a hopelessly antiquated travesty, which might generously be described as an early payroll system. I'm fairly sure most competent programmers would have seen that was a bad idea, but they would have been overridden by the greedy and useless management.

Now, to be fair, I do have to admit that I am reading between the lines here, and basing this appraisal on the information which has been leaked, and my knowledge of how this process usually proceeds. But we are never going to hear the real story because it is just too embarrassing, so some degree of speculation is necessary.

There have been many other spectacular failures of software over the years, including a police records system here in New Zealand, the bad code causing the new Boeing 737 crashes, and today I heard that the Airbus A350 has to be rebooted every 149 hours or some of its systems will fail.
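I don't know the real cause of the A350 issue, but 149 hours is suspiciously close to what you get when a 32 bit counter ticking every 125 microseconds wraps around. Here is the arithmetic, offered purely as a guess at a plausible mechanism, not the confirmed cause:

```python
# A 32-bit unsigned counter overflows after 2**32 ticks. If each tick
# is 125 microseconds, that works out to almost exactly 149 hours.
ticks = 2**32
tick_seconds = 125e-6
hours_to_overflow = ticks * tick_seconds / 3600   # roughly 149.1 hours
print(f"Counter wraps after about {hours_to_overflow:.1f} hours")
```

Counter overflow bugs of exactly this shape have bitten avionics before, so it wouldn't surprise me.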

Boeing in particular should be utterly ashamed of themselves, because their terrible code reputedly was produced by cheap programmers from India. I'm not saying Indian people can't program - far from it - but those working in "sweat shops" designed to create the most code at the lowest price are unlikely to be the most talented people in that nation.

So, assuming this rumour is true, Boeing killed hundreds of people to save a bit of cash by outsourcing the programming to the cheapest bidder. Anyone could see that was a bad idea. Or should I say, anyone except the managers at Boeing. As I have said before: management is the most despicable, revolting profession on the planet. If they all dropped dead on the spot tomorrow the only disadvantage to society would be effort involved in ridding the world of their massed bodies! Note that this is a rhetorical point, and not a genuine wish for the death of anybody!

There are lesser problems too, but ones which affect a larger number of people. Almost all of the popular software and services we use today could use a lot of improvement. For example, Facebook is an abomination of crappy user interface design, slow and unreliable code, and utterly unfair policies and standards. And then there's my other favourite target: Microsoft Word. To call this mangled together abomination of poorly thought out functions a program is really, really generous. I admit, you can do most things with it, but only if you are prepared to use the most arcane user interface features, work around the arbitrary limitations, and find needlessly complex ways to do things which should be easy.

At this point you might be wondering why I am picking on what are by far the most widely used social network and word processor in the world. Well, popularity does not imply quality. In fact, the opposite is probably true, and I think I know why. As a product becomes more popular, a bigger and bigger bureaucracy grows up around its maintenance and development. This gets more managers involved in the process and... well, you know my opinion of managers!

So, what's the answer? Well, it's really simple actually. What we need is some open, public standards for information exchange in all the major categories we use. We could have one for word processing data, for example, and for exchanging posting and messaging data on the internet. Everyone would need to follow these standards, so if people didn't like Facebook from a technical or political perspective (for example, because of privacy) they could just use an alternative program which would have equal access to the underlying data.
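Here is the open-standard idea in miniature, sketched in Python with JSON standing in for a hypothetical posting format. Any client that can parse the standard has exactly the same access to the data as the service that wrote it:

```python
import json

# A hypothetical open posting format: the service serialises to an open,
# documented standard, so no client has privileged access to the data.
post = {"author": "ojb", "date": "2019-07-26", "body": "An example post"}
wire = json.dumps(post)        # what the originating service writes
received = json.loads(wire)   # what any competing client reads back
```

The format itself could be anything open and documented; the point is simply that whoever writes the data has no advantage over whoever reads it.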

So the files that Word created would need to comply with the standards, meaning that another software developer could access those files on an equal level to Microsoft. Plus, the underlying format could be made far more elegant than the hacked together travesty we currently have, which should increase speed and reliability. At the moment it is possible to read a Word file in a different program, but it rarely works perfectly because Microsoft has ultimate control over the file format, and has an unfair advantage.

And that would also mean that anyone could write a Facebook app which accessed the Facebook data stream in a far better form than Facebook currently does. And let's do that with Twitter too, because that is hopelessly obscure and difficult to use. Again, there are apps which do this already, but because Facebook and Twitter control the underlying form of the data they have an unfair advantage.

This might not even be disadvantageous to the Microsofts and Facebooks of the world either. If the protocols and software code were open source then any improvements would be available free to the whole community, including the big companies. And don't tell me they don't want that because a massive number of the big companies' existing systems use open source software, such as Linux, Apache, and MySQL.

It could be a win for everyone, especially the users - that's you! But will it happen? Well, probably not, because the people that make the decisions - those managers again (boo!) - rarely make innovative decisions of this type, no matter how obvious and advantageous they might be.

So I guess we are stuck with using sub-standard software. I can avoid some of it (I never use Word, for example) but some I sort of have to use, such as Facebook and Twitter, because that's where all my social media friends (and enemies) are. At this point, I'm still wondering whether I should avoid flying on an A350 or Boeing 737 MAX 8!


Use Your Mac Better

2019-05-29. Computers. Rating 1. ID 1982.

I'm an IT support person and part-time programmer, hardware installer, and server manager. So I see a wide range of issues that users have to contend with, and I see a lot of people who really aren't using their computers (or phones or tablets) very efficiently. And that is unfortunate, because I primarily support Apple products, and Apple do put more emphasis than most on usability.

Sometimes I am working on a user's computer and they just seem to be completely amazed at how quickly and easily I can get things done. It's almost as if the computer has read my mind and just done what I wanted through some sort of subliminal connection. I don't want to come across as being too self-congratulatory here, because I should fully expect to be good at using computers since I spend so much of my time in front of one, and I'm not as good as other people in other areas, but there is no doubt that I am pretty freakin' amazing using a Mac!

So I wouldn't expect a person who has other factors in their lives which are more important than computers to achieve the same competence as a person who uses one "all the time", but I can offer a few tips on how to get better and to at least get a bit closer to the skill level of an expert. And it's not actually that hard. There are just a few basic tips that could make people much better, and they only have to start using them one at a time so that they become automatic fairly quickly.

So, without further preamble, here is my list...

First, you should stop using the mouse when you are doing word processing (or text editing, or working with spreadsheets, or most other tasks primarily involving text and numbers). You should use the keyboard instead, because when typing, constantly switching from the keyboard to the mouse is both time consuming, and interrupts the basic modality of the interaction with the computer.

So if I am typing a sentence, like this one, and accidentally type a word incorrectly (my spelling is good, but my typing isn't so great when I try to type quickly) I don't use the mouse to go back and correct it. In fact, I typed the word "isn't" above with a semicolon instead of a quote (or apostrophe) so I needed to go back and correct it. But I only noticed the error when I got to the word "great".

So here's how I fixed that: I pressed option back-arrow three times (held down the option key and pressed back-arrow 3 times, then released the option key), pressed delete, typed the quote, and pressed option forward-arrow three times, and continued typing. That sounds complicated, but try it, and after a bit of practice it is super fast and super easy. The option back-arrow is a common Mac shortcut to go back one word (the back-arrow key by itself goes back one character, of course, but that is too slow).

After practicing this for a while the correction can be made so quickly that to many people it looks like the change has happened without any interaction by the user at all!

The option modifier works with the up and down arrow keys too, where it moves the cursor up and down by one paragraph at a time. So that's a quick way to get up and down the page without having to use the mouse. And most people can see quickly how many paragraphs to move without even thinking, so when you know you need to move (say) 3 paragraphs up just press option up-arrow 3 times quickly and the cursor will "magically" appear in the right place!

The command modifier is also useful for this sort of navigation. Press command left-arrow to move to the left margin, and command right-arrow to move to the right margin. Or use command up-arrow to move to the top of the document. Note that Microsoft Word gets this wrong and moves to the next paragraph instead, but you can use command function left-arrow instead (why, Microsoft?).

Unfortunately, non standard programs do make these tips slightly less useful, but if you are forced to use Microsoft products (I don't) you can still adapt fairly easily.

Finally, in the modifier plus arrow-key series there is the shift key. This modifies what you are doing by highlighting what you are moving over. Hold down shift and press back-arrow and you highlight the last character typed. Hold down shift and option, and press back-arrow and you highlight the last word typed. So if I typed a few words and wanted to change them I can really easily. For example, in that sentence I originally typed "some characters" instead of "a few words" so I pressed option shift back-arrow twice and just typed "a few words" instead. After using the arrow key combinations the words I wanted to delete were highlighted and just typing then replaced them.

This is also useful to style text after typing it. For example, if I wanted to type a sentence like "I can bold words" and have the word "bold" in bold I would do this: type "I can bold " (note the space after the word), press option back-arrow, press shift option forward-arrow, press command-B, press forward-arrow, continue typing. Again, it sounds hard but is automatic after a while, I sometimes don't even look at the screen because I just know it will work.

Here's why it works: option back-arrow goes back to the beginning of the previous word ("bold"), shift option forward-arrow highlights the word, command-B bolds it, and forward-arrow un-highlights the word and skips over it so I can continue typing. And remember that space? That was a sort of "barrier" to stop the bold "leaking" into the next word. Try it without the space and see what happens.

Of course, in this example, if I had remembered to bold the word as I was typing it I would have just typed "I can " command-B "bold" command-B " words". The first command-B turns on bolding and the second turns it off again.

The awesome thing is that the arrow key modifier combinations are easy when you think of them the right way. The option modifier moves by a unit (word or paragraph) instead of a single character. The command modifier moves by a bigger unit (paragraph, page, or whole document, depending on the app). The shift modifier highlights instead of simply moving. That's all you need to know, because the rest is just combinations of these. Of course, I hope you already know a few basic keyboard shortcuts, like command-B for bold!

So enough about text editing, how about some ideas for managing working in different apps more efficiently? Well, the one simple thing which surprisingly few people use, is command-tab. If you hold down the command key (just keep it down for this whole explanation), then press tab, you will see a list of icons for apps appear on the screen (the "app switcher") with the next one highlighted. Continue to hold down the command key and each time you press tab the next icon will be highlighted. If you go too far you can keep pressing tab until the selected icon loops around to the start again, or press back-tick (`) instead of tab to go back one. Release command to bring the selected app to the front.

You can also quit and hide apps this way. When you reach an app (while still holding down command), press H to hide it or Q to quit it. To quit the next three apps, hold down command, press tab Q Q Q `, then release command. Note the final back-tick which brings you back to the app you started with. Also note that you must be already running 4 apps for this to work, and that one app (the Finder) cannot be quit!

There is one clever design feature of this system which you can use too: as you use apps, they get moved to the left of the app icon list. So you always know that the app you are currently using will be on the left, and the app you used just before that will be next to it. So to switch to the app you used before moving to the current one, just press command-tab (hold down command, press tab, release command immediately). The previous app will come to the front without you even seeing the app switcher icons.

Finally, here's a hint for keeping your screen decluttered while working with many apps. Sometimes you might like to only have the windows for the current app you are working in shown. For example, if you are word processing you might not want to have any distractions from a browser window showing an animated ad or something similar. Other times you might have finished using an app for a while and want to have its windows hidden while you work somewhere else. For example, you might have finished with a game for a while and want to do some work!

One way to solve this is to work in full screen mode, but that is often not helpful when you do want more than one app visible. So I suggest learning a couple of keyboard shortcuts instead. First, use command-H to hide the window or windows belonging to the current program. So if you are finished with that game, but might want to continue later, just press command-H. That will hide the app without saving, deleting, or closing any windows. Second, use command-option-H to hide the windows of every app *except* the current one. So if you are in a word processor and want all other distractions hidden, press command-option-H to hide every other app.

You can re-display any hidden app by just clicking on its icon in the dock, but you will use the command-tab trick I talked about above instead, of course!

Now, let's put it all together. Imagine you have found a useful page in Safari (or another web browser) and want to insert that link at the bottom of a word processing document, and you have just switched back from Safari and are currently editing the word processor document. Try this: command-tab command-L command-C command-H command-down-arrow command-V. I'll leave it as a project for you to figure out why that worked - or didn't, if you got any of the steps a bit wrong!

So there are a few basic tricks for using your Mac more efficiently. Just try one at a time, and when that one becomes second nature, move on to the next one. Soon you'll be just as good as me. Well, maybe not that good!

View Details and Comments

Management Types

2019-03-10. Computers. Rating 3. ID 1970.

Google's original catch-phrase, and part of their code of conduct, was "don't be evil". Well, that idea certainly seems to have been thrown out now that the company has transitioned into a highly successful corporation. What a joke it has become, and in fact even Google must realise it, because the phrase was officially dropped last year.

I really think they thought that Google was going to be different and that "don't be evil" actually meant something. Here is part of the original code of conduct: "Googlers generally apply those words to how we serve our users. But 'Don't be evil' is much more than that. Yes, it's about providing our users unbiased access to information, focusing on their needs and giving them the best products and services that we can. But it's also about doing the right thing more generally - following the law, acting honorably, and treating co-workers with courtesy and respect."

Look at that, and it's pretty obvious why they couldn't honestly keep it. Here's an excerpt from the current code: "The Google Code of Conduct is one of the ways we put Google's values into practice. It's built around the recognition that everything we do in connection with our work at Google will be, and should be, measured against the highest possible standards of ethical business conduct."

See the difference? From a simple wish to do something good for customers and to follow the highest standards of pure ethics, they have transitioned to merely striving for the highest standard of "business ethics".

I'm sure many business people (especially those in big corporations) really do believe that business ethics is a thing, but many people would say it is an oxymoron of the same type as "military intelligence" and "female logic". Yes, I know those two examples are an attempt at humour, but I think "business ethics" actually is something which might not really exist, at least if the business is going to be successful in the conventional way, and if ethics is defined in any recognised way.

So it really seems to me that Google has gone down the same route as many other companies which started off being idealistic and, as they became more successful, just went down the same old path of throwing out ethics and trying to maximise profits using conventional business practices.

For example, the original code contained this: "providing our users unbiased access to information". Does anyone really think Google still provides unbiased information? Clearly it doesn't, because the results of Google Search (still the product they are best known for) are biased in three ways: first, they reflect the biases the Google algorithm learns from the user; second, they filter out material which Google considers unacceptable; and third, they inject ads which are often not particularly relevant, and are clearly intended to make Google more profit rather than provide the user with better information.

For example, I recently wanted to buy some tickets for a concert, so I Googled the name of the internet service I usually buy tickets from. At a quick glance it looked like it came up first in the search list so I clicked the link and started to find the tickets. But after a short period I realised the site wasn't what I was expecting and was actually the rather controversial site Viagogo, where many people are often scammed. I should have checked for the "ad" icon, but I missed it that time, and many people don't ever look for it. A friend did buy tickets there, and she is now not sure whether they are even genuine.

Is this providing the best, unbiased information, or is it just making a quick, easy profit with little regard for the overall quality of experience? And is this what Google means by "business ethics"?

I know that Google Search is a really good service and I have tried alternatives, like DuckDuckGo, which are OK but really just don't give the same quality of results. And I understand that a free service needs to be paid for some way, but I think Google could do a better job of optimising the quality for the user, even if that means that the pursuit of profit isn't quite so high in their list of priorities.

Google do provide some other really good services and products too, like Android, YouTube, and Google Chrome (now the most widely used web browser by a significant margin). But can they really claim responsibility for these creations? No. In every case they are either based on existing, usually free, technology, or they were created by someone else and acquired by Google.

Android is built on top of the open source OS Linux, YouTube was created and made popular by another company before Google bought it, and Chrome is based on an existing free rendering engine.

So what has Google really done? They created an excellent search engine. In fact, even that isn't true, because Google search was created in 1998 by Larry Page and Sergey Brin as a research project when they were both PhD students at Stanford University.

So are there any real examples of Google being genuinely creative and innovative? They have acquired a lot of really good basic systems, tidied them up a bit, and applied a commercial model to them to make the company incredibly rich. Plus they have applied their own extreme political ideology to filter what people can and can't see. Is this an example of them being not evil? I don't think so.

It might seem that I have been unfairly criticising Google here, and that point is partly true, but every other major technology corporation has done exactly the same thing. That includes Microsoft, Apple, and Facebook. In fact, Facebook is probably the single most objectionable technology company on the planet right now. I have never seen so much success generated by such a mediocre product and such a truly immoral business model.

Now at this stage I do have to admit that I use Google, Apple, Facebook, and even Microsoft products every day. Does that make me really hypocritical? Well, in some ways it does, but I don't use those products through choice; I use them because I have to.

To be fair, Google and Apple do make really good products, and I do choose to use them despite their questionable business practices. But Microsoft and Facebook make such junk that I would never use them if I had a realistic choice. But if I want to be part of the computer community today (and I do, because I am an IT consultant) I have no viable alternative.

Where did things go wrong? Well, I think there is a basic flaw in corporate culture where companies which become successful through clever engineering and genuine innovation are eventually taken over by professional management types. It happened with Google when their CEO, Sundar Pichai, who has an MBA (never a good sign), took over, and it happened when Tim Cook (another MBA type) took over from Steve Jobs at Apple.

Whether this more "professional" and "business-oriented" style is a good or bad thing could be debated. Some would say that to create great products and to truly innovate a company must be financially successful and well organised. Well maybe, but if you are going to take that path don't even pretend that your prime motivation is to create great products and services for your customers. And forget about meaningless statements like "don't be evil". They're not relevant when a company is taken over by the management types.

View Details and Comments

Diminishing Returns

2018-11-16. Computers. Rating 2. ID 1947.

There's a classic law in economics called the law of diminishing returns. It describes how increasing investment in one area doesn't always lead to a proportional increase in output unless other areas are also enhanced (and often, not even then). For example, adding more workers to a production line won't result in an equivalent number of extra items being built unless other factors are also increased, such as the number of machines, the supply of raw materials, etc. Another example might be marketing. Doubling a marketing budget might result in twice as many advertisements being produced, but probably not twice as many sales, because consumers react negatively to too much advertising.
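The idea is easy to see with some numbers. Here is a minimal Python sketch (the production function and figures are invented purely for illustration, not real data) using a square-root production function, where each extra worker adds less output than the one before:

```python
import math

def output(workers: int) -> float:
    """Toy production function: output grows with the square root
    of the number of workers, so marginal output diminishes."""
    return 100 * math.sqrt(workers)

# Marginal output of each additional worker shrinks:
for w in range(1, 5):
    marginal = output(w) - output(w - 1)
    print(f"worker {w}: adds {marginal:.1f} units")

# Doubling workers from 4 to 8 does not double output:
print(output(8) / output(4))  # ratio is sqrt(2), about 1.41, not 2
```

The same shape of curve fits the car and phone comparisons below: the input (money) doubles, but the measurable result improves by much less.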

The same principle applies in other areas. If you spend twice as much on a car, for example, it probably won't be twice as fast, or twice as reliable. My car cost NZ$13,000 and can do 0 to 100 kph in 5.7 seconds. To launch twice as fast I would need to spend a million dollars (or more) on a Porsche 918 or a Lamborghini. Definitely something like the law of diminishing returns at work there, I think!

And now we get to the point of this post, because the same applies to consumer electronics.

My regular readers will know that I recently bought an Apple iPhone XS for just over NZ$2000. Some say that is a ridiculous amount to spend on a phone, and those people might be similarly perplexed if they knew about all the other expensive Apple equipment I own. In fact, what I have spent on some other electronic equipment could also be questioned, to a lesser extent.

In fact, most of Apple's new products are more expensive than many people had hoped for, and there is some debate amongst the Apple user community about whether they are now too expensive. I think it would have been better overall if the price of new devices had been kept the same as the old ones they replaced - a sort of informal policy Apple has had in the past - but does even that represent a fair price?

A superficial look at the Apple devices' features compared with similar products from other manufacturers costing half as much might lead to the question: why spend that much? Well, there are many reasons why spending more makes sense, and a similar number of reasons why it doesn't. In fact, the person who makes the decision to pay the price premium and buy Apple products, and the person who decides to buy a product at half the price, both have justifications which make sense.

Actually, to be more correct, most people don't make purchasing decisions based on sound evidence and logic, but a good case could be made even if it often isn't. As an aside, this is an important point in economics generally, because classical economics assumes people make rational purchasing decisions, whereas some newer approaches, such as behavioural economics, point out that this is a false assumption. That makes a lot of classical economics invalid. No surprises there! But that is another blog post, so let's continue with the main theme today...

In most cases a $2000 phone will not be twice as good as a $1000 phone, just because of this version of the law of diminishing returns. But it will almost always be better, even if not proportionally so. I want to give an example of where hidden benefits exist in the more expensive Apple products by comparing the technology which unlocks the phone.

Apple wanted to have the screen on the new phones extend right to the edge, so there was no room for physical buttons on the front of the phone. That meant that the technology used by the models from the iPhone 5S to the iPhone 8 could not be used for unlocking - that was a fingerprint sensor on the home button on the front of the phone on those models.

How have other manufacturers - who also want to extend the screen - solved this problem? Well some have put the fingerprint sensor on the back instead. Others have used a simple face identification system which recognises the user using the front camera. And others have put a fingerprint sensor on the screen itself. How good are these solutions? Well they are OK, but that is all, because all have serious flaws.

What did Apple do? They looked at those options, but in the end they decided to do it properly. The iPhone X models all have no fewer than 8 miniaturised sensors and transducers in less than 1 square centimetre in the "notch" at the top of the phone. Some of these are used by the face recognition technology Apple chose, which uses a dot projector and an infrared camera to do accurate recognition of the user.

Superficially this looks similar to other phones, but the details are where it matters. Apple doesn't tend to talk a lot about technical details, just the end result. This means that other unlocking systems look the same to many users, but they aren't. The Apple system is fast and reliable. They have turned a weakness (the lack of space for a front home button and its associated sensor) into a strength (a much better unlocking system).

I should admit here that I am writing this based on reviews of other products in general, and there might be some very good unlocking systems out there that I don't know about. However, the same principle applies to many systems in the iPhone and I think it is unlikely that other phones could match all of them unless they cost a similar amount to an iPhone anyway (Samsung's top phone costs almost as much as Apple's).

Other aspects of Apple products which might not be immediately obvious but add to their value include: top quality materials, long useful life, good operating system upgradability, careful attention to trade-offs between processing power and battery life, excellent interoperability between products, and the best attention paid to security, privacy, and ease of use concerns.

So is an iPhone worth twice as much as a good Android phone with similar features, or is a Mac worth more than a similarly configured PC, or is an iPad worth a lot more than an Android tablet? Well that depends on how much you use your device, how much you want your interactions with technology to be smooth and trouble free, and how much you just appreciate beautiful design (I don't just mean physical appearance here, I mean the complete experience of using the device).

The law of diminishing returns says that it is unlikely that doubling what you pay will get you something which is twice as good based on simple specifications or features, but looking a bit more deeply, maybe it's not such a bad deal after all, at least for those users who can truly appreciate the differences.

View Details and Comments

A Brilliant Phone

2018-11-05. Computers. Rating 1. ID 1945.

I have been the proud owner of an iPhone XS for about a week now. A week is long enough to have reached the point of appreciating the strengths and having been annoyed by a few weaknesses in the design of a new device, so what are my impressions?

OK, let's get the obvious criticism out of the way first: the phone is too expensive. I am paying mine off over three years and have found a cheaper cell plan, so it's not too painful for me, but NZ$2200 for a phone is difficult to justify. Or maybe the fact that other models of the same phone cost NZ$2800 makes it look OK!

I have three other criticisms I need to cover here before I go on to the positives...

First, the screen does show some off-axis colour shift, which is annoying at first. I have already got used to it, but for a phone this expensive it is disappointing. I know that this is a characteristic of OLED screens, and that LCD screens show a similar phenomenon, except all colours are affected evenly, but maybe that is something for the OLED screen manufacturers to work on.

Second, the 3.5 mm headphone socket is gone, meaning I have to use the digital lightning connector instead. This is fine except when I want to plug the 'phones into my Mac or iPad. They don't work. So I now need two sets: one for the iPhone and another for the computer and iPad. Not a disaster, but still a small annoyance.

Third, there is something about the materials the phone is made from which makes it want to slowly slip off things and fall on the floor. The glass finish is beautiful, water resistant, and quite robust, but if I sit it on the edge of something like a couch which is not a flat surface it always seems to randomly fall off a few minutes later. Maybe it's the vibration of notifications which does it.

So, with those three (four if you include price) criticisms out of the way, what are the positives?

Well first, it's beautiful. The design is superbly elegant and simple, it's a bit shiny but not too much, and the build quality and materials are superb. This is the epitome of Apple design, where less is more.

Second, it's fast. I mean really fast. The power Apple are packing into these devices now is amazing, and it is at least a generation ahead of any comparable product on the planet. My previous phone was an iPhone 6S Plus, so I have moved ahead a few iterations in processor design, and this is a big boost in performance.

Third, the new interface works really well. The face ID is amazingly accurate. It recognises me in all sorts of light and different conditions, and it has never mistaken anyone else for me. And it works super quickly. It's like having no lock at all, except no one else can use the phone! And the new swipe gestures are much better than a home button once you get used to them, plus the worry of wearing out the home button (a common issue with old iPhones) is gone.

Fourth, the cameras are really good. I am a photography enthusiast and I know that, at the extremes, I still need a dSLR, but the iPhone takes great photos. The twin lens system works brilliantly, the exposure and focus are near faultless, the camera is super fast, and the digital processing works well (I have applied the 12.1 update to fix the front camera problem).

Fifth, the screen. OK, I admit I was skeptical about OLED to begin with, but this isn't a screen, it's a layer of reality on the front of a phone. It's more like a super high quality printed page on ultra-fine paper than a screen. It really is that good... as long as you don't have the phone tilted too much, because there is still the issue of the off-axis colour I mentioned above.

Sixth, it is more waterproof (and dust-proof) than earlier phones, and that seems real based on actual use. Water droplets seem to be repelled from the phone and it's easy to dry off if it gets wet. I haven't been brave enough to try any full immersion tests myself but other users report it survives these fine. But I do feel confident using it as a radio when in the shower!

Finally, there are other highlights too. The sound is better and louder. The screen is bigger than my previous phone, even though the phone itself is noticeably smaller (the screen goes to the edges now). The battery life is good. Obviously I have used it a lot since I got it, including syncing many Gigabytes of data, and the battery still has plenty of life at the end of the day (I always recharge my devices overnight). And the storage options are good. I decided 256G would be enough so didn't get the top capacity of 512G which adds to the cost, of course.

As I finish this post I have remembered a couple of other small issues. The first is that the phone supports wireless charging but Apple doesn't give you a wireless charger. And it would be nice if they included the lightning to 3.5 mm adapter still, too. I mean, they're charging a fortune for this phone so a few little added extras would be appreciated.

Overall, this is a brilliant phone. Sure, if I had already owned an iPhone X I wouldn't have bothered with the update, and if I had an 8, or maybe a 7, I might not have done it either. But coming from a 6S Plus, which is really the lowest level phone which is still useful, it is a major improvement.

Whether it is really worth spending enough to buy a cheap used car is debatable. I guess I would need to spend about the same to get a really good TV or amplifier or camera, so it actually isn't so bad. Maybe the fact that smartphones are so small disguises how amazing they really are. In fact all of that technology in a package so compact is one of the great achievements of modern consumer technology, maybe even the single greatest achievement.

So, NZ$2200 for a phone. Is it worth it? Maybe not, but it might be worth paying that for a true marvel of multi-purpose, miniaturised, modern electronic design, and that's what an iPhone really is.

View Details and Comments

A Virtual World

2018-10-18. Computers. Rating 2. ID 1941.

There's always a "next big thing" in the field of consumer computing (by that I mean computers used by "normal" people). First it was hobby computers (this started in the 70s), then it was useful and relatively easy to use personal computers, then laptops, then smartphones, then tablets, and finally smart watches. So what's next?

It has got to the point now where computers are really functional, and there is not a lot more that most users could really ask for. The same applies to smartphones. Sure, there are improvements in the latest phones, but I'm using an iPhone which is several generations old, and it is still quite functional (although I will probably upgrade this year). And the same applies to my iPad which is also a few years old but still really useful. My Apple Watch (which is the original generation) is a useful device but needs an upgrade. But smart watches aren't improving a lot from one generation to the next.

So it seems that we can't expect much new in existing, widely used computer technologies. If that's true, what is the next big thing? Well, the internet of things is one candidate, and that will be significant, but only if ease of setup, security, and compatibility can be improved (come on Apple, you made a good start here, but it feels a bit neglected now). But the more likely really big next thing (as you might have guessed from the title of this post) is virtual reality, including augmented reality.

There's a lot of data out there today, but it is often difficult to access. And the type of information available now is sometimes not well suited to being accessed through conventional user interfaces. Plus there are lots of experiences which computer users want but which are impractical now.

It seems that VR and AR might be good ways to resolve these issues.

Imagine a "Minority Report" type user interface (except a well-designed, logical one, not the silliness we see in mainstream movies like that). That is, visualising data as a three dimensional object projected onto the real world, which can be manipulated using natural gestures. So to compare sales figures of different smartphones the user just drags "boxes" of data labelled "Apple", "Samsung", etc, onto a machine labelled "graph". The graph projects like a movie from the graph box, and the user can change it by "touching" the projected graph. It can be zoomed, rotated, and so on, that way. If another type of phone is needed, the user just "throws" its box into the graph machine.

This type of AR is relatively easy (although still there are big challenges, obviously) and matches the sort of model we have now on computers. The boxes of data are files. The machine is an app. And the gestures are user interface controls. It seems to me that this could be done without having to re-work too much of what we already have, and users would recognise the metaphor. In fact, it would be a bit like reversing the metaphor, because the AR interface would be a lot like the real world which the conventional computer desktop was based on.

Now add thought control. This may sound a bit futuristic, but it is realistic using existing technology. The biggest problem might be collecting the signals without implanting probes into the user's brain - this sounds like something the average user might find too intrusive!

But if the sensors can be made sensitive enough then thinking about the data can manipulate it. The person can test ideas on real world data by just thinking about how it might act. Obviously this might involve quite a bit of training for the computer to understand how the human thinks, but the basics of this already exist.

It has been known for some time that sufficiently detailed and accurate projections of the real world can be subconsciously interpreted by the observer as being real. People who watch high resolution, high frame rate movies, using technologies like IMAX, feel like they are really there, and can suffer from motion sickness. Virtual reality headsets are even more immersive because looking in different directions gives a realistic depiction of real 3D space.

It seems apparent that VR headset experiences are already very compelling. For example, people report finding it almost impossible to force themselves to walk out onto a (virtual) narrow plank between two tall (virtual) buildings. As the technology continues to improve there might come a point where VR is practically indistinguishable from reality. In fact, using various augmentations, it might be "more real than reality".

Given the current state of this technology, and the seemingly obvious idea that it will progress greatly over the next couple of decades, who could deny the real possibility that people will want to live entirely in a virtual world? In the past, science fiction stories have depicted dystopian worlds where people sit in tiny cubicles for their whole life and only interact with others, or with the world, through virtual reality. It's hard to see how this won't become real.

As I said above, these ideas tend to be presented in dystopian terms, but is that fair? Well, to us today, it seems that way, but many people from 100 years ago would think that the way we live now - sitting for hours in little cubicles (our offices) looking at screens (computers), then driving home in little mobile boxes (cars) and sitting in other boxes (our lounges) watching somewhat bigger screens (TVs) - is dystopian from their perspective too. Yet this is a lifestyle many people participate in quite freely.

In fact, living in a VR might be the great equaliser. It costs no more to go on a virtual holiday to Italy than it does to take a trip down to the local supermarket. It costs no more to drive a Lamborghini than it does to drive a Toyota Corolla. Everyone could live a millionaire lifestyle without having to pay for it, and without imposing too much damage to the environment, because there would be no need to travel and no need for real world items to be manufactured from scarce resources.

If we were offered this experience today we might not want to take it, because the technology isn't quite good enough yet, and we would feel that we were getting a "second class" experience compared to the rich people who could afford the real thing. But in the future, when the virtual world is better than the real one, and everyone - no matter how rich - lives in it, what would be the justification for evaluating it as second best?

This might sound like a nightmare from a social perspective. If people interact with the virtual world instead of each other, how is that good? Well, I think people will still interact with others, but it will be through a virtual world interface. It will seem in every way as if the other person is there without them actually physically being present. Even today, many people choose to interact with others far more through messaging and social services than they do through real life.

But now things get even more concerning, because why would you interact with a real person when you could interact with an artificial intelligence which is indistinguishable from a real person (except possibly being far better)? And at this point we really do get back to the dystopian science fiction theme, because a person's total existence could be completely fabricated inside a VR system.

You might say the universe wouldn't be real, because it would be "just" a huge computer simulation. Which, of course, gets back to that other favourite topic of mine: how do we know that our current reality isn't a simulation? The philosopher Nick Bostrom, and some other pretty smart people, point out that it is in some ways quite sensible to think that we do live in a simulation already (see my blog post "Life's Just a Game" from 2016-07-06).

And here's the key question: if we do live in a simulation, should we want out? Given that everything we care about (our friends and family, our work, our interests) might exist inside a simulation, what is the value in leaving it? Is it likely that the "real world" outside is any better? That seems unlikely.

And that might be the ultimate proof that living in a simulation isn't as bad as we assume at first. If we didn't know we were in one, then found out that we were, does that make our previous existence less valuable in some way? And would we want to give up everything we have now to "escape"? I don't think so, which seems to show that really good VRs are just as good as, or better than, reality.

So maybe we shouldn't be as scared of the idea as our initial reaction suggests. Maybe we should welcome the arrival of really good VR. Or maybe we should admit we've been in it all along!

View Details and Comments

Facebook is Watching You!

2018-05-14. Computers. Rating 2. ID 1914.

Recently I downloaded my Facebook data, just to find out what sort of information was stored about me there. I am not a super-heavy user of Facebook, but I do spend some time there multiple times every day, and I occasionally get into some fairly massive "debates" on various topics there, which use up a fair bit of time.

I also visit other social sites every day, such as Twitter and Quora. But, given the recent publicity regarding Facebook, I was most concerned about that. Actually, when I consider all of the "social" sites I use, it is surprising I ever get anything useful done!

I am aware of what Facebook is doing. I know that if I seem to be getting a service for free then there is a "payment" being made in some other way. And that is fair enough, because we live in a capitalist society, and despite its many obvious flaws - most of which I have complained about on numerous occasions - it works moderately well most of the time, and really well on a few occasions.

But Facebook is not one of the examples of where it has worked really well. It is not a good service, for both technical and political/business reasons. It is not good technically because it is poorly designed, and sometimes slow and unreliable. And it is not good politically because the algorithms it uses are primarily designed to make you want to use it more, and to generate more income for the company, rather than to provide a genuinely useful service.

So, given all this negativity, why do I use it? For the same reason I use all the other mediocre services and products (eBay, TradeMe, Microsoft Word, etc): because that's what everyone else uses. That is literally the only reason I use it at all. Whenever I sign up to other similar services (Google+, Ello, Path, etc) I just don't use them for long, because my friends and family aren't there.

So I know Facebook is spying on me, and often in very subtle ways. But there was a recent example of something a lot more obvious. I was visiting a friend and he mentioned a new, relatively obscure style of wine I had never tried. So I Googled it on my phone while I was there. And yes, you guessed it: the next day an ad for that exact style appeared in my Facebook feed. I should say that a whole pile of things I wanted to see from friends didn't appear, but this ad did. Thanks Facebook!

Another problem with Facebook is that it serves as a well-known, public repository of potentially contentious opinions you might hold. The fear of a future employer trolling your public Facebook feed, or even demanding your password to examine its contents, is well known. My opinion on this is that any employer prepared to resort to such offensive tactics isn't worth working for anyway, and if they find something I have said publicly that they don't like, the same applies.

But managing this stuff is really quite easy. People just have to remember a few basic rules...

Number one. Work under the assumption that nothing you do on the internet, and especially in Facebook, is confidential. Assume everyone can see everything. If you want to make an anonymous comment use proxies, fake accounts, etc. Any half-decent computer geek can show you how.

Number two. Don't use any of the advertisers that are fed to you. If you see an ad in Facebook, or in a Google search, or anywhere else you haven't asked for it, ignore it. In fact, mark that particular company down when considering who to use in future. I'm not saying boycott every company that advertises on Facebook, but I am saying search using other techniques and choose companies that way. And if you use Google, do it anonymously and be less trustful of links marked with "Ad".

Number three. Don't take any of this too seriously. Most people and most companies aren't really all that interested in you apart from as a potential target for advertising and possible sales. So take notice of the first two rules but don't let it paralyse your use of the internet. Whatever its faults, it is still one of humanity's greatest achievements.

View Details and Comments

Utopia or a Dystopia?

2018-02-05. Computers. Rating 3. ID 1898.

I have been interested in artificial intelligence for years, without being too deeply involved in it, and it seemed that until recently there was just one disappointment after another from this potentially revolutionary area of technology. But now it seems that almost every day there is some fascinating, exciting, and often worrying news about the latest developments in the area.

One recent item which might be more significant than it seems initially is the latest iteration of AlphaGo, Google's Go playing AI. I wrote about AlphaGo in a post "Sadness and Beauty" from 2016-03-16, after it beat the world champion at Go, a game many people thought a computer could never master.

Now AlphaGo Zero has beaten AlphaGo by 100 games to zero. But the significant thing here is not about an incremental improvement, it is about a change in the way the "Zero" version works. The zero in the name stands for zero human input, because the system learned how to win at Go entirely by itself. The only original input was the rules of the game.
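The idea of learning a game from nothing but its rules can be sketched at toy scale. AlphaGo Zero itself combines deep neural networks with Monte Carlo tree search; the sketch below instead uses a simple tabular value estimate and the trivial game of Nim (take 1-3 stones per turn, whoever takes the last stone wins), so all the names and parameters here are my own invention for illustration. But the principle is the same: the only input is the rules, and the program improves purely by playing against itself.

```python
import random

def train(n_stones=12, episodes=30000, epsilon=0.2, seed=1):
    """Self-play training: both 'players' share one value table."""
    rng = random.Random(seed)
    q = {}       # (stones_left, take) -> average return for the player to move
    counts = {}  # visit counts, for running averages
    actions = (1, 2, 3)
    for _ in range(episodes):
        stones = n_stones
        history = []  # (state, action) for each ply of the game
        while stones > 0:
            legal = [a for a in actions if a <= stones]
            if rng.random() < epsilon:
                a = rng.choice(legal)  # occasionally explore a random move
            else:
                a = max(legal, key=lambda x: q.get((stones, x), 0.0))
            history.append((stones, a))
            stones -= a
        # The player who took the last stone won. Propagate the result
        # back through the game, flipping sign at each ply because the
        # players alternate.
        reward = 1.0
        for state, action in reversed(history):
            n = counts.get((state, action), 0) + 1
            counts[(state, action)] = n
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + (reward - old) / n  # running average
            reward = -reward
    return q

def best_move(q, stones, actions=(1, 2, 3)):
    legal = [a for a in actions if a <= stones]
    return max(legal, key=lambda a: q.get((stones, a), 0.0))

q = train()
# Optimal Nim play is to leave the opponent a multiple of four stones,
# so from 5, 6 and 7 stones the best moves are to take 1, 2 and 3.
print(best_move(q, 5), best_move(q, 6), best_move(q, 7))
```

After enough self-play the agent "rediscovers" the classic Nim strategy of leaving a multiple of four, even though that strategy was never programmed in, which is a miniature version of AlphaGo Zero re-creating the classic Go moves.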

While learning winning strategies AlphaGo Zero "re-created" many of the classic moves humans had already discovered over the last few thousand years, but it went further than this and created new moves which had never been seen before. As I said in my previous post on this subject, the original AlphaGo was already probably better than any human, but the new version seems to be completely superior to even that.

And the truly scary thing is that AlphaGo Zero did all this in such a short period of time. I haven't heard what the time period actually was, but judging by the dates of news releases, etc, it was probably just days or weeks. So in this time a single AI has learned far more about a game than millions of humans have in thousands of years. That's scary.

Remember that AlphaGo Zero was created by programmers at Alphabet's Google DeepMind in London. But in no way did the programmers write a Go playing program. They wrote a program that could learn how to play Go. You could say they had no more input into the program's success than a parent does into the success of a child whom they abandon at birth. It is sort of like supplying the genetics but not the training.

You might wonder why Alphabet (Google's parent company) has spent so much time and money creating a system which plays an obscure game. Well the point, of course, is to create techniques which can be used in more general and practical situations. There is some debate amongst experts at the moment about how easily these techniques could be used to create a general intelligence (one which can teach itself anything, instead of just a specific skill) but even if it only works for specific skills it is still very significant.

There are many other areas where specialised intelligence by AIs has exceeded humans. For example, at CERN (the European nuclear research organisation) they are using AI to detect particles, labs are developing AIs which are better than humans at finding the early signs of cancer, and AIs are now good at detecting bombs at airports.

So even if a human level general intelligence is still a significant time away, these specialised systems are very good already, even at this relatively early time in their development. It's difficult to predict how quickly this technology might advance, because there is one development which would make a revolutionary rather than evolutionary change: that is an AI capable of designing AIs - you might call this a meta-AI.

If that happens then all bets are off.

Remember that an AI isn't anything physical, because it is just a program. In every meaningful way creating an AI program is just like playing a game of Go. It is about making decisions and creating new "moves" in an abstract world. It's true that the program requires computer hardware to run on, but once the hardware reaches a reasonable standard of power that is no more important than the Go board is to how games proceed. It limits what can be done in some ways, but the most interesting stuff is happening at a higher level.

If AlphaGo Zero can learn more in a week than every human who ever played Go could learn in thousands of years, then imagine how much progress a programming AI could make compared with every computer scientist and programmer who ever existed. There could be new systems which are orders of magnitude better developed in weeks. Then they could create the next generation which is also orders of magnitude better. The process would literally be out of control. It would be like artificial evolution running a trillion times faster than the natural version, because the generation time is so short and the "mutations" are planned rather than being random.
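To see why this would feel out of control, consider a toy compounding calculation. The ten-times-per-generation figure and the one-week generation time below are purely illustrative assumptions, not predictions:

```python
# Purely illustrative: assume each AI generation is 10x better than the
# last, and a new generation takes one week to produce.
factor_per_generation = 10
generations_per_year = 52
improvement = factor_per_generation ** generations_per_year
print(f"After one year: a factor of 10^{generations_per_year}, "
      f"i.e. about {improvement:.1e}")
```

Obviously no real process compounds this cleanly, but it shows how even a modest per-generation gain, combined with a very short generation time, would dwarf decades of human-paced progress.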

When I discussed the speed that AlphaGo Zero had shown when it created the new moves, I used the word "scary", because it literally is. If that same ability existed for creating new AIs then we should be scared, because it will be almost impossible to control. And once super-human intelligence exists it will be very difficult to reverse. You might think something like, "just turn off the computer", but how many backups of itself will exist by then? Simple computer viruses are really difficult to eliminate from a network, so imagine how much more difficult a super-intelligent "virus" would be to remove.

Where that leaves humans, I don't know for sure. I said in a previous post that humans will be redundant, but now I'm not totally sure that is true. Maybe there will be a niche for us, at least temporarily, or maybe humans and machines will merge in some way. Experts disagree on how much of a threat AI really is. Some predict a "doomsday" where human existence is fundamentally threatened while others predict a bright future for us, free from the tedious tasks which machines can do better, and where we can pursue the activities we *want* to do rather than what we *have* to do.

Will it be a utopia or a dystopia? No one knows. All we know is that the world will never be the same again.

View Details and Comments

Random Clicking

2018-01-14. Computers. Rating 3. ID 1893.

Nowadays, most people need to access information through computers, especially through web sites. Many people find the process involved quite challenging, and this isn't necessarily restricted to older people who aren't "digital natives", or to people with no interest in, or predisposition towards, technology.

In fact, I have found that many young people find some web interfaces bizarre and unintuitive. For example, my daughter (in her early 20s) thinks Facebook is badly designed and often navigates using "random clicking". And I am a computer programmer with decades of experience but even I find some programs and some web sites completely devoid of any logical design, and I sometimes revert to the good old "random clicking" too!

For example, I received an email notification from Inland Revenue last week and was asked to look at a document on their web site. It should have taken 30 seconds but it took closer to 30 minutes and I only found the document using RC (random clicking).

Before I go further, let me describe RC. You might be presented with a web site or program/app interface and you want to do something. There might be no obvious way to get to where you want to go, or you might take the obvious route only to find it doesn't go where you expected. Or, of course, you might get a random error message like "page not available" or "internal server error", or even the dreaded "this app has quit unexpectedly", the blue screen of death, or the spinning activity wheel.

So to make progress it is necessary just to do some RC on different elements, even if they make no sense, until you find what you are looking for. Or in more extreme cases you might even need to "hack" the system by entering deliberately fake information, changing a URL, etc.

What's going on here? Surely the people involved with creating major web sites and widely used apps know what they are doing, don't they? After all, many of these are the creations of large corporations with virtually unlimited resources and budgets. Why are there so many problems?

Well, there are two explanations: first, errors do happen occasionally, no matter how competent the organisation involved is, and because we use these major sites and apps so often we tend to see their errors more often too; and second, large corporations create stuff through a highly bureaucratic and obscure process, and consistency and attention to detail are difficult to attain under such a scheme.

When I encounter errors, especially on web sites, I like to keep a record of them by taking a screenshot. I keep these in a folder to make me feel better if I make an error on any of my own projects, because it reminds me that sites created by organisations with a hundred programmers and huge budgets often have more problems than those created by a single programmer with no budget.

So here are some of the sites I currently have in my errors folder...

APN (couldn't complete your request due to an unexpected error - they're the worst type!)
Apple (oops! an error occurred - helpful)
Audible (we see you are going to x, would you rather go to x?)
Aurora (trying to get an aurora prediction, just got a "cannot connect to database")
BankLink (page not found, oh well I didn't really want to do my tax return anyway)
BBC (the world's most trusted news source, but not the most trusted site)
CNet (one of the leading computer news sources, until it fails)
DCC (local body sites can be useful - when they work)
Facebook (a diabolical nightmare of bad design, slowness, and bugginess)
Herald (NZ's major newspaper, but their site generates lots of errors)
InternetNZ (even Internet NZ has errors on their site)
IRD (Inland Revenue has a few good features, but their web site is terrible overall)
Medtech (yeah, good luck getting essential medical information from here)
Mercury (the messenger of the gods dropped his message)
Microsoft (I get errors here too many times to mention)
Fast Net (not so fast when it doesn't work)
Origin (not sure what the origin of this error was)
Porsche (great cars, web site not so great)
State Insurance (state, the obvious choice for a buggy web site)
Ticketmaster (I don't have permission for the section of the site needed to buy tickets)
TradeMe (NZ's equivalent of eBay is poorly designed and quite buggy)
Vodafone (another ISP with web site errors)
WordPress (the world's leading blogging platform, really?)
YesThereIsAGod (well if there is a god, he needs to hire better web designers)

Note that I also have a huge pile of errors generated by sites at my workplace. Also, I haven't even bothered storing examples of bad design, or of problems with apps.

As I said, there are two types of errors, and those caused by temporary outages are annoying but not disastrous. The much bigger problem is the sites and apps which are just inherently bad. The two most prominent examples are Facebook and Microsoft Word. Yes, those are probably the most widely used web site and most widely used app in the world. If they are so bad why are they so popular?

Well, popularity can mean two things: first, something is very widely used, even if it is not necessarily well appreciated; and second, something is genuinely liked by the people who use it. So you could say tax or work is popular because almost everyone participates in them, but that drinking alcohol, or smoking dope, or sex, or eating burgers is popular because everyone likes them!

Facebook and Word are popular but most people think they could be made so much better. Also many people realise there are far better alternatives but they just cannot be used because of reasons not associated with quality. For example, people use Facebook because everyone else does, and if you want to interact with other people you all need to use the same site. And Word is widely used because that is what many workplaces demand, and many people aren't even aware there are alternatives.

The whole thing is a bit grim, isn't it? But there is one small thing I would suggest which could make things better: if you are a developer with a product which has a bad interface, don't try to change it unless you are almost certain you can improve it significantly. People can get used to badly designed software, but coping with changes to an equally bad but different interface in a new version is annoying.

The classic example is how Microsoft has changed the interface between Office 2011 and Office 2016 (these are the Mac versions, but the same issue exists on Windows). The older version has a terrible, primitive user interface but after many years people have learned to cope with it. The newer version has an equally bad interface (maybe worse) and users have to re-learn it for no benefit at all.

So, Microsoft, please just stop trying. You have a captive audience for your horrible software so just leave it there. Bring out a new version so you can steal more money from the suckers who use it, but don't try to improve the user interface. Your users will thank you for it.

View Details and Comments

Is Apple Doomed?

2017-12-20. Computers. Rating 2. ID 1890.

I'm a big Apple fanboy. As I sit here writing this blog post (flying at 10,000 meters on my way to Auckland, because I always write blog posts when I fly) I am actively using 4 Apple products: a MacBook Pro computer, an iPad Pro tablet, an iPhone 6S Plus smartphone, and an Apple Watch. At home I have many Apple computers, phones, and other devices. I also have one Windows PC but I very rarely use that.

So the general state of Apple's "empire" is pretty important to me. Many of the skills I have (such as general trouble-shooting, web programming, scripting, configuration, and general software use) could be transferred to Windows, but I just don't want to. I really like the elegance of Apple's devices on the surface, combined with the power of Unix in the background.

But despite my enthusiasm for their products I have developed a growing sense of concern about Apple's direction. There is the vague impression that they have stopped innovating to the extent they did in the past. Then there is the observation that the quality control of both hardware and software isn't what it was. Then there is just a general perception that Apple are getting too greedy by selling products at too high a price and not offering adequate support for the users of their products.

These opinions are nothing new, but what is new is that people who both know a lot about the subject, and would normally be more positive about Apple, are starting to join in the criticism. Sometimes this is through a slight sense of general concern, and other times through quite strident direct criticism.

I would belong to the former class of critics. I think I have noticed an increase in the number of errors Apple is making, at the same time as I notice an apparent general decrease in the overall reliability of their products, and to make matters worse, these are accompanied by what seems to be higher prices.

You will notice I used a lot of qualifiers in the sentence above. I did this deliberately because I have no real data or objective statistics to demonstrate any of these trends. They might not be real because it is very easy to start seeing problems when you look for them, and negative events often "clump" into groups. Sometimes there might be a series of bad things which happen after a long period with no problems, but that doesn't mean there is any general trend involved.

But now is the time for anecdotes! These don't mean much, of course, but I want to list a few just to give an idea of where my concern is coming from.

Recently I set up two new Mac laptop computers in a department where there was a certain amount of pressure from management to switch to Microsoft Surface laptops. The Surface has a really poor reputation for reliability and is quite expensive, so it shouldn't be difficult to demonstrate the superiority of Apple products in this area, right?

Well, no. Wrong, actually. At least in this case. Both laptops had to go for service twice within the first few weeks. I have worked with Apple hardware for decades and have never seen anything remotely as bad as this. And the fact that it was in a situation where Apple was under increased scrutiny didn't help!

In addition, the laptops had inadequate storage, because even though these are marketed as "pro" devices the basic model still has only 128 GB of SSD storage. That wasn't Apple's fault, because the person doing the purchasing should have got it right, but it didn't help!

Also recently Apple has suffered from some really embarrassing security flaws. One allowed root access to a Mac without a password, and the other allowed malicious control of automated home-control devices. There were also a few other lesser issues in the same time period. As far as I know none of these were exploited to any great extent, but it is still a bad look.

Another issue which seems to be becoming more prominent recently is their repair and replacement service. In general I have had fairly good service from Apple repair centres, but I have heard of several people who aren't as happy.

When you buy a premium device at the premium price Apple demands I don't think it is unreasonable to expect a little bit of extra help if things go wrong. So unless there is clear evidence of fraud, repairs and replacements should be done without the customer having to resort to threats and demands for the intervention of higher levels of staff.

And even if a device only has one year of official warranty (which seems ridiculous to begin with), Apple should offer a similar level of support for a reasonable period without the customer having to resort to quoting consumer law.

Even if Apple wasn't interested in doing what was morally right they should be able to see that providing superior service for what they claim is a superior product at a superior price is just good business because it maintains a positive relationship with the customer.

My final complaint regards Apple's design direction. This is critical because whatever else they stand for, surely good design is their primary advantage over the opposition. But some Apple software recently has been obscure at best and incomprehensibly bizarre at worst, and iTunes has become a "gold standard" for cluttered, confusing user interfaces.

When I started programming Macs in the 1980s there was a large section in the programming documentation about user interface design. The rules were really strict, but resulted in consistent and clear software which came from many different developers, including Apple. I don't do that sort of programming any more but if a similar section exists in current programming manuals there is little sign that people - even Apple themselves - are taking much notice!

So is Apple doomed? Well probably not. They are (by some measures) the world's biggest, richest, and most innovative company. They are vying with a few others to become the first trillion dollar company. And, in many ways they still define the standard against which all others are judged. For example, every new smartphone which appears on the market is framed by some people as an "iPhone killer". They never are, but the fact that products aspire to be that, instead of a Samsung or Huawei killer says a lot about the iPhone.

But despite the fact that Apple isn't likely to disappear in the immediate future, I still think they need to be more aware of their real and perceived weaknesses. If they aren't there is likely to be an extended period of slow decline and reduced relevance. And a slow slide into mediocrity is, in many ways, worse than a sudden collapse.

So, Tim Cook, if you are reading this blog post (and why wouldn't you), please take notice. Here's just one suggestion: when your company releases a new laptop with connections that are unusable without dongles, throw a few in with the computer, and keep the price the same as the model it replaces, and please, try to make them reliable, and if they aren't, make sure the service and replacement process is quick and easy.

It's really not that hard to avoid doom.

View Details and Comments

1K of RAM

2017-07-25. Computers. Rating 1. ID 1868.

One of my first computers had just 1K of RAM. That's enough to store... well, almost nothing. It could store 0.01% of a (JPEG compressed) digital photo I now take on my dSLR or 0.02% of a short (MP3 compressed) music track. In other words, I would need 10 thousand of these devices (in this case a Sinclair ZX80) to store one digital photo!

I know the comparison above is somewhat open to criticism in that I am comparing RAM with storage and that early computers could have their memory upgraded (to a huge 16K in the case of the ZX80) but the point remains the same: even the most basic computer today is massively superior to what we had in the "early days" of computers.
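For anyone who wants to check the figures, here is the arithmetic. The ~10 MB JPEG and ~5 MB MP3 are my own round figures for typical file sizes, not measurements:

```python
# Arithmetic behind the ZX80 comparison (file sizes are assumed round figures).
zx80_ram = 1024            # bytes: 1K of RAM
photo = 10 * 1024 * 1024   # ~10 MB JPEG from a modern dSLR
track = 5 * 1024 * 1024    # ~5 MB compressed MP3 track

print(round(100 * zx80_ram / photo, 2))  # percent of one photo that fits in 1K
print(round(100 * zx80_ram / track, 2))  # percent of one track that fits in 1K
print(photo // zx80_ram)                 # ZX80s needed to hold one photo
```

The last figure comes out at a little over ten thousand machines per photo, which is where the number in the paragraph above comes from.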

It should be noted that, despite these limitations, you could still do stuff with those early computers. For example, I wrote a fully functioning "Breakout" game in machine code on the ZX80 (admittedly with the memory expansion) and it was so fast I had to put a massive loop in the code to slow it down. That was despite the fact that the ZX80 had a single 8 bit processor running at 3.25 MHz which is somewhat inferior to my current laptop (now a few years out of date) which has four 64 bit cores (8 threads) running at 2.5 GHz.

The reason I am discussing this point here is that I read an article recently titled "The technology struggles every 90s child can relate to". I wasn't exactly a child in the 90s but I still struggled with this stuff!

So here's the list of struggles in the article...

1. Modems

Today I "know everything" because in the middle of a discussion on any topic I can search the internet for any information I need and have it within a few seconds. There are four components to this which weren't available in the 90s. First, I always have at least one device with me. It's usually my iPhone but I often have an iPad or laptop too. Second, I am always connected to the internet no matter where I am (except for rare exceptions). Third, the internet is full of useful (and not useful) information on any topic you can imagine. And finally, Google makes finding that information easy (most of the time).

None of that was available in the 90s. To find a piece of information I would need to walk to the room where my desktop computer lived, boot it, launch a program (usually an early web browser), hope no one else was already using the phone line, wait for the connection to start, and laboriously look for what I needed (possibly using an early search engine) allowing for the distinct possibility that it didn't exist.

In reality, although that information retrieval was possible both then and now, it was so impractical and slow in the 90s that it might as well have not existed at all.

2. Photography

I bought a camera attachment for one of my early cell phones and thought how great it was going to be taking photos anywhere without the need to take an SLR or compact (film) camera with me. So how many photos did I take with that camera? Almost none, because it was so slow, the quality was so bad, and because it was an attachment to an existing phone it tended to get detached and left behind.

Today my iPhone has a really good camera built-in. Sure it's not as good as my dSLR but it is good enough, especially for wide-angle shots where there is plenty of light. And because my iPhone is so compact and easy to take everywhere (despite its astonishing list of capabilities) I really do have it with me always. Now I take photos every day and they are good enough to keep permanently.

3. Input devices

The original item here was mice, but I have extended it to mean all input devices. Mice haven't changed much superficially but modern, wireless mice with no moving parts are certainly a lot better than their predecessors. More importantly, alternative input devices are also available now, most notably touch interfaces and voice input.

Before the iPhone no one really knew how to create a good UI on a phone but after that everything changed, and multi-touch interfaces are now ubiquitous and (in general, with a few unfortunate exceptions) are very intuitive and easy to use.

4. Ringtones

This was an item in the article but I don't think things have changed that much now so I won't bother discussing this one.

5. Downloads

Back in the day we used to wait hours (or days) for stuff to download from on-line services. Some of the less "official" services were extremely well used back then and that seems to have reduced a bit now, although downloading music and movies is still popular, and a lot faster now.

The big change here is maybe the change from downloads to streaming. And the other difference might be that now material can be acquired legally for a reasonable price rather than risking the dodgy and possibly virus infected downloads of the past.

6. Clunky Devices

In the 90s I would have needed many large, heavy, expensive devices just to do what my iPhone does now. I would need a gaming console, a music player with about 100 CDs to play in it, a hand-held movie player (if they even existed), a radio, a portable TV, an advanced calculator, a GPS unit, a compass, a barometer, an altimeter, a torch, a note pad, a book of maps, a small library of fiction and reference books, several newspapers, and a computer with functions such as email, messaging, etc.

Not only does one iPhone replace all of those functions, saving thousands of dollars and about a cubic meter of space, but it actually does things better than a lot of the dedicated devices. For example, I would rather use my iPhone as a GPS unit than a "real" GPS device.

7. Software

Software was a pain, but it is still often a pain today so maybe this isn't such a big deal! At least it's now easy to update software (it often happens with no user intervention at all) and installing over the internet is a lot easier than from 25 floppy disks!

Also, all software is installed in one place and doesn't involve running from disk or CD. In fact, optical media (CDs and DVDs) are practically obsolete now which isn't a bad thing because they never were particularly suitable for data storage.

8. Multi-User, Multi-Player

The article here talks about the problem of having multiple players on a PlayStation, but I think the whole issue of multiple player games (and multi-user software in general) is now taken for granted. I play against other people on my iPhone and iPad every day. There's no real extra effort at all, and playing against other people is just so much more rewarding, especially when smashing a friend in a "friendly" race in a game like Real Racing 3!

So, obviously things have improved greatly. Some people might be tempted to get nostalgic and ask if things are really that much better today. My current laptop has 16 million times as much memory, hundreds of thousands of times as much CPU power, and 3000 times as many pixels as my ZX80 but does it really do that much more? Hell, yes!

View Details and Comments

The Internet is Best!

2017-03-17. Computers. Rating 2. ID 1842.

I hear a lot of debate about whether the internet is making us dumb, uninformed, or more closed-minded. There are three problems with most of these debates: first, saying the internet has resulted in the same outcome for everyone is too simplistic; second, these opinions are usually offered with no justification other than "common sense" or that it's "obvious"; and third, whatever the deficiencies of the internet, the real question is whether we are better or worse off than we would be without it.

There is no doubt that some people could be said to be more dumb as the result of their internet use. By "dumb" I mean being badly informed (believing things which are unlikely to be true) or not knowing basic information at all, and by "internet use" I mean all internet services people use to gather information: web sites, blogs, news services, email newsletters, podcasts, videos, etc.

How can this happen when information is so ubiquitous? Well information isn't knowledge, or at least it isn't necessarily truth, and it certainly isn't always useful. It is like the study (which was unreplicated so should be viewed with some suspicion) showing that people who watch Fox News are worse informed about news than people who watch no news at all.

That study demonstrates three interesting points: first, people can be given information but gather no useful knowledge as a result; second, non-internet sources can be just as bad a source as the internet itself; and third, this study (being unreplicated and politically loaded) might itself be an example of an information source which is potentially misleading.

So clearly any information source can potentially make people dumber. Before the internet people might have been made dumber by reading printed political newsletters, or watching trashy TV, or by listening to a single opinion at the dinner table, or by reading just one type of book.

And some people will misuse information sources where others will gain a lot from the very same sources: some will get dumber while others get a lot smarter.

And (despite the Fox News study above) if the alternative to having an information source which can be mis-used is having no information source at all, then I think taking the flawed source is the best option.

Anecdotes should be used with extreme caution, but I'm going to provide some anyway, because this is a blog, not a scientific paper. I'm going to say why I think the internet is a good thing from my own, personal perspective.

I'm interested in everything. I don't have a truly deep knowledge about anything but I like to think I have a better than average knowledge about most things. My hero amongst Greek philosophers is Eratosthenes, who was sometimes known as "Beta" because he was said to be second best at everything (beta is the second letter of the Greek alphabet, which I can recite in full, by the way).

The internet is a great way to learn a moderate amount about many things. Actually, it's also a great way to learn a lot about one thing too, as long as you are careful about your sources, and it is a great way to learn nothing about everything.

I work in a university and I get into many discussions with people who are experts in a wide range of different subjects. Obviously I cannot match an expert's knowledge about their precise area but I seem to be able to at least have a sensible discussion, and ask meaningful questions.

For example, in recent times I have discussed the political situation in the US, early American punk bands, the use of drones and digital photography in marine science, social science study design, the history of Apple computers, and probably many others I can't recall right now.

I hate not knowing things, so when I hear a new word, or a new idea, I immediately Google it on my phone. Later, when I have time, I retrieve that search on my tablet or computer and read a bit more about it. I did this recently with the Gibbard-Satterthwaite Theorem (a mathematical theorem which involves the fairness of voting systems) which was mentioned in a podcast I was listening to.

Last night I was randomly browsing YouTube and came across some videos of extreme engines being started and run. I've never seen so much flame and smoke, and heard so much awesome noise. But now I know a bit about big and unusual engine designs!

The videos only ran for 5 or 10 minutes each (I watched 3) so you might say they were quite superficial. A proper TV documentary on big engines would probably have lasted an hour and had far more detail, as well as having a more credible source, but even if a documentary like that exists, would I have seen it? Would I have had an hour free? What would have made me seek out such an odd topic?

The great thing about the internet is not necessarily the depth of its information but just how much there is. I could have watched hundreds of movies on big engines if I had the time. And there are more technical, detailed, mathematical treatments of those subjects if I want them. But the key point is that I would probably know nothing about the subject if the internet didn't exist.

Here's a few other topics I have got interested in thanks to YouTube: maths (the Numberphile series is excellent), debating religion (I'm a sucker for an atheist experience video, or anything by Christopher Hitchens), darts (who knew the sport of darts could be so dramatic?), snooker (because that's what happens after darts), Russian jet fighters, Formula 1 engines, classic British comedy (Fawlty Towers, Father Ted, etc).

What would I do if I wasn't doing that? Watching conventional TV maybe? Now what were my options there: a local "current affairs" program with the intellectual level of an orangutan (with apologies to our great ape cousins), some frivolous reality TV nonsense, a really un-funny American sitcom? Whatever faults the internet has, it sure is a lot better than any of that!

View Details and Comments

Are You Getting It?

2017-01-10. Computers. Rating 2. ID 1831.

Ten years ago Apple introduced one of the most important devices in the history of technology. It has changed many people's lives more than almost anything else, and nothing has really supplanted it in the years since then. Obviously I'm talking about the iPhone, but you already knew that.

Like every new Apple product, this wasn't the first attempt at creating this type of device, it didn't have the best technical specifications, and it didn't sell at a particularly good price. In fact, looking at the device superficially, many people (the CTO of RIM included) predicted it would immediately fail.

I got an iPhone when Apple introduced the first revision, the iPhone 3G, and it replaced my Sony phone, which was the best available when I bought it. The Sony phone had a flip screen, plus a smaller screen on the outside of the case, a conventional phone keypad, a rotating camera, and an incredibly impressive list of functions including email and web browsing.

In fact the feature list of the Sony phone was much more substantial than the early iPhones'. But the difference was that the iPhone's features were something you could actually use, where the Sony's existed in theory but were so awkward, slow, and unintuitive that I never used them.

And that is a theme which has been repeated with all of Apple's devices which revolutionised a particular product category (Apple II, Mac, iPod, iPhone, iPad). Looking at the feature list, specs, and price compared with competitors, none of these products should have succeeded.

But they did. Why? Well I'm going to say something here which is very Apple-ish and sounds like a marketing catch-phrase rather than a statement of fact or opinion, so prepare yourself. It is because Apple creates experiences, not products.

OK, sorry about that, but I can explain that phrase. The Sony versus iPhone situation I described above is a perfect example. Looking at the specs and features the Sony would have won most comparisons, but the ultimate purpose for a consumer device is to be used. Do the comparison again, but this time with how those specs and features affect the user and the iPhone wins easily.

And it was the same with the other products I mentioned above. Before the Mac, computers were too hard to use. The Mac couldn't do much initially, but what it could do was so much more easily accessible than with PCs. The iPod was very expensive considering its capacity and list of functions, but it was much easier to use and manage than other MP3 players. And the iPad had a limited feature list, but its operating system was highly customised to creating an intuitive touch interface for the user.

When Steve Jobs introduced the iPhone 10 years ago he teased the audience like this: "[We are introducing] an iPod, a phone and an Internet communicator. An iPod, a phone - are you getting it? These are not separate devices. This is one device. And we are calling it iPhone."

Today I made a list of the functions my iPhone 6S regularly performs for me, where it replaces other devices, technologies and media. This list includes: watch, stopwatch, alarm clock, point and shoot camera, video camera, photo album, PDA, calculator, GPS, map, music player, portable video player, calendar, appointment diary, book library, ebook reader, audiobook player, magazine, newspaper, recipe book, email client, note pad, drawing tablet, night sky star map, web browser, portable gaming console, radio, TV, audio recorder, TV and audio remote control, landline, and mobile phone.

Not only does it do all of those things but it does a lot of them better than the specialised devices it replaces! And, even though the iPhone isn't cheap, if you look at the value of the things it replaces it is a bargain. My guess at the value of all the stuff I listed above is $3000 - $5000 which is at least twice the cost of the phone itself.

My iPhone has one million times the storage of the first computer I programmed on. Its processors are tens of thousands of times faster. Its screen displays 25 times more pixels. And, again, it costs a lot less, even when not allowing for inflation.

Most of what I have said would apply to any modern smart-phone, but the iPhone deserves a special place amongst the others for two reasons. First, it is a purer example of ease of use and user-centered functionality than other phones; and second, it was the one phone which started the revolution.

Look at pictures of the most advanced phones before and after the iPhone and you will see a sudden transition. Apple led the way - not on how to make a smartphone - but on how to make a smartphone that people would actually want to use. And after that, everything changed.

View Details and Comments

The Next Big Thing

2017-01-08. Computers. Rating 1. ID 1830.

Many (and I really do mean many) years ago, when I was a student, I started a postgrad diploma in computer science. One of the papers was on artificial intelligence and expert systems, an area which was thought (perhaps naively) to have great potential back in the "early days" of computing. Unfortunately, very little in that area was achieved for many years after that. But now I predict things are about to change. I think AI (artificial intelligence, also very loosely described as "thinking computers") is the next big thing.

There are early signs of this in consumer products already. Superficially it looks like some assistants and other programs running on standard computers, tablets, and phones are performing AI. But these tend to work in very limited ways, and I suspect they follow fairly conventional techniques in producing the appearance of "thinking" (you might notice I keep putting that word in quotes because no one really knows what thinking actually is).

The biggest triumph of true AI last year was Google's AlphaGo program which won a match 4 games to 1 against Lee Sedol, one of the world's greatest human players. That previous sentence was significant, I think, because in future it will be necessary to distinguish between AIs and humans. If an AI can already beat a brilliant human player in what is maybe the world's most complex and difficult game, then how long will it be before humans will be hopelessly outclassed in every game?

Computers which play Chess extremely well generally rely on "brute force" techniques. They check every possible outcome of a move many steps ahead and then choose the move with the best outcome. But Go cannot be solved that way because there are simply too many moves. So AlphaGo uses a different technique. It actually learns how to play Go through playing games against humans, itself, and other AIs, and develops its own strategy for winning.

So while a conventional Chess playing program and AlphaGo might seem similar, in important ways they are totally different. Of course, the techniques used to win Go could be applied to any similar game, including Chess, it's just that the pure brute force technique was sufficient and easier to implement when that challenge was first met.
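To make the distinction concrete, here is a toy sketch of the brute force approach (a trivial take-away game stands in for Chess; this is purely illustrative, not how any real engine is built):

```python
# A toy minimax search for a trivial "take-away" game: players
# alternately remove 1-3 stones, and whoever takes the last stone wins.
# Chess engines apply the same exhaustive principle, plus pruning and
# evaluation heuristics, at vastly larger scale.
def minimax(stones, maximizing):
    if stones == 0:
        # the previous player took the last stone and won
        return -1 if maximizing else 1
    moves = [m for m in (1, 2, 3) if m <= stones]
    scores = [minimax(stones - m, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    return max(moves, key=lambda m: minimax(stones - m, False))

print(best_move(5))  # 1: leaves 4 stones, a losing position for the opponent
```

Even for this tiny game the search visits every possible line of play, which is exactly why the approach cannot scale to Go's astronomical game tree.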

Also last year a computer "judge" predicted the verdicts of European Court of Human Rights cases with 79% accuracy. What does that really mean? Well, it means that the computer effectively judged the cases and reached the same result as a human judge in about 80% of them. I have no data on this, but I suspect two human judges might disagree with each other at a similar rate.
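As a sketch of how that kind of agreement could be put in context, here are two simple measures - raw agreement, and Cohen's kappa, which corrects for chance - computed on invented verdicts (the real study's data and methods may well differ):

```python
# Raw agreement vs Cohen's kappa between two "raters" (here, a human
# judge and a model). The verdict lists below are invented for
# illustration only: V = violation found, N = no violation.
def observed_agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    labels = set(a) | set(b)
    n = len(a)
    po = observed_agreement(a, b)
    # chance agreement: product of each rater's marginal frequencies
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe)

judge = ["V", "V", "N", "V", "N", "V", "V", "N", "V", "V"]
model = ["V", "V", "N", "N", "N", "V", "V", "V", "V", "V"]
print(observed_agreement(judge, model))        # 0.8
print(round(cohens_kappa(judge, model), 2))    # 0.52
```

The point of kappa is that 80% raw agreement looks less impressive once you account for how often two raters would agree by chance anyway.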

So computers can perform very "human" functions like judging human rights cases, and that is quite a remarkable achievement. I haven't seen what techniques were used in that case but I suspect deep learning methods like neural networks would be required.

So what does all this mean? I think it was science fiction author, Arthur C Clarke, who said that a thinking machine would be the last invention humans would ever have to create, because after that the machines themselves would do the inventing. I don't think we are close to that stage yet but this is a clear start and I think the abilities of AIs will escalate exponentially over the next few decades until Clarke's idea will be fulfilled.

And, along with another technology which is just about ready to become critical, 3D printing, society will be changed beyond recognition. The scenario portrayed in so many science fiction stories will become reality. The question is, which type of science fiction story will be most accurate: the utopian or the dystopian? It could go either way.

View Details and Comments

They're Taking Over!

2016-08-31. Computers. Rating 2. ID 1809.

As an IT professional and technology enthusiast I generally feel quite positive about advances where computers become better than humans at yet another thing. Many people thought that a computer would never beat a human at chess, but now it is accepted that computers will always be better. When our silicon creations beat us at chess we moved on to another, more complex, game, Go. But now computers have beaten the world champion at that too. And in the process made a move that an expert described as "beautiful and mysterious".

So what's next? Well how about one of the most esteemed jobs in our society and one which most people, who don't really understand what is going on, might say would be the last that a mere machine could tackle. I'm talking about law, and even the top tier of the legal profession: being a judge.

Before I start on that I would like to make an important distinction between the approach to the two games above: Chess and Go. Most computers solve Chess problems by using brute force, that is considering millions of possible moves and counter-moves and taking the move that leads to the best outcome. But that wasn't practical for Go so the program instead learns how to play by playing against other players and against itself. It really could be said to be learning like a human would and that is the approach future AI will probably use.
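As a rough sketch of what "learning by playing against itself" means (scaled down enormously - a tiny Nim game and a simple tabular value update, nothing like the neural networks AlphaGo uses):

```python
import random

random.seed(0)  # reproducible runs

# Tiny game of Nim: 7 stones, each player removes 1 or 2,
# and whoever takes the last stone wins.
Q = {}  # (stones_left, action) -> estimated value for the player to move

def choose(stones, eps=0.1):
    actions = [a for a in (1, 2) if a <= stones]
    if random.random() < eps:
        return random.choice(actions)  # explore occasionally
    return max(actions, key=lambda a: Q.get((stones, a), 0.0))

def train(episodes=20000, alpha=0.1):
    for _ in range(episodes):
        stones, history = 7, []
        while stones > 0:
            a = choose(stones)
            history.append((stones, a))
            stones -= a
        # the player who made the last move won; credit alternates back
        # through the move history (Monte Carlo style value updates)
        reward = 1.0
        for state, action in reversed(history):
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + alpha * (reward - old)
            reward = -reward

train()
# Optimal play leaves the opponent a multiple of 3, so from 7 stones
# the learned policy should prefer taking 1.
print(choose(7, eps=0.0))
```

No move was ever programmed in: the table of values is built up entirely from self-play, which is the key difference from the brute-force approach.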

An experiment was done in the UK which replicated court cases and compared the AI's decision with a judge's. The computer agreed with the judge in 31 out of the 32 cases - maybe the judge got the last case wrong!

Computers do well evaluating complex and technical areas such as international trade dispute law, but are also useful for more common laws, such as divorce and child custody. Plus computers are much better and faster at doing the research tasks that law firms currently use legal professionals for. Another highly rated job that won't exist much longer maybe?

An expert has stated that creating a computer system that can answer all legal questions is easy, but getting that system used in most societies (which might be quite resistant to change) is the difficult part!

I find the idea of replacing lawyers and judges with computers quite appealing for a few reasons. First, traditionally it has been poorly paid manual workers who have been under threat of being replaced, so it is nice to see society's elite aren't immune. Second, there are so many cases of terrible decisions being made by judges that having an unbiased computer do the work instead seems like a potentially good idea. And third, if highly rated jobs like these can be replaced then the idea of replacing other jobs becomes easier (the medical profession will be next).

It all sounds quite exciting, as long as you can get over the rather obsolete idea that all humans should exist just to work. But there are a few more unsettling possibilities which are also being tested now. One is to predict whether people convicted of crimes are likely to re-offend in future. There are already claims that this system is biased against blacks. Unfortunately the algorithm in use is secret so no one can ever know.

And that brings me to what is maybe the key point I want to make in how I think this technology should be implemented. Allowing computers to control important aspects of our society, like law, needs to be transparent and accountable. We cannot trust corporations who will inevitably hide the details of what their programs do through copyright and patents. So all the code needs to be open source so that we all know exactly what we are getting.

Many people will just deny that the computer takeover I am describing can happen, and many will say that even if it can happen we shouldn't let it. I say it can happen and it should happen, but only if it is done properly. Private business has no place in something so critical. We need a properly resourced and open public organisation to do this work. And everything they do should be completely open to view by anyone.

If we do this properly the computer takeover can be a good thing. And yes, I know this is a cliche, but I will say it: I, for one, welcome our silicon overlords!

View Details and Comments

Pokemon No!

2016-07-30. Computers. Rating 3. ID 1804.

I am a proud computer (and general technology) geek and I see all things geeky as being a big part of my culture. So I don't really identify much with my nationality of New Zealander, or of traditional Pacific or Maori values (I'm not Maori anyway but many people still think that should be part of my culture), or of the standard interests of my compatriots like rugby, outdoor activities, or beer - well OK, maybe I do identify with the beer!

Being a geek transcends national boundaries and traditional values. I go almost everywhere with my 4 main Apple products: a MacBook Pro laptop, an iPad Pro, an iPhone 6S, and an Apple Watch. They are all brilliant products and I do use them all every day.

For me, the main aspects of being a geek involve "living on the internet" and sourcing most of my information from technology sources, and participating in geek events and activities.

By "living on the internet" I mean that I can't (or maybe just don't) go for any period of time (I mean a few hours) without participating in social media, checking internet information sources (general news, new products, etc), or seeking out random material on new subjects from sites such as Quora.

I mainly stay informed not by watching TV (although I still do watch TV news once per day) or listening to radio news (again, I do spend a small amount of time on that too) but by listening to streaming material and podcasts. In fact, podcasts are my main source of information because I can listen to them at any time, avoid most advertising, and listen again to anything which was particularly interesting.

And finally there are the events and activities. Yeah, I mainly mean games. I freely admit that I spend some time every day playing computer games. Sometimes it is only 5 minutes but it is usually more, and sometimes a lot more. Some people think a mature (OK, maybe getting on towards "old") person like me shouldn't be doing that and that I should "grow up". Needless to say I think these people are talking crap.

And so we come to the main subject of this post, the latest computer (or more accurately phone and tablet) game phenomenon: Pokemon GO. The game was released first in the US, Australia, and New Zealand and instantly became a huge hit. Of course, since it was a major new component of geek culture, I felt I should be playing it, but I didn't want it to become a big obsession.

And I think I did well avoiding it for almost 3 days, but yes, I'm playing it now, with moderate intensity (level 17 after a couple of weeks). Today I explained the gameplay to an older person who never plays games and he asked: but what is the point? Well, there is no real, practical point of course, but I could ask that about a lot of things.

For example, if an alien landed and I took him to a rugby game he might ask what's the point of those guys running around throwing a ball to each other. Obviously, there's no point. And what's the point of sitting in front of a TV and watching some tripe like "The Block" or some crappy soap opera? Again, there's no point. In reality, what's the point of living? Well, let's not go there until I do another post about philosophy.

So anyone who criticises playing computer games because they have no practical point should think a little bit more about what they are really saying and why.

And there's another factor in all of this that bugs me too. It's the fact that almost universally the people who criticise games like Pokemon GO not only have never played them but know almost nothing about them either. They are just letting their petty biases and ignorance inform their opinions. It's quite pathetic, really.

So to all those people who criticise me for playing Pokemon GO, Real Racing 3 (level 162 after many years play, and yes, it is the greatest game of all time), Clash of Clans (level 110 after 4 years play), and a few others, I say get the hell over it. And if you do want to criticise me just get a bit better informed first. And maybe you should stop all those pointless habits you have (and that I don't criticise you for) like watching junk programs on TV.

And now, if you'll excuse me, I've got to go find some more Pokemon. Gotta catch 'em all!

View Details and Comments

Why We Have Bad Software

2016-07-25. Computers. Rating 3. ID 1803.

Many people get extremely frustrated with their interactions with technology, especially computers. I notice this a lot because I work with IT where I am a Mac generalist: I do general support, programming, a bit of server management, and a bunch of other stuff as well.

And when I say "many people" get frustrated I should add myself to that list as well because, either directly or indirectly (by trying to help frustrated users) I am also exposed to this phenomenon.

The strange thing is that generally the problems don't happen because people are trying to do something unusual, or using some virtually unknown piece of software, or trying to do things in an overly complex way. Most of the frustration happens just trying to get the basics working. By that I mean things like simple word processing in Microsoft Word, simple file access on servers, and simple synchronisation of calendars.

None of these things should be hard, but they often are. In comparison doing complex stuff like creating web apps, or doing complicated graphics manipulations, or completing advanced maths or stats processing often works without a single problem.

Why is this? Well I guess I need to concede (before I offer my own theory) that one reason is that there are far more people doing the simple things and they're doing them far more often, so if there was a certain failure rate with any process it would show up more for the stuff that is done a lot.

But those simple tasks, like word processing, have been with us on computers for several decades now, so it is reasonable to ask why they haven't been refined to a greater degree. Is it really so hard to create a word processor which works in a more intuitive, reliable, and responsive way than what we have now? (Yes, I'm talking to you, Microsoft.)

Well, there is a way. But it involves doing something a lot of people don't want to do: staying away from the big, dominant companies in IT, especially Microsoft. Well, not entirely, because realistically you need to run either Windows or macOS (Linux just doesn't really work on the desktop) and you need to buy some hardware from Dell, Apple, etc. But what about after that?

Recently I have tried to keep away from the dominant companies in software. For example, I operate a zero-Microsoft policy and am progressing well on my zero-Adobe policy as well. In addition I avoid all the big corporates' products (Oracle, Cisco, etc) wherever possible.

I don't think it's healthy to take this to extremes or to where it becomes more a political thing than a practical one, because then I might end up like the open source fanatics whose decisions are based more on ideology than pragmatism. But it is still a useful guideline.

And I am pragmatic because I do have Microsoft Office and Adobe Creative Suite (all fully licensed) on my machine, I just almost never use them. And, of course, I do use a Mac and therefore use the hardware and operating system made by Apple, the biggest computer corporation in the world.

Although I readily admit to being an Apple "fanboy" I do have to say that, considering the huge resources they have available, they do often fail to perform as well as they should. For example, software is often released with fairly obvious bugs. How much does it cost to hire a few really good bug checkers?

And sometimes Apple products take too long to properly implement some features. With all the programmers they could hire why is this?

But I don't want to single out Apple, so I really have to ask the following question: Microsoft, why is Office 2016 for Mac such a pile of junk? Why is it so slow? Why is it so ugly? Why is it so lacking in functionality? (That is one area where Microsoft usually does well: their software is crap in almost every way, except it has an impressive feature set.)

And just to complete bashing the big three, what's happening at Adobe? Why does InDesign take a week to launch on anything except the latest hardware? Why are there so many poor user interface design choices in Adobe software? And why is the licensing so annoying?

I think the failure of the big companies to create products as good as they should be able to comes back to several factors...

First, large teams of programmers (and probably teams of anything else too) will always be less efficient than smaller teams, simply because more time has to be spent coordinating the team rather than actually doing the core work.

Second, in large teams there will be inevitable "disconnections" between the components of a major project that different individuals make. This might result in an inconsistent user experience or maybe even bugs when the components don't work together properly.

Third, it is likely that many decisions in a large team will be made by managers and that is almost always a bad thing, because managers are generally technically ignorant and have different priorities such as meeting time constraints, fitting in with non-technical corporate aims, or cutting corners in various ways, rather than producing the best technical result.

Fourth, large companies often have too many rules and policies which are presumably formulated to solve a particular problem but more often can be applied without any real thought for any specific situation.

Many software projects are too large for a single programmer or a small team so some of the issues I have listed cannot be fully avoided. But at least if computer users all understand that big companies usually don't produce the best products they won't be surprised the next time they have a horrible experience using Microsoft Word.

And maybe they might just look at alternatives.

View Details and Comments
