Wow - that's quite a difference. I wonder if the discrepancy might be due to the way they compute the data volume? As you know, data are sent over the 3G network using the TCP/IP protocol suite. User data are encapsulated in 'packets' that 'transport' them over the carrier's network. These packets carry additional information - IP addresses, sequence numbers, checksums for error control - and that information adds overhead to the user data being transported. It's likely that the iPad is reporting net data flow - i.e. the actual underlying user data - while Bell is reporting gross data flow - i.e. user data plus TCP/IP overheads.
A TCP/IP packet has a more-or-less fixed overhead irrespective of the volume of user data it contains - typically around 40 bytes, i.e. a 20-byte IP header plus a 20-byte TCP header, ignoring options. So user data that are fragmented into many small packets - and there are good technical reasons for doing this - attract a significantly higher overhead than data sent in fewer, larger packets: the headers are the same size in absolute terms either way, so percentage-wise they loom larger when the user data in each packet are small.
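Here's a rough back-of-envelope sketch of the effect, in Python - assuming that 40-byte combined header and ignoring retransmissions, ACK traffic and link-layer framing, all of which would push the gross figure even higher:

    HEADER_BYTES = 40  # 20-byte IPv4 header + 20-byte TCP header, no options

    def overhead_percent(payload_bytes: int) -> float:
        """Header overhead as a percentage of the user data in one packet."""
        return 100.0 * HEADER_BYTES / payload_bytes

    for payload in (1460, 512, 128):
        print(f"{payload:>5}-byte payload: {overhead_percent(payload):5.1f}% overhead")

A full 1460-byte payload works out at about 2.7% overhead, a 512-byte payload at about 7.8%, and a 128-byte payload at over 31% - so the mix of packet sizes matters a great deal.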
The percentage difference that you mention between the iPad's estimate and Bell's estimate sounds about right for a TCP/IP overhead discrepancy.
Bell's argument would be - well, we have to carry the TCP/IP packets and it's not up to us how big or small those packets are. We charge for all the data the user asks us to carry - the actual user data and the packet overhead that encapsulates them alike.
The iPad's argument would be that you're only interested in the net user data - which, when your bill is based on the gross figure, is probably not the right assumption.
Tim
Scotland