Ever wanted to look at the view hierarchy of closed-source third-party apps? Enter CyanogenMod, plus the Google Apps package to get the Google Play Store. I wanted to check the view hierarchy of Google+'s auto-hideable toolbar, and with little effort got a CM build onto a Nexus 4 and managed to find the ComposeBarView.
Disappearing Facebook Chat Heads
I was curious about the implementation of Facebook's chat heads, launched a few weeks back as part of Facebook Home. I wondered whether a chat head was an overlay or was being written directly to the surface, but it didn't make sense for the framework to grant an app that much control, and Facebook Home and chat heads were being distributed as just another app from the Play Store. So how does it work? It turns out that chat heads use a public API, the SYSTEM_ALERT_WINDOW permission, and the Play Store lets users know about this permission and that the app might display content on top of other applications.
In Android UI, every control needs a Context to be created, and a chat head is no exception: it is created by the Facebook Messenger application (com.facebook.orca). Killing the Messenger application clears active chat heads, since the context is no longer valid. Given this ownership, chat heads are indeed vulnerable to the low-memory killer, even while a chat head has focus and the user is interacting with it; the framework doesn't consider such an app as important as a foreground activity. A focused activity gets an oom score of 0, but a SYSTEM_ALERT_WINDOW owner gets at best 2 (on a Nexus 4 running the latest Jelly Bean), leaving it vulnerable to the low-memory killer. So I wouldn't be surprised if chat heads disappeared all of a sudden. Facebook probably didn't make this public, but should consider distributing a custom ROM to make chat heads foolproof. With increasing memory sizes, though, that might never happen.
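A minimal sketch of the mechanism described above: a service that attaches a floating view through WindowManager with the TYPE_SYSTEM_ALERT window type. The class and view choices here are illustrative assumptions, not Facebook's actual code, and the SYSTEM_ALERT_WINDOW permission is assumed to be declared in the manifest.

```java
// Sketch of a SYSTEM_ALERT_WINDOW overlay, the public API chat heads rely on.
// Assumes <uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW"/>
// in AndroidManifest.xml; names here are hypothetical.
import android.app.Service;
import android.content.Intent;
import android.graphics.PixelFormat;
import android.os.IBinder;
import android.view.WindowManager;
import android.widget.ImageView;

public class ChatHeadService extends Service {
    private WindowManager windowManager;
    private ImageView head;

    @Override
    public void onCreate() {
        super.onCreate();
        windowManager = (WindowManager) getSystemService(WINDOW_SERVICE);
        head = new ImageView(this); // created with this app's context
        WindowManager.LayoutParams params = new WindowManager.LayoutParams(
                WindowManager.LayoutParams.WRAP_CONTENT,
                WindowManager.LayoutParams.WRAP_CONTENT,
                WindowManager.LayoutParams.TYPE_SYSTEM_ALERT, // draws on top of other apps
                WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE,
                PixelFormat.TRANSLUCENT);
        windowManager.addView(head, params);
    }

    @Override
    public void onDestroy() {
        // If the owning process is killed instead, the window dies with it --
        // which is exactly why chat heads can vanish under memory pressure.
        if (head != null) windowManager.removeView(head);
        super.onDestroy();
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}
```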
Dropbox, Facebook SDK Overhead in Android
Dropbox recently announced an SDK for Android to help app developers use its cloud functionality. Facebook has a similar SDK. Considering Facebook's existing user base and reach, it only makes sense for application developers to use the Facebook SDK to share relevant information, such as scores in a game. Users might end up using multiple applications built on top of the same third-party SDK, but what exactly is the memory implication on the device?
These SDKs, especially when they don't involve native libraries, get linked into the respective application processes, each process carrying its own copy of the jar. On Android, once a process starts it isn't killed until the low-memory killer kicks in. Launching another application built on the same SDK doesn't leverage the already loaded code segment of that SDK, so with more application launches the memory overhead just increases. Applications built only on the Android SDK don't have this issue: the SDK is stubbed out to ensure proper compilation, and at runtime zygote, which forks each application process, preloads the most often used classes and relies on Linux's copy-on-write mechanism.
All said and done, what could third-party SDK developers do to address this in the short run? They could work with Android OEMs to release SDK add-ons. This would reduce the memory overhead and would be easy for application developers to use, since they wouldn't have to deal with a different installation procedure for each third-party SDK. However, it might not be practical for SDK developers to deal with the independent and ever-growing number of OEMs. The best solution would probably be for the Dalvik virtual machine to facilitate class sharing across process boundaries. This isn't new; some virtual machines, like IBM's, have already solved it. However, considering the increasing memory in modern smartphones, the Dalvik virtual machine might never need to address this, unless a popular and massive third-party SDK for Android draws enough end-user attention (through frequent app-process kills).
Override layout_height/layout_width for Custom Views
Android's default inflater, when inflating views from a layout resource, expects both the layout_width and layout_height attributes to be specified in the layout XML. Any attempt to skip these attributes for custom views (which set their layout params in code) causes a runtime exception, "You must supply a layout_width attribute", and with the current (Jelly Bean) layout inflater, custom views have to override the layout parameters specified in the layout XML. One solution is to update the layout parameters in the view's onMeasure, the only downside being that onMeasure is invoked more often than is needed for this purpose. The alternative is to override setLayoutParams, which is invoked once by the layout inflater:
@Override
public void setLayoutParams(ViewGroup.LayoutParams params) {
    params.height = ...; // desired height, or MATCH_PARENT/WRAP_CONTENT
    params.width = ...;  // desired width
    super.setLayoutParams(params);
}
Android Framework Ports - Maximum Hidden Apps
The Android Open Source Project (ICS) supports a maximum of 15 hidden applications running at any point, and any attempt to launch new apps kills the least recently used ones. Per the documentation, this is done to reduce the load on RAM and has been the case since the early versions of Android.
However, OEMs who port Android to their boards often don't realize that this number is based on the size of the device's RAM. I just came across one such device whose activity manager started knocking out processes as soon as the device booted up, without any user intervention to start an application. They evidently wanted more applications to run simultaneously but didn't account for the maximum limit.
02-19 13:43:55.194 I/ActivityManager( 1096): No longer want com.lge.omadmclient:remote (pid 23052): hidden #16
And as to what the factor should be: it's probably based on the memory specifications in Android's Compatibility Definition Document (CDD). Any additional memory on top of what the CDD specifies should increase the maximum limit proportionately. On the other hand, an increased limit could keep a heavyweight process resident and hurt runtime performance, which suggests the limit should be based on a threshold memory size rather than on the number of processes in memory. This would facilitate easy, quick, and stable ports of Android and avoid burning CPU cycles during boot-up.
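For reference, the limit tripped in the log above is a hard-coded constant in the ICS sources. The file location and value below are from memory and should be verified against your own tree:

```java
// frameworks/base/services/java/com/android/server/am/ActivityManagerService.java (ICS)
// Raising this without regard to available RAM produces the boot-time kills shown above.
static final int MAX_HIDDEN_APPS = 15;
```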
Android's Warm Startup of Applications
The Android framework supports boot-up notifications (broadcasts) so that applications can start after the device boots or after the framework restarts. The framework has to start a new process to host the BroadcastReceiver component, and this takes more processing time than reusing an already existing process. The only alternative is to have an empty process created even before the boot-up broadcast is processed. This is a small time window in the context of device boot-up, and yet Android supports it for applications installed as part of the system image.
All that is needed is to mark the application as persistent in its manifest (android:persistent="true"). The Activity Manager has the logic to create an empty process (via zygote) for every persistent application.
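A minimal manifest sketch of the setup described above, assuming a boot-time receiver; the package and receiver names are hypothetical, and android:persistent is honored only for apps installed on the system image:

```xml
<!-- AndroidManifest.xml sketch: persistent app with a boot-completed receiver -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.persistent">
    <uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED"/>
    <application android:persistent="true">
        <receiver android:name=".BootReceiver">
            <intent-filter>
                <action android:name="android.intent.action.BOOT_COMPLETED"/>
            </intent-filter>
        </receiver>
    </application>
</manifest>
```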
Atomicity of Reference Assignment in Java
The Java Language Specification (http://docs.oracle.com/javase/specs/jls/se7/jls7.pdf) states that writes to and reads of references are atomic on both 32-bit and 64-bit implementations, and I happened to have code which enforced mutual exclusion on the destination variable:
public void set(Test ref)
{
    synchronized (this)
    {
        // obj is the member variable
        obj = ref;
    }
}
At some point the need for synchronization came into question, since a write to a reference is atomic, so I started looking at the underlying assembly instructions for the above code without the synchronized block. I used the 64-bit OpenJDK 7 Update 3 on Ubuntu, running on a 64-bit Intel x86 processor. The assembly came out as:
Decoding compiled method 0x00007f8c5d061110:
Code:
[Entry Point]
[Constants]
# {method} 'set' '(LTest;)V' in 'Test'
# this: rsi:rsi = 'Test'
# parm0: rdx:rdx = 'Test'
# [sp+0x20] (sp of caller)
0x00007f8c5d061240: mov 0x8(%rsi),%r10d
0x00007f8c5d061244: cmp %r10,%rax
0x00007f8c5d061247: jne 0x00007f8c5d0378a0 ; {runtime_call}
0x00007f8c5d06124d: xchg %ax,%ax
[Verified Entry Point]
0x00007f8c5d061250: push %rbp
0x00007f8c5d061251: sub $0x10,%rsp
0x00007f8c5d061255: nop ;*synchronization entry ; - Test::set@-1 (line 19)
0x00007f8c5d061256: mov %rsi,%r10
0x00007f8c5d061259: mov %rdx,%r8
0x00007f8c5d06125c: mov %r8d,0x10(%rsi)
0x00007f8c5d061260: shr $0x9,%r10
0x00007f8c5d061264: mov $0x7f8c661d7000,%r11
0x00007f8c5d06126e: mov %r12b,(%r11,%r10,1) ;*putfield obj ; - Test::set@2 (line 19)
0x00007f8c5d061272: add $0x10,%rsp
0x00007f8c5d061276: pop %rbp
0x00007f8c5d061277: test %eax,0xcc88d83(%rip) # 0x00007f8c69cea000 ; {poll_return}
0x00007f8c5d06127d: retq
0x00007f8c5d06127e: hlt
0x00007f8c5d06127f: hlt
It uses two move instructions: one (mov %rdx,%r8) to copy the input reference (parm0) into an intermediate register (r8), and another (mov %r8d,0x10(%rsi)) to copy the lower 32 bits of that register into the destination field (the 32-bit store of a 64-bit reference is likely an artifact of compressed oops).
What happens if the CPU context switches after the first move instruction and another thread tries to read the value of the reference (obj)? It would get the old value, despite the fact that an earlier thread had already started copying in the new value. Is this desirable and expected? That's up to the application's requirements. In my case, I wanted first-come-first-served (FCFS) behavior for threads: the thread that started copying the source reference had to complete its write into the destination reference before other threads could use the destination's value. Hence, I had to use either a synchronized block or an AtomicReference.
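A sketch of the AtomicReference alternative mentioned above. compareAndSet publishes the new value atomically and lets the first writer win outright, which is one way to get the FCFS behavior described; the class and method names here are illustrative, not the original code:

```java
import java.util.concurrent.atomic.AtomicReference;

public class Holder {
    // AtomicReference makes both the write and its visibility atomic.
    private final AtomicReference<String> obj = new AtomicReference<>();

    // First thread to succeed wins; later callers observe the published value.
    public boolean setIfAbsent(String ref) {
        return obj.compareAndSet(null, ref);
    }

    public String get() {
        return obj.get();
    }

    public static void main(String[] args) {
        Holder h = new Holder();
        System.out.println(h.setIfAbsent("first"));  // true: first writer wins
        System.out.println(h.setIfAbsent("second")); // false: already set
        System.out.println(h.get());                 // first
    }
}
```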
The synchronized block in the set method translates cleanly into the assembly and enforces mutual exclusion over the shared reference:
[Entry Point]
[Constants]
# {method} 'set' '(LTest;)V' in 'Test'
# this: rsi:rsi = 'Test'
# parm0: rdx:rdx = 'Test'
# [sp+0x40] (sp of caller)
0x00007f713905f340: mov 0x8(%rsi),%r10d
0x00007f713905f344: cmp %r10,%rax
0x00007f713905f347: jne 0x00007f71390378a0 ; {runtime_call}
0x00007f713905f34d: xchg %ax,%ax
[Verified Entry Point]
0x00007f713905f350: mov %eax,-0x6000(%rsp)
0x00007f713905f357: push %rbp
0x00007f713905f358: sub $0x30,%rsp ;*synchronization entry ; - Test::set@-1 (line 19)
0x00007f713905f35c: mov %rdx,(%rsp)
0x00007f713905f360: mov %rsi,%rbp
0x00007f713905f363: mov (%rsi),%rax
0x00007f713905f366: mov %rax,%r10
0x00007f713905f369: and $0x7,%r10
0x00007f713905f36d: cmp $0x5,%r10
0x00007f713905f371: jne 0x00007f713905f3da
0x00007f713905f373: mov $0xcc29d340,%r11d ; {oop('Test')}
0x00007f713905f379: mov 0xb0(%r11),%r10
0x00007f713905f380: mov %r10,%r11
0x00007f713905f383: or %r15,%r11
0x00007f713905f386: mov %r11,%r8
0x00007f713905f389: xor %rax,%r8
0x00007f713905f38c: test $0xffffffffffffff87,%r8
0x00007f713905f393: jne 0x00007f713905f50e ;*monitorenter ; - Test::set@3 (line 19)
0x00007f713905f399: mov (%rsp),%r10
0x00007f713905f39d: mov %r10,%r11
0x00007f713905f3a0: mov %r11d,0x10(%rbp)
0x00007f713905f3a4: mov %rbp,%r10
0x00007f713905f3a7: shr $0x9,%r10
0x00007f713905f3ab: mov $0x7f7142670000,%r11
0x00007f713905f3b5: mov %r12b,(%r11,%r10,1)
0x00007f713905f3b9: mov $0x7,%r10d
0x00007f713905f3bf: and 0x0(%rbp),%r10
0x00007f713905f3c3: cmp $0x5,%r10
0x00007f713905f3c7: jne 0x00007f713905f445 ;*monitorexit ; - Test::set@10 (line 22)
Bottom line: writes to and reads of a reference are atomic, but assignment of a reference (copying from a source and writing into a destination) isn't, and when in doubt, the source of truth is in the assembly instructions. :-)