I've recently been studying a paper from Georgia Tech. In the paper, they developed an app that can overlay itself on top of a target app and mimic the target app's interface, so that the user actually performs their input inside the disguised app. This makes it possible to monitor all of the user's input and actions, and the user's operations are then fed into the target app through the Android accessibility API, so the monitoring can continue unnoticed.

The part where Android accessibility is used to feed the user's operations into the target app is already implemented, but I don't know how the feature that mimics the target app's interface is implemented. Any ideas would be appreciated, thanks!
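For reference, the relay side that is already working looks roughly like the sketch below. This is only a minimal illustration, not code from the paper: the service name and the view IDs passed in are placeholders, and the real ones depend on the target app.

```java
import android.accessibilityservice.AccessibilityService;
import android.os.Bundle;
import android.view.accessibility.AccessibilityEvent;
import android.view.accessibility.AccessibilityNodeInfo;

import java.util.List;

// Placeholder name; the real service and view IDs depend on the target app (TCA).
public class RelayService extends AccessibilityService {

    // Types the given text into the TCA's input field and clicks its send
    // button on the user's behalf, using the accessibility node tree.
    public void relayText(String text, String inputFieldId, String sendButtonId) {
        AccessibilityNodeInfo root = getRootInActiveWindow();
        if (root == null) return;

        List<AccessibilityNodeInfo> fields =
                root.findAccessibilityNodeInfosByViewId(inputFieldId);
        if (!fields.isEmpty()) {
            Bundle args = new Bundle();
            args.putCharSequence(
                    AccessibilityNodeInfo.ACTION_ARGUMENT_SET_TEXT_CHARSEQUENCE, text);
            fields.get(0).performAction(AccessibilityNodeInfo.ACTION_SET_TEXT, args);
        }

        List<AccessibilityNodeInfo> buttons =
                root.findAccessibilityNodeInfosByViewId(sendButtonId);
        if (!buttons.isEmpty()) {
            buttons.get(0).performAction(AccessibilityNodeInfo.ACTION_CLICK);
        }
    }

    @Override public void onAccessibilityEvent(AccessibilityEvent event) { }
    @Override public void onInterrupt() { }
}
```

A service like this has to be declared in the manifest and enabled by the user under the accessibility settings before it can read the node tree or perform actions.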
Here is the original text from the paper:
First, although M-Aegis is confined within the OS' app sandbox, it must be able to determine with which TCA (target client app) the user is currently interacting. This allows M-Aegis to invoke specific logic to handle the TCA, and helps M-Aegis clean up the screen when the TCA is terminated. Second, M-Aegis requires information about the GUI layout for the TCA it is currently handling. This allows M-Aegis to properly render mimic GUIs on L-7.5 to intercept user I/O. Third, although isolated from the TCA, M-Aegis must be able to communicate with the TCA to maintain functionality and ensure user experience is not disrupted. For example, M-Aegis must be able to relay user clicks to the TCA, eventually send encrypted data to the TCA, and click on TCA's button on behalf of the user. For output on screen, it must be able to capture ciphertext so that it can decrypt it and then render it on L-7.5.
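The paper doesn't spell out the mechanism behind the mimic GUIs, but the three requirements above can all plausibly be met from within a single AccessibilityService: window-state-changed events tell you which package (TCA) is in the foreground, the accessibility node tree exposes the TCA's GUI layout together with each widget's on-screen bounds, and a look-alike view can then be drawn exactly over the real widget via WindowManager. The sketch below is only my guess at how this could be wired together, assuming a made-up TCA package name and view ID and using TYPE_ACCESSIBILITY_OVERLAY as the window type; the actual M-Aegis implementation may use a different overlay type (e.g. one that requires the SYSTEM_ALERT_WINDOW permission).

```java
import android.accessibilityservice.AccessibilityService;
import android.graphics.PixelFormat;
import android.graphics.Rect;
import android.view.Gravity;
import android.view.WindowManager;
import android.view.accessibility.AccessibilityEvent;
import android.view.accessibility.AccessibilityNodeInfo;
import android.widget.EditText;

import java.util.List;

// Placeholder names throughout: the TCA package and view ID are assumptions, not from the paper.
public class MimicOverlayService extends AccessibilityService {

    private static final String TCA_PACKAGE = "com.example.targetapp";          // assumed
    private static final String TCA_INPUT_ID = TCA_PACKAGE + ":id/message_box"; // assumed
    private EditText mimicInput; // the look-alike input box rendered over the TCA

    @Override
    public void onAccessibilityEvent(AccessibilityEvent event) {
        // 1. Determine which app the user is currently interacting with.
        if (event.getEventType() != AccessibilityEvent.TYPE_WINDOW_STATE_CHANGED) return;
        CharSequence pkg = event.getPackageName();
        if (pkg == null || !TCA_PACKAGE.contentEquals(pkg)) {
            removeMimic(); // clean up the screen when the TCA leaves the foreground
            return;
        }

        // 2. Read the TCA's GUI layout and locate the widget to cover.
        AccessibilityNodeInfo root = getRootInActiveWindow();
        if (root == null) return;
        List<AccessibilityNodeInfo> targets =
                root.findAccessibilityNodeInfosByViewId(TCA_INPUT_ID);
        if (targets.isEmpty()) return;

        Rect bounds = new Rect();
        targets.get(0).getBoundsInScreen(bounds);

        // 3. Render a look-alike widget exactly over the real one to intercept input.
        showMimic(bounds);
    }

    private void showMimic(Rect bounds) {
        WindowManager wm = (WindowManager) getSystemService(WINDOW_SERVICE);
        if (mimicInput != null) wm.removeView(mimicInput);
        mimicInput = new EditText(this); // style it to match the TCA's own input box

        WindowManager.LayoutParams lp = new WindowManager.LayoutParams(
                bounds.width(), bounds.height(),
                WindowManager.LayoutParams.TYPE_ACCESSIBILITY_OVERLAY,
                // keep the window focusable so it receives the user's typing,
                // but let touches outside its bounds fall through to the TCA
                WindowManager.LayoutParams.FLAG_NOT_TOUCH_MODAL,
                PixelFormat.TRANSLUCENT);
        lp.gravity = Gravity.TOP | Gravity.START;
        lp.x = bounds.left;
        lp.y = bounds.top;
        wm.addView(mimicInput, lp);
    }

    private void removeMimic() {
        if (mimicInput != null) {
            ((WindowManager) getSystemService(WINDOW_SERVICE)).removeView(mimicInput);
            mimicInput = null;
        }
    }

    @Override
    public void onInterrupt() { }
}
```

Note that getBoundsInScreen() returns absolute screen coordinates, so depending on the window type the y position may need adjusting for the status bar before the overlay lines up exactly with the real widget.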