Android: getSystemService Implementation Explained


    I. Introduction

    What is a system service, and why do we use one?
    In Android development we constantly use various system manager services: WifiManager to operate on Wi-Fi, PowerManager to operate on power, BatteryManager for the battery, and so on.

    In short, system services are the mechanism Android provides for developers to configure and operate on the underlying system. Understanding them is essential for everyday Android development.

    // Obtain the power-related system service
    PowerManager pm = (PowerManager) context.getSystemService(Context.POWER_SERVICE);
    
    // Once we hold the PowerManager, we can perform various power operations through it.
    
    // Put the screen to sleep. Requires the permission android.Manifest.permission#DEVICE_POWER (system apps only).
    // Pitfall: the timestamp must be based on SystemClock.uptimeMillis(), otherwise the call has no effect;
    // SystemClock.uptimeMillis() sleeps the screen immediately, SystemClock.uptimeMillis() + 1000 sleeps it one second later.
    pm.goToSleep(SystemClock.uptimeMillis());
    
    // Wake the screen. Same permission and timestamp caveats as goToSleep().
    pm.wakeUp(SystemClock.uptimeMillis());
    
    // Check whether the screen is currently on (deprecated in favor of isInteractive() on newer APIs)
    pm.isScreenOn();
    
    // Check whether the device is in battery-saver mode (a system-level setter also exists)
    pm.isPowerSaveMode();
    
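    Since API 23 there is also a typed overload, getSystemService(Class), which avoids the cast and the service-name constant. A minimal usage sketch (standard public API, not taken from the source walkthrough below):

    // Typed overload (API 23+): no cast needed; returns null if the service is unavailable.
    PowerManager pm = context.getSystemService(PowerManager.class);
    if (pm != null && pm.isInteractive()) {
        // The screen is currently on / interactive.
    }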

    The getSystemService() method is declared on the Context class, but Context is abstract; the concrete
    implementation lives in ContextImpl. The ContextImpl version of getSystemService() looks like this:

        // In ContextImpl.java
        public Object getSystemService(String name) {
            ...
            return SystemServiceRegistry.getSystemService(this, name);
        }
    

    This delegates to SystemServiceRegistry.getSystemService():

    //SystemServiceRegistry.java
    public static Object getSystemService(ContextImpl ctx, String name) {
            if (name == null) {
                return null;
            }
            final ServiceFetcher<?> fetcher = SYSTEM_SERVICE_FETCHERS.get(name);
            ...
            final Object ret = fetcher.getService(ctx);
            if (sEnableServiceNotFoundWtf && ret == null) {
                // Some services do return null in certain situations, so don't do WTF for them.
                switch (name) {
                    case Context.CONTENT_CAPTURE_MANAGER_SERVICE:
                    case Context.APP_PREDICTION_SERVICE:
                    case Context.INCREMENTAL_SERVICE:
                        return null;
                }
                return null;
            }
            return ret;
        }
    

    SYSTEM_SERVICE_FETCHERS is an ArrayMap:

    private static final Map<String, ServiceFetcher<?>> SYSTEM_SERVICE_FETCHERS =
                new ArrayMap<String, ServiceFetcher<?>>();
    

    The SYSTEM_SERVICE_FETCHERS map is populated by registerService(), which, as its doc comment states, must only be called from the static initializer block:

    /**
         * Statically registers a system service with the context.
         * This method must be called during static initialization only.
         */
        private static <T> void registerService(@NonNull String serviceName,
                @NonNull Class<T> serviceClass, @NonNull ServiceFetcher<T> serviceFetcher) {
            SYSTEM_SERVICE_NAMES.put(serviceClass, serviceName);
            SYSTEM_SERVICE_FETCHERS.put(serviceName, serviceFetcher);
            SYSTEM_SERVICE_CLASS_NAMES.put(serviceName, serviceClass.getSimpleName());
        }
    
    static {
    ...
    registerService(Context.NSD_SERVICE, NsdManager.class,
                    new CachedServiceFetcher<NsdManager>() {
                @Override
                public NsdManager createService(ContextImpl ctx) throws ServiceNotFoundException {
                    IBinder b = ServiceManager.getServiceOrThrow(Context.NSD_SERVICE);
                    INsdManager service = INsdManager.Stub.asInterface(b);
                    return new NsdManager(ctx.getOuterContext(), service);
                }});
    
            registerService(Context.POWER_SERVICE, PowerManager.class,
                    new CachedServiceFetcher<PowerManager>() {
                @Override
                public PowerManager createService(ContextImpl ctx) throws ServiceNotFoundException {
                    IBinder powerBinder = ServiceManager.getServiceOrThrow(Context.POWER_SERVICE);
                    IPowerManager powerService = IPowerManager.Stub.asInterface(powerBinder);
                    IBinder thermalBinder = ServiceManager.getServiceOrThrow(Context.THERMAL_SERVICE);
                    IThermalService thermalService = IThermalService.Stub.asInterface(thermalBinder);
                    return new PowerManager(ctx.getOuterContext(), powerService, thermalService,
                            ctx.mMainThread.getHandler());
                }});
    ...
    }
    

    To understand CachedServiceFetcher, you only need to look at its createService() and getService() methods. createService() has no implementation of its own; each registration overrides it. getService() then calls createService() to build the corresponding manager object and caches it on the Context.

    static abstract class CachedServiceFetcher<T> implements ServiceFetcher<T> {
            private final int mCacheIndex;
    
            CachedServiceFetcher() {
                // Note this class must be instantiated only by the static initializer of the
                // outer class (SystemServiceRegistry), which already does the synchronization,
                // so bare access to sServiceCacheSize is okay here.
                mCacheIndex = sServiceCacheSize++;
            }
    
            @Override
            @SuppressWarnings("unchecked")
            public final T getService(ContextImpl ctx) {
                final Object[] cache = ctx.mServiceCache;
                final int[] gates = ctx.mServiceInitializationStateArray;
                boolean interrupted = false;
    
                T ret = null;
    
                for (;;) {
                    boolean doInitialize = false;
                    synchronized (cache) {
                        // Return it if we already have a cached instance.
                        T service = (T) cache[mCacheIndex];
                        if (service != null || gates[mCacheIndex] == ContextImpl.STATE_NOT_FOUND) {
                            ret = service;
                            break; // exit the for (;;)
                        }
    
                        // If we get here, there's no cached instance.
    
                        // Grr... if gate is STATE_READY, then this means we initialized the service
                        // once but someone cleared it.
                        // We start over from STATE_UNINITIALIZED.
                        if (gates[mCacheIndex] == ContextImpl.STATE_READY) {
                            gates[mCacheIndex] = ContextImpl.STATE_UNINITIALIZED;
                        }
    
                        // It's possible for multiple threads to get here at the same time, so
                        // use the "gate" to make sure only the first thread will call createService().
    
                        // At this point, the gate must be either UNINITIALIZED or INITIALIZING.
                        if (gates[mCacheIndex] == ContextImpl.STATE_UNINITIALIZED) {
                            doInitialize = true;
                            gates[mCacheIndex] = ContextImpl.STATE_INITIALIZING;
                        }
                    }
    
                    if (doInitialize) {
                        // Only the first thread gets here.
    
                        T service = null;
                        @ServiceInitializationState int newState = ContextImpl.STATE_NOT_FOUND;
                        try {
                            // This thread is the first one to get here. Instantiate the service
                            // *without* the cache lock held.
                        service = createService(ctx); // this calls the overridden createService()
                            newState = ContextImpl.STATE_READY;
    
                        } catch (ServiceNotFoundException e) {
                            onServiceNotFound(e);
    
                        } finally {
                            synchronized (cache) {
                                cache[mCacheIndex] = service;
                                gates[mCacheIndex] = newState;
                                cache.notifyAll();
                            }
                        }
                        ret = service;
                        break; // exit the for (;;)
                    }
                    // The other threads will wait for the first thread to call notifyAll(),
                    // and go back to the top and retry.
                    synchronized (cache) {
                        // Repeat until the state becomes STATE_READY or STATE_NOT_FOUND.
                        // We can't respond to interrupts here; just like we can't in the "doInitialize"
                        // path, so we remember the interrupt state here and re-interrupt later.
                        while (gates[mCacheIndex] < ContextImpl.STATE_READY) {
                            try {
                                // Clear the interrupt state.
                                interrupted |= Thread.interrupted();
                                cache.wait();
                            } catch (InterruptedException e) {
                                // This shouldn't normally happen, but if someone interrupts the
                                // thread, it will.
                                Slog.w(TAG, "getService() interrupted");
                                interrupted = true;
                            }
                        }
                    }
                }
                if (interrupted) {
                    Thread.currentThread().interrupt();
                }
                return ret;
            }
    
            public abstract T createService(ContextImpl ctx) throws ServiceNotFoundException;
        }
    
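    One practical consequence of this design: the cache array lives on each ContextImpl, so repeated getSystemService() calls on the same Context hand back the same cached manager instance. A tiny illustrative check (not part of the quoted source):

    PowerManager pm1 = (PowerManager) context.getSystemService(Context.POWER_SERVICE);
    PowerManager pm2 = (PowerManager) context.getSystemService(Context.POWER_SERVICE);
    // For a CachedServiceFetcher-backed service, pm1 == pm2 here, because the instance
    // was stored in ctx.mServiceCache[mCacheIndex] on the first call.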

    For example, the createService() registered for PowerManager in the static block above:

       public PowerManager createService(ContextImpl ctx) throws ServiceNotFoundException {
                IBinder powerBinder = ServiceManager.getServiceOrThrow(Context.POWER_SERVICE); // fetch the IBinder for the power service via ServiceManager
                IPowerManager powerService = IPowerManager.Stub.asInterface(powerBinder); // convert the IBinder into the AIDL proxy interface
                    IBinder thermalBinder = ServiceManager.getServiceOrThrow(Context.THERMAL_SERVICE);
                    IThermalService thermalService = IThermalService.Stub.asInterface(thermalBinder);
                    return new PowerManager(ctx.getOuterContext(), powerService, thermalService,
                            ctx.mMainThread.getHandler());
                }});
    

    With that, application code can use the functionality of the underlying service through the returned xxxManager.
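    These manager classes are thin wrappers around the AIDL proxy they were constructed with. Below is a minimal sketch of the usual pattern, using a hypothetical FooManager/IFooService purely for illustration; the real PowerManager follows the same shape but is not reproduced verbatim here.

    // Illustrative only: the typical shape of an xxxManager wrapping its binder proxy.
    // FooManager and IFooService are hypothetical; android.os.RemoteException is the real exception type.
    public final class FooManager {
        private final IFooService mService;        // AIDL-generated proxy, created in createService()

        FooManager(IFooService service) {
            mService = service;
        }

        public boolean isEnabled() {
            try {
                return mService.isEnabled();       // a single binder call into the system_server side
            } catch (RemoteException e) {
                throw e.rethrowFromSystemServer(); // framework-internal convention in manager classes
            }
        }
    }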

    II. Analysis of the ServiceManager flow
    getServiceOrThrow() lives in frameworks/base/core/java/android/os/ServiceManager.java:

    public static IBinder getServiceOrThrow(String name) throws ServiceNotFoundException {
            final IBinder binder = getService(name);
            if (binder != null) {
                return binder;
            } else {
                throw new ServiceNotFoundException(name);
            }
        }
    
    
    public static IBinder getService(String name) {
            try {
                IBinder service = sCache.get(name);
                if (service != null) {
                    return service;
                } else {
                    return Binder.allowBlocking(rawGetService(name));
                }
            } catch (RemoteException e) {
                Log.e(TAG, "error in getService", e);
            }
            return null;
        }
    // sCache is a per-process cache; see https://blog.csdn.net/nihaomabmt/article/details/116784540 for a separate analysis
    
    
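    For context, a hedged sketch of what that cache looks like inside ServiceManager.java (the field type and the initServiceCache() entry point are recalled from AOSP and may differ slightly between releases):

    // Sketch, not verbatim AOSP: ServiceManager keeps a per-process name -> IBinder cache.
    private static Map<String, IBinder> sCache = new ArrayMap<String, IBinder>();

    /** Pre-populates the cache; ActivityThread calls this while the app process is being bound. */
    public static void initServiceCache(Map<String, IBinder> cache) {
        sCache.putAll(cache); // simplified: the real method also guards against being called twice
    }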

    If the cache misses, rawGetService() is called; inside it, getIServiceManager() is clearly a cross-process call. Both are shown below:

    private static IBinder rawGetService(String name) throws RemoteException {
            final long start = sStatLogger.getTime();
    
            final IBinder binder = getIServiceManager().getService(name);
            ...
    }
    
        private static IServiceManager getIServiceManager() {
            if (sServiceManager != null) {
                return sServiceManager;
            }
    
            // Find the service manager
            sServiceManager = ServiceManagerNative
               .asInterface(Binder.allowBlocking(BinderInternal.getContextObject()));
            return sServiceManager;
        }
    
    

    This in turn calls BinderInternal.getContextObject(). getContextObject() is a native method, so we follow it into the native layer; the corresponding JNI function is in
    frameworks/base/core/jni/android_util_Binder.cpp:

    static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
    {
        sp<IBinder> b = ProcessState::self()->getContextObject(NULL);
        return javaObjectForIBinder(env, b);
    }
    

    As you can see, it calls ProcessState::self()->getContextObject() to obtain the native-level IBinder, and then javaObjectForIBinder() to convert that object into a Java object.
    First, ProcessState::getContextObject():

    sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
    {
        sp<IBinder> context = getStrongProxyForHandle(0);
        return context;
    }
    

    This simply calls getStrongProxyForHandle(0). Note that the handle passed in is 0: ServiceManager is permanently assigned handle 0, and it is the foundation of all cross-process communication; almost every cross-process call has to go through it.

    sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
    {
        sp<IBinder> result;
    
        AutoMutex _l(mLock);
    
        handle_entry* e = lookupHandleLocked(handle);
    
        if (e != NULL) {
            IBinder* b = e->binder;
            if (b == NULL || !e->refs->attemptIncWeak(this)) {
                if (handle == 0) {
                    Parcel data;
                    status_t status = IPCThreadState::self()->transact(
                            0, IBinder::PING_TRANSACTION, data, NULL, 0);
                    if (status == DEAD_OBJECT)
                       return NULL;
                }
    
                b = new BpBinder(handle); 
                e->binder = b;
                if (b) e->refs = b->getWeakRefs();
                result = b;
            } else {
                // This little bit of nastyness is to allow us to add a primary
                // reference to the remote proxy when this team doesn't have one
                // but another team is sending the handle to us.
                result.force_set(b);
                e->refs->decWeak(this);
            }
        }
    
        return result;
    }
    
    

    On the first call the looked-up handle_entry's binder is still null, so this path ends up returning a new BpBinder(0). In other words, the IBinder a process obtains for ServiceManager is essentially a BpBinder carrying a handle value; that handle is passed down to the binder driver, which uses it to decide which process to talk to, and handle 0 is of course the ServiceManager process.
    At this point we have a BpBinder, but it is still a native-level object. How is it turned into a Java-level object? That is the job of javaObjectForIBinder(env, b):

    jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
    {
        // ... omitted
        // Construct the Java object of the class cached in gBinderProxyOffsets (android.os.BinderProxy)
        object = env->NewObject(gBinderProxyOffsets.mClass, gBinderProxyOffsets.mConstructor);
        if (object != NULL) {
            // The proxy holds a reference to the native object.
            env->SetLongField(object, gBinderProxyOffsets.mObject, (jlong)val.get()); // store the C++ pointer val into the Java field gBinderProxyOffsets.mObject, i.e. binderProxy.mObject = new BpBinder(0)
            val->incStrong((void*)javaObjectForIBinder);
    
            // The native object needs to hold a weak reference back to the
            // proxy, so we can retrieve the same proxy if it is still active.
            jobject refObject = env->NewGlobalRef(
                    env->GetObjectField(object, gBinderProxyOffsets.mSelf));
            // So far we have only created the Java object from val (the Java side can reach the native pointer);
            // the native side also needs a way back to this Java proxy, which is what this attachObject() call establishes.
            val->attachObject(&gBinderProxyOffsets, refObject,
                    jnienv_to_javavm(env), proxy_cleanup);
    
            // Also remember the death recipients registered on this proxy
            sp<DeathRecipientList> drl = new DeathRecipientList;
            drl->incStrong((void*)javaObjectForIBinder);
            env->SetLongField(object, gBinderProxyOffsets.mOrgue, reinterpret_cast<jlong>(drl.get()));
    
            // Note that a new object reference has been created.
            android_atomic_inc(&gNumProxyRefs);
            incRefsCreated(env);
        }
    
        return object;
    }
    

    gBinderProxyOffsets is populated as follows:

    const char* const kBinderProxyPathName = "android/os/BinderProxy";
    
    static int int_register_android_os_BinderProxy(JNIEnv* env)
    {
        gErrorOffsets.mError = MakeGlobalRefOrDie(env, FindClassOrDie(env, "java/lang/Error"));
        gErrorOffsets.mOutOfMemory =
            MakeGlobalRefOrDie(env, FindClassOrDie(env, "java/lang/OutOfMemoryError"));
        gErrorOffsets.mStackOverflow =
            MakeGlobalRefOrDie(env, FindClassOrDie(env, "java/lang/StackOverflowError"));
    
        jclass clazz = FindClassOrDie(env, kBinderProxyPathName);
        gBinderProxyOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
        gBinderProxyOffsets.mGetInstance = GetStaticMethodIDOrDie(env, clazz, "getInstance",
                "(JJ)Landroid/os/BinderProxy;");
        gBinderProxyOffsets.mSendDeathNotice =
                GetStaticMethodIDOrDie(env, clazz, "sendDeathNotice",
                                       "(Landroid/os/IBinder$DeathRecipient;Landroid/os/IBinder;)V");
        gBinderProxyOffsets.mNativeData = GetFieldIDOrDie(env, clazz, "mNativeData", "J");
    
        clazz = FindClassOrDie(env, "java/lang/Class");
        gClassOffsets.mGetName = GetMethodIDOrDie(env, clazz, "getName", "()Ljava/lang/String;");
    
        return RegisterMethodsOrDie(
            env, kBinderProxyPathName,
            gBinderProxyMethods, NELEM(gBinderProxyMethods));
    }
    

    Now back to where we started:
    ServiceManagerNative
    .asInterface(Binder.allowBlocking(BinderInternal.getContextObject()));
    allowBlocking() can be ignored for now; as the name suggests it only concerns whether calls are allowed to block. BinderInternal.getContextObject() already returns a BinderProxy, so asInterface() boils down to
    new ServiceManagerProxy(obj), where obj becomes mRemote inside ServiceManagerProxy; all communication ultimately goes through this mRemote's transact() method.

    // In frameworks/base/core/java/android/os/ServiceManagerNative.java
    // obj here is the BinderProxy
    public static IServiceManager asInterface(IBinder obj) {
            if (obj == null) {
                return null;
            }
            // ServiceManager is never local
            return new ServiceManagerProxy(obj);
        }
    
    class ServiceManagerProxy implements IServiceManager {
        public ServiceManagerProxy(IBinder remote) {
            mRemote = remote;
            mServiceManager = IServiceManager.Stub.asInterface(remote);
        }
        ...
    }
    

    That completes the analysis of getIServiceManager(): it returns a new ServiceManagerProxy(obj), where obj is an android.os.BinderProxy.
    mRemote is that BinderProxy, and mServiceManager is the entry point for all the interface methods, i.e. the real proxy for ServiceManager; we continue with it below.
    Android compiles .aidl files into corresponding Java files; the generated Java sources end up under out/soong/.intermediates/ and are packed into aidl.srcjar. To pin down the exact file, rely on the *.aidl.d files under the frameworks or system directories.

    If the .aidl file is defined under frameworks, look in the corresponding directory under frameworks for the matching *.aidl.d file.

    The IServiceManager here is the class compiled from IServiceManager.aidl.
    Once the BinderProxy is obtained, asInterface() creates ServiceManagerProxy, the Java-side proxy for ServiceManager:

    frameworks/base/core/java/android/os/ServiceManagerNative.java
     
    class ServiceManagerProxy implements IServiceManager {
        public ServiceManagerProxy(IBinder remote) {
            mRemote = remote;
            mServiceManager = IServiceManager.Stub.asInterface(remote);
        }
        ...
        public void addService(String name, IBinder service, boolean allowIsolated, int dumpPriority)
                throws RemoteException {
            mServiceManager.addService(name, service, allowIsolated, dumpPriority);
        }
        ...
        private IBinder mRemote;
        private IServiceManager mServiceManager;
    }
    

    As noted above, mRemote is the BinderProxy object, while mServiceManager is the entry point for every interface method: it is the proxy generated from IServiceManager.aidl.
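    Stub.asInterface() itself is not shown in the listings above; here is a sketch of the standard template the AIDL compiler generates for it (reconstructed from the general AIDL pattern rather than copied from AOSP, so details such as the DESCRIPTOR handling are assumptions):

    // Sketch of the AIDL-generated Stub.asInterface() (standard template, not verbatim generated code)
    public static android.os.IServiceManager asInterface(android.os.IBinder obj) {
        if (obj == null) {
            return null;
        }
        // If the service lives in the current process, return the local Stub implementation directly.
        android.os.IInterface iin = obj.queryLocalInterface(DESCRIPTOR);
        if (iin != null && iin instanceof android.os.IServiceManager) {
            return (android.os.IServiceManager) iin;
        }
        // Otherwise wrap the BinderProxy in the generated Proxy class.
        return new android.os.IServiceManager.Stub.Proxy(obj);
    }

    Each Proxy method marshals its arguments into a Parcel and calls mRemote.transact(); the generated addService(), for example, looks like this: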

          @Override public void addService(java.lang.String name, android.os.IBinder service, boolean allowIsolated, int dumpPriority) throws android.os.RemoteException
          {
            android.os.Parcel _data = android.os.Parcel.obtain();
            android.os.Parcel _reply = android.os.Parcel.obtain();
            try {
              _data.writeInterfaceToken(DESCRIPTOR);
              _data.writeStrongBinder(service);
              ...
              boolean _status = mRemote.transact(Stub.TRANSACTION_addService, _data, _reply, 0);
              if (!_status && getDefaultImpl() != null) {
                getDefaultImpl().addService(name, service, allowIsolated, dumpPriority);
                return;
              }
              _reply.readException();
            }
            ...
          }
    

    The proxy code is fairly simple:

    • writeStrongBinder() writes the service being added into the Parcel;
    • mRemote.transact() performs the actual communication and determines whether addService() succeeded;
    • if that fails, it falls back to addService() on getDefaultImpl();
      normally the second step is what runs; the default impl used in the third step has to be registered via setDefaultImpl().

    As established in Section 2.2, mRemote is ServiceManager's Java-side BinderProxy object.
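    The getService() path that getSystemService() actually relies on is marshalled the same way. Below is a hedged sketch of the generated Proxy.getService(), reconstructed from the standard AIDL template (the transaction constant and descriptor handling are assumptions, not copied from AOSP); the point is that the reply's readStrongBinder() is what hands the IBinder back to rawGetService():

    // Sketch, not verbatim generated code: how Proxy.getService() reads the IBinder back.
    @Override public android.os.IBinder getService(java.lang.String name) throws android.os.RemoteException {
        android.os.Parcel _data = android.os.Parcel.obtain();
        android.os.Parcel _reply = android.os.Parcel.obtain();
        android.os.IBinder _result;
        try {
            _data.writeInterfaceToken(DESCRIPTOR);
            _data.writeString(name);
            mRemote.transact(Stub.TRANSACTION_getService, _data, _reply, 0);
            _reply.readException();
            _result = _reply.readStrongBinder();   // the IBinder that rawGetService() ultimately returns
        } finally {
            _reply.recycle();
            _data.recycle();
        }
        return _result;
    }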

    3.1 writeStrongBinder()

    frameworks/base/core/java/android/os/Parcel.java
     
        public final void writeStrongBinder(IBinder val) {
            nativeWriteStrongBinder(mNativePtr, val);
        }
    

    The JNI side lives in android_os_Parcel.cpp and is likewise registered when zygote starts. We won't dig into that here; for details see the register_android_os_Parcel entry in the gRegJNI array in AndroidRuntime.cpp.

    It ultimately reaches the JNI function android_os_Parcel_writeStrongBinder():

    frameworks/base/core/jni/android_os_Parcel.cpp
     
    static void android_os_Parcel_writeStrongBinder(JNIEnv* env, jclass clazz, jlong nativePtr, jobject object)
    {
        Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
        if (parcel != NULL) {
            const status_t err = parcel->writeStrongBinder(ibinderForJavaObject(env, object));
            if (err != NO_ERROR) {
                signalExceptionForError(env, clazz, err);
            }
        }
    }
    
    // In frameworks/native/libs/binder/Parcel.cpp
    status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
    {
        return flattenBinder(val);
    }
    
    status_t Parcel::flattenBinder(const sp<IBinder>& binder)
    {
        flat_binder_object obj;
        obj.flags = FLAT_BINDER_FLAG_ACCEPTS_FDS;
        if (binder != nullptr) {
            BBinder *local = binder->localBinder();
            if (!local) {
                BpBinder *proxy = binder->remoteBinder();
                if (proxy == nullptr) {
                    ALOGE("null proxy");
                } else {
                    if (proxy->isRpcBinder()) {
                        ALOGE("Sending a socket binder over RPC is prohibited");
                        return INVALID_OPERATION;
                    }
                }
                const int32_t handle = proxy ? proxy->getPrivateAccessorForId().binderHandle() : 0;
                obj.hdr.type = BINDER_TYPE_HANDLE;
                obj.binder = 0; /* Don't pass uninitialized stack data to a remote process */
                obj.handle = handle;
                obj.cookie = 0;
            } else {
                int policy = local->getMinSchedulerPolicy();
                int priority = local->getMinSchedulerPriority();
                ...
                obj.hdr.type = BINDER_TYPE_BINDER;
                obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
            obj.cookie = reinterpret_cast<uintptr_t>(local); // the BBinder pointer is stored here, in the cookie
            }
        } else {
            obj.hdr.type = BINDER_TYPE_BINDER;
            obj.binder = 0;
            obj.cookie = 0;
        }
    
        obj.flags |= schedBits;
    
        status_t status = writeObject(obj, false);
        if (status != OK) return status;
    
        return finishFlattenBinder(binder);
    }
    

    Now for the core of it, ibinderForJavaObject():

    frameworks/base/core/jni/android_util_Binder.cpp
    sp<IBinder> ibinderForJavaObject(JNIEnv* env, jobject obj)
    {
        if (obj == NULL) return NULL;
     
        // Instance of Binder?
        if (env->IsInstanceOf(obj, gBinderOffsets.mClass)) {
            JavaBBinderHolder* jbh = (JavaBBinderHolder*)
                env->GetLongField(obj, gBinderOffsets.mObject);
            return jbh->get(env, obj);
        }
     
        // Instance of BinderProxy?
        if (env->IsInstanceOf(obj, gBinderProxyOffsets.mClass)) {
            return getBPNativeData(env, obj)->mObject;
        }
     
        ALOGW("ibinderForJavaObject: %p is not a Binder object", obj);
        return NULL;
    }
    

    A genuine local service is of type Binder, so it matches against gBinderOffsets.mClass; that variable is also assigned at registration time, see int_register_android_os_Binder().

    gBinderOffsets.mObject refers to the mObject field in Binder.java (again, see int_register_android_os_Binder()), i.e. the JavaBBinderHolder pointer obtained from native code when the Binder is constructed:

    frameworks/base/core/java/android/os/Binder.java
     
        public Binder(@Nullable String descriptor)  {
            mObject = getNativeBBinderHolder();
     
            ...
        }
    
    frameworks/base/core/jni/android_util_Binder.cpp
     
    static jlong android_os_Binder_getNativeBBinderHolder(JNIEnv* env, jobject clazz)
    {
        JavaBBinderHolder* jbh = new JavaBBinderHolder();
        return (jlong) jbh;
    }
    

    With that JavaBBinderHolder in hand, ibinderForJavaObject() finally calls get() on the holder to obtain the service's BBinder:

    frameworks/base/core/jni/android_util_Binder.cpp
        sp<JavaBBinder> get(JNIEnv* env, jobject obj)
        {
            AutoMutex _l(mLock);
            sp<JavaBBinder> b = mBinder.promote();
            if (b == NULL) {
                b = new JavaBBinder(env, obj);
                ...
            }
     
            return b;
        }
    

    And JavaBBinder itself:

    frameworks/base/core/jni/android_util_Binder.cpp
     
    class JavaBBinder : public BBinder
    {
        ...
    }
    

    So writeStrongBinder() ultimately writes the native BBinder into the Parcel, which is then passed into transact().

    BinderProxy.transact()

    frameworks/base/core/java/android/os/BinderProxy.java
     
        public boolean transact(int code, Parcel data, Parcel reply, int flags) throws RemoteException {
            Binder.checkParcel(this, code, data, "Unreasonably large binder buffer");
     
            ....
            try {
                return transactNative(code, data, reply, flags);
            } finally {
                ...
            }
        }
    

    The core of it is the call to transactNative():

    frameworks/base/core/jni/android_util_Binder.cpp
     
    static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
            jint code, jobject dataObj, jobject replyObj, jint flags) // throws RemoteException
    {
        ...
        IBinder* target = getBPNativeData(env, obj)->mObject.get(); // this is the BpBinder; for ServiceManager it is BpBinder(0)
        status_t err = target->transact(code, *data, reply, flags);
        ...
    }
    

    Next, BpBinder::transact():

    // In frameworks/native/libs/binder/BpBinder.cpp
    status_t BpBinder::transact(
        uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
    {
        // Once a binder has died, it will never come back to life.
        if (mAlive) {
            ...
            status = IPCThreadState::self()->transact(binderHandle(), code, data, reply, flags);
            ...
            if (status == DEAD_OBJECT) mAlive = 0;

            return status;
        }

        return DEAD_OBJECT;
    }
    
    // In frameworks/native/libs/binder/IPCThreadState.cpp
    status_t IPCThreadState::transact(int32_t handle,
                                      uint32_t code, const Parcel& data,
                                      Parcel* reply, uint32_t flags)
    {
      ...
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, nullptr);
     ...
        if ((flags & TF_ONE_WAY) == 0) {
            ...
            if (reply) {
            err = waitForResponse(reply); // wait here for the driver's reply
            } else {
                Parcel fakeReply;
                err = waitForResponse(&fakeReply);
            }
        ...
        return err;
    }
    
    // Convert the data held in the Parcel into a binder_transaction_data, the structure shared with the driver
    status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
        int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
    {
        binder_transaction_data tr;
    
        tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
        tr.target.handle = handle;
        tr.code = code;
        tr.flags = binderFlags;
        tr.cookie = 0;
        tr.sender_pid = 0;
        tr.sender_euid = 0;
    
        const status_t err = data.errorCheck();
        if (err == NO_ERROR) {
            tr.data_size = data.ipcDataSize();
            tr.data.ptr.buffer = data.ipcData();
            tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
            tr.data.ptr.offsets = data.ipcObjects();
        } else if (statusBuffer) {
            tr.flags |= TF_STATUS_CODE;
            *statusBuffer = err;
            tr.data_size = sizeof(status_t);
            tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
            tr.offsets_size = 0;
            tr.data.ptr.offsets = 0;
        } else {
            return (mLastError = err);
        }
    
        mOut.writeInt32(cmd);
        mOut.write(&tr, sizeof(tr));
    
        return NO_ERROR;
    }
    

    Now waitForResponse(), whose main job is to wait for the driver's reply:

    // In frameworks/native/libs/binder/IPCThreadState.cpp
    status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
    {
        uint32_t cmd;
        int32_t err;
    
        while (1) {
            if ((err=talkWithDriver()) < NO_ERROR) break; // talkWithDriver() is what actually exchanges data with the binder driver
            err = mIn.errorCheck();
            if (err < NO_ERROR) break;
            if (mIn.dataAvail() == 0) continue;
    
            cmd = (uint32_t)mIn.readInt32();
            switch (cmd) {
            case BR_ONEWAY_SPAM_SUSPECT:
                ALOGE("Process seems to be sending too many oneway calls.");
                CallStack::logStack("oneway spamming", CallStack::getCurrent().get(),
                        ANDROID_LOG_ERROR);
                [[fallthrough]];
            case BR_TRANSACTION_COMPLETE:
                if (!reply && !acquireResult) goto finish;
                break;
    
            case BR_DEAD_REPLY:
                err = DEAD_OBJECT;
                goto finish;
    
            case BR_FAILED_REPLY:
                err = FAILED_TRANSACTION;
                goto finish;
    
            case BR_FROZEN_REPLY:
                err = FAILED_TRANSACTION;
                goto finish;
    
            case BR_ACQUIRE_RESULT:
                {
                    ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                    const int32_t result = mIn.readInt32();
                    if (!acquireResult) continue;
                    *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
                }
                goto finish;
    
            case BR_REPLY:
                {
                    binder_transaction_data tr;
                    err = mIn.read(&tr, sizeof(tr));
                    ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                    if (err != NO_ERROR) goto finish;
    
                    if (reply) {
                        if ((tr.flags & TF_STATUS_CODE) == 0) {
                            reply->ipcSetDataReference(
                                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                                tr.data_size,
                                reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                                tr.offsets_size/sizeof(binder_size_t),
                                freeBuffer);
                        } else {
                            err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                            freeBuffer(nullptr,
                                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                                tr.data_size,
                                reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                                tr.offsets_size/sizeof(binder_size_t));
                        }
                    } else {
                        freeBuffer(nullptr,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t));
                        continue;
                    }
                }
                goto finish;
    
            default:
                err = executeCommand(cmd);
                if (err != NO_ERROR) goto finish;
                break;
            }
        }
    
    finish:
        if (err != NO_ERROR) {
            if (acquireResult) *acquireResult = err;
            if (reply) reply->setError(err);
            mLastError = err;
        }
    
        return err;
    }
    
  • Source: https://blog.csdn.net/qq_34888036/article/details/133676517