Assuming module foo with method bar:
import foo
method_to_call = getattr(foo, 'bar')
result = method_to_call()
You could shorten the last two lines to:
result = getattr(foo, 'bar')()
if that makes more sense for your use case.
You can use getattr in this fashion on class instance bound methods, module-level functions, class methods... the list goes on.
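For instance, a minimal sketch (the Greeter class here is just an illustration, not from the original):
class Greeter:
    def hello(self, name):
        return f"Hello, {name}"

g = Greeter()
bound = getattr(g, 'hello')         # look up the bound method by name
print(bound('world'))               # Hello, world

import math
print(getattr(math, 'sqrt')(16))    # 4.0 -- the same trick on a module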
Maybe a bit of example code will help. Notice the difference in the call signatures of foo, class_foo and static_foo:
class A(object):
    def foo(self, x):
        print(f"executing foo({self}, {x})")

    @classmethod
    def class_foo(cls, x):
        print(f"executing class_foo({cls}, {x})")

    @staticmethod
    def static_foo(x):
        print(f"executing static_foo({x})")

a = A()
Below is the usual way an object instance calls a method. The object instance, a, is implicitly passed as the first argument.
a.foo(1)
# executing foo(<__main__.A object at 0xb7dbef0c>, 1)
With classmethods, the class of the object instance is implicitly passed as the first argument instead of self.
a.class_foo(1)
# executing class_foo(<class '__main__.A'>, 1)
You can also call class_foo using the class. In fact, if you define something to be a classmethod, it is probably because you intend to call it from the class rather than from a class instance. A.foo(1) would have raised a TypeError, but A.class_foo(1) works just fine:
A.class_foo(1)
# executing class_foo(<class '__main__.A'>, 1)
One use people have found for class methods is to create inheritable alternative constructors.
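For example, a sketch (the Date class and from_string are hypothetical illustrations, not part of the original answer):
class Date(object):
    def __init__(self, year, month, day):
        self.year, self.month, self.day = year, month, day

    @classmethod
    def from_string(cls, s):
        # using cls (not Date) means subclasses inherit this constructor
        year, month, day = map(int, s.split('-'))
        return cls(year, month, day)

class EuroDate(Date):
    pass

d = Date.from_string('2024-01-31')
e = EuroDate.from_string('2024-01-31')  # a EuroDate, inherited for free
print(type(e))                          # <class '__main__.EuroDate'>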
With staticmethods, neither self (the object instance) nor cls (the class) is implicitly passed as the first argument. They behave like plain functions except that you can call them from an instance or the class:
a.static_foo(1)
# executing static_foo(1)
A.static_foo('hi')
# executing static_foo(hi)
Staticmethods are used to group functions that have some logical connection with a class to the class itself.
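Continuing the hypothetical Date sketch from above, a validator that needs neither self nor cls is a natural staticmethod:
class Date(object):
    # ... constructor as above ...

    @staticmethod
    def is_valid(s):
        # belongs with Date conceptually, but touches no instance or class state
        parts = s.split('-')
        return len(parts) == 3 and all(p.isdigit() for p in parts)

print(Date.is_valid('2024-01-31'))  # True
print(Date.is_valid('hello'))       # False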
foo is just a function, but when you call a.foo you don't just get the function, you get a "partially applied" version of the function with the object instance a bound as the first argument. foo expects 2 arguments, while a.foo only expects 1 argument. a is bound to foo. That is what is meant by the term "bound" below:
print(a.foo)
# <bound method A.foo of <__main__.A object at 0xb7d52f0c>>
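To make the partial-application point concrete, these calls (reusing a and A from above) all do the same thing; functools.partial is only an analogy here, not what Python uses internally:
from functools import partial

a.foo(1)              # executing foo(<__main__.A object ...>, 1)
A.foo(a, 1)           # identical: the instance is passed explicitly
partial(A.foo, a)(1)  # roughly how the bound method a.foo behaves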
With a.class_foo, a is not bound to class_foo; rather, the class A is bound to class_foo.
print(a.class_foo)
# <bound method type.class_foo of <class '__main__.A'>>
Here, with a staticmethod, even though it is a method, a.static_foo just returns a good ol' function with no arguments bound. static_foo expects 1 argument, and a.static_foo expects 1 argument too.
print(a.static_foo)
# <function static_foo at 0xb7d479cc>
And of course the same thing happens when you call static_foo with the class A instead.
print(A.static_foo)
# <function static_foo at 0xb7d479cc>
Best Solution
EDIT: I'm changing my answer so you avoid pain. multiprocessing is immature, the docs on BaseManager are INCORRECT, and if you're an object-oriented thinker who wants to create shared objects on the fly at run-time, USE PYRO OR YOU WILL SERIOUSLY REGRET IT! If you are just doing functional programming using a shared queue that you register up front, like all the stupid examples, GOOD FOR YOU.
Short Answer
Multiprocessing:
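(The original snippet didn't survive; here is a minimal sketch of the register-up-front multiprocessing style, with Foo as a placeholder class:)
from multiprocessing.managers import BaseManager

class Foo(object):
    def bar(self):
        return 'bar'

class MyManager(BaseManager):
    pass

MyManager.register('Foo', Foo)   # everything must be registered up front

if __name__ == '__main__':
    manager = MyManager()
    manager.start()
    foo = manager.Foo()    # a proxy; calls execute in the manager process
    print(foo.bar())       # bar
    manager.shutdown()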
Pyro:
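(Also missing; the original answer predates Pyro4, but a minimal sketch in the later Pyro4 API looks roughly like this:)
import Pyro4

@Pyro4.expose
class Foo(object):
    def bar(self):
        return 'bar'

daemon = Pyro4.Daemon()
uri = daemon.register(Foo())   # objects can be registered on the fly
print(uri)                     # hand this URI to clients
daemon.requestLoop()

# in a client process, given the printed uri:
#   foo = Pyro4.Proxy(uri)
#   print(foo.bar())           # bar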
Edit: The first time I answered this I had just dived into 2.6 multiprocessing. In the code I show below, the Texture class is registered and shared as a proxy, but the "data" attribute inside of it is NOT. So guess what happens: each process has a separate copy of the "data" attribute inside of the Texture proxy, despite what you might expect. I just spent untold hours trying to figure out a good pattern for creating shared objects at run-time and I kept running into brick walls. It has been quite confusing and frustrating. Maybe it's just me, but looking around at the scant examples where people have attempted it, it doesn't look like it.
I'm having to make the painful decision of dropping the multiprocessing library in favor of Pyro until multiprocessing is more mature. While initially I was excited to learn multiprocessing being built into Python, I am now thoroughly disgusted with it and would rather install the Pyro package many, many times, with glee that such a beautiful library exists for Python.
Long Answer
I have used Pyro in past projects and have been very happy with it. I have also started to work with multiprocessing, new in Python 2.6.
With multiprocessing I found it a bit awkward to allow shared objects to be created as needed. It seems like, in its youth, the multiprocessing module has been geared more toward functional programming than object-oriented programming. However, this is not entirely fair, since sharing objects is possible; I just feel constrained by the "register" calls.
For example:
manager.py:
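(The original listing is lost; below is a plausible reconstruction. Only the names Texture, TextureManager, getTexture, and the "data" attribute come from the surrounding prose; everything else is guessed.)
# manager.py
class Texture(object):
    def __init__(self, name):
        self.name = name
        self.data = []          # plain attribute: NOT shared through the proxy

    def setData(self, data):
        self.data = data

    def getData(self):
        return self.data

class TextureManager(object):
    def __init__(self):
        self.textures = {}

    def addTexture(self, name):
        self.textures[name] = Texture(name)

    def getTexture(self, name):
        return self.textures.get(name)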
server.py:
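(Likewise reconstructed; the getTexture registration is the "register" call the text below complains about:)
# server.py
from multiprocessing.managers import BaseManager
from manager import TextureManager

class TextureServer(BaseManager):
    pass

if __name__ == '__main__':
    tm = TextureManager()
    tm.addTexture('grass')
    # register a callable that fetches a Texture by name from the TextureManager
    TextureServer.register('getTexture', callable=tm.getTexture)
    server = TextureServer(address=('', 50000), authkey=b'texture')
    server.get_server().serve_forever()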
client.py:
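(Reconstructed as well, to show the pitfall from the edit above:)
# client.py
from multiprocessing.managers import BaseManager

class TextureClient(BaseManager):
    pass

TextureClient.register('getTexture')   # the client only needs the typeid

if __name__ == '__main__':
    client = TextureClient(address=('localhost', 50000), authkey=b'texture')
    client.connect()
    tex = client.getTexture('grass')   # a proxy to the shared Texture
    tex.setData([1, 2, 3])             # method calls go through the proxy...
    print(tex.getData())               # ...but tex.data itself is not shared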
The awkwardness I'm describing comes from server.py, where I register a getTexture function to retrieve a Texture of a certain name from the TextureManager. As I'm going over this, the awkwardness could probably be removed if I made the TextureManager a shareable object which creates/retrieves shareable textures. Meh, I'm still playing around, but you get the idea. I don't remember encountering this awkwardness using Pyro, but there probably is a solution that's cleaner than the example above.