-
Hi, thank you for the question. There is some work going on in this area which makes the answer a little complicated. For now, the easiest thing is to stop and restart the camera, changing the exposure every time, something like this:
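A minimal sketch of that stop/restart approach (the exposure values and filenames below are placeholders, not the exact snippet from the reply):

```python
#!/usr/bin/python3
from picamera2 import Picamera2

# Stop/restart approach: restart the camera for every exposure so that the
# new ExposureTime is guaranteed to be applied to the next capture.
exposures = [1000, 5000, 9000]  # microseconds, placeholder values

with Picamera2() as picam2:
    config = picam2.create_still_configuration()
    picam2.configure(config)
    for i, exposure in enumerate(exposures):
        picam2.set_controls({"ExposureTime": exposure, "AnalogueGain": 1.0})
        picam2.start()
        picam2.capture_file(f"image_{i}.jpg")
        picam2.stop()
```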
Unfortunately this process is a bit slow, and this is where some of the work currently happening will help. For example, if you wanted raw images (not processed by the ISP) you could fire updated exposure times at the camera without stopping it and you'd get the exposures you want returned to you back-to-back (after a latency of a few frames). Unfortunately that doesn't quite work with the processed images (output by the ISP), but I'm hoping there will be some fixes for that merged into libcamera any day now. Looking slightly further out, we plan to have a mechanism for sending controls to the camera (as above), but where you will get notified exactly when those controls have been applied, without even having to check whether the image metadata "looks like what you wanted", and there will be a reduction in the number of frames of latency too. But no estimated time scale for that yet, I'm afraid.
-
I've pasted some code below. The ugliest thing is the matching function, which has to decide whether a frame that comes back corresponds to one of the exposures you asked for (by checking its metadata).
The other thing to be aware of is that you don't necessarily get your exposures back in the order you list them, but the callback knows which one you have. The whole process is somewhat influenced by the camera framerate, the number of buffers you allocate for the camera to use and so on. But maybe give it a try and see how you get on.
-
Hi, you will need to do this:
In the callback you will have to store the buffer (request.make_buffer("raw")) and the metadata so that you can write the DNG files out afterwards.
I would probably save them after capturing them all, because writing those DNGs is quite slow and will cause frame drops, but obviously you can experiment.
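For illustration, a rough sketch of that callback-plus-deferred-save idea, assuming the `capture_multiple_exposures` helper shown in the code further down this thread (the `captures` list, exposure values and filenames are placeholders):

```python
from picamera2 import Picamera2

captures = []  # filled by the callback, written out after capture finishes

def callback_func(i, wanted_exp, request, picam2, config):
    # Copy the raw buffer and metadata out of the request; the capture loop
    # releases the request afterwards so the camera can reuse it.
    captures.append((i, request.make_buffer("raw"), request.get_metadata()))

with Picamera2() as picam2:
    config = picam2.create_still_configuration(raw={})
    picam2.configure(config)
    picam2.start()
    capture_multiple_exposures(picam2, [1000, 5000, 9000], config, callback_func)
    picam2.stop()
    # Save the DNGs only once everything has been captured, to avoid frame drops.
    for i, buffer, metadata in captures:
        picam2.helpers.save_dng(buffer, metadata, config["raw"], f"{i}.dng")
```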
-
Hi, thanks again for your reply. Here is my code for my project. It captures 3 frames in a row, then the motor winds the 8mm film on to the next frame, and then it takes 3 frames again. It takes about 1.2 seconds to capture the 3 frames, which in my eyes is quite fast. I used a ramdisk for testing, where I buffer the images, but I couldn't see any difference in the capturing speed compared to saving directly to a USB stick. Where exactly does the time for capturing the image or saving it as a .dng file come from? Does it depend on CPU power?
-
Hi, I would have expected the ramdisk to be faster, but it can be quite hard to guess about this stuff. Maybe because you're not using full resolution, and there's probably a certain amount of buffering in the file system anyway? The DNG library may be doing quite a lot of CPU work too, you'd have to track down the code for it and take a look. In the end, the only reliable answers come from profiling...!
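If it helps, a quick way to get a first answer without a full profiler is simply to time the capture and the DNG write separately. This is just a sketch with a placeholder filename:

```python
import time
from picamera2 import Picamera2

# Time the raw capture and the DNG write separately to see where the time goes.
with Picamera2() as picam2:
    config = picam2.create_still_configuration(raw={})
    picam2.configure(config)
    picam2.start()

    t0 = time.monotonic()
    request = picam2.capture_request()
    buffer = request.make_buffer("raw")
    metadata = request.get_metadata()
    request.release()
    t1 = time.monotonic()

    picam2.helpers.save_dng(buffer, metadata, config["raw"], "timing_test.dng")
    t2 = time.monotonic()
    picam2.stop()

print(f"capture: {t1 - t0:.3f} s, DNG write: {t2 - t1:.3f} s")
```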
-
If you are interested, consider joining the Kinograph forum and describing your system. People there are always interested in new approaches to film scanning.
-
Hi David,
-
@davidplowman, thank you for your code snippet! It worked seamlessly with my IMX219. However, when I try to use it with my IMX708, it can't seem to match the exposure time correctly. I'm currently on the latest Raspberry Pi OS (Bullseye), and I've made modifications to the tuning file as suggested in issue #622. Additionally, I've implemented a 2-second wait after every configuration change, as discussed in issue #800. To obtain results, I find myself needing to drastically alter the err_factor tolerance in match_exp. Here are some of the results:

```
wanted 100 got 858
wanted 1000 got 858
wanted 100000 got 77918
wanted 1000000 got 106399
wanted 500000 got 499987
wanted 2000000 got 499987
```

Why is that? And this is my code:

```python
#!/usr/bin/python3
from picamera2 import Picamera2
from libcamera import controls  # needed for controls.AfModeEnum below


def exposure_captures(exposure_list=[100000, 500000, 1000000, 2000000]):
    with Picamera2() as picam2:
        # config = picam2.create_still_configuration(raw={}, buffer_count=2)
        config = picam2.create_still_configuration(raw={})
        picam2.set_controls({'AfMode': controls.AfModeEnum.Manual, 'LensPosition': 15.0})
        picam2.configure(config)
        picam2.start()
        capture_multiple_exposures(picam2, exposure_list, config, callback_func)
        picam2.stop()
    return [f"{i}.dng" for i in range(len(exposure_list))]


def capture_multiple_exposures(picam2, exp_list, config, callback):
    def match_exp(metadata, indexed_list):
        err_factor = 0.01  # changed it to 1 or 10 to get results
        err_exp_offset = 30
        exp = metadata["ExposureTime"]
        gain = metadata["AnalogueGain"]
        for want in indexed_list:
            want_exp, _ = want
            if abs(gain - 1.0) < err_factor and abs(exp - want_exp) < want_exp * err_factor + err_exp_offset:
                return want
        return None

    indexed_list = [(exp, i) for i, exp in enumerate(exp_list)]
    while indexed_list:
        request = picam2.capture_request()
        match = match_exp(request.get_metadata(), indexed_list)
        if match is not None:
            indexed_list.remove(match)
            exp, i = match
            callback(i, exp, request, picam2, config)
        if indexed_list:
            exp, _ = indexed_list[0]
            picam2.set_controls({"ExposureTime": exp, "AnalogueGain": 1.0})
            indexed_list.append(indexed_list.pop(0))
        request.release()


def callback_func(i, wanted_exp, request, picam2, config):
    print(i, "wanted", wanted_exp, "got", request.get_metadata()["ExposureTime"])
    meta = request.get_metadata()
    exT = meta["ExposureTime"]
    picam2.helpers.save_dng(request.make_buffer("raw"), meta, config['raw'], f"{i}_{exT}.dng")
```
-
I would try setting some fixed exposure times "by hand" and see what you actually get back. There will be a minimum and maximum exposure time, and some degree of quantisation, and this will give you a feel for what the sensor does. For example, set a fixed ExposureTime control and then query the metadata of the frames that come back to see what the camera actually did. After configuring the camera, also check the exposure time limits that it reports.
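A sketch of that experiment, assuming an arbitrary test exposure and a generous ten-frame wait for the control to take effect:

```python
from picamera2 import Picamera2

with Picamera2() as picam2:
    config = picam2.create_still_configuration(raw={})
    picam2.configure(config)

    # After configuring, the camera reports (min, max, default) for each control.
    print("ExposureTime limits:", picam2.camera_controls["ExposureTime"])

    picam2.start()
    picam2.set_controls({"ExposureTime": 100000, "AnalogueGain": 1.0})
    # Allow a few frames for the control to take effect, then see what came back.
    for _ in range(10):
        metadata = picam2.capture_metadata()
    print("ExposureTime reported:", metadata["ExposureTime"])
    picam2.stop()
```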
-
The answer @davidplowman
-
Hello, I would like to migrate my project to picamera2. With the outdated picamera library, I was able to pre-set the exposure time for each individual image, for example image 1 with 1 ms, image 2 with 5 ms and image 3 with 9 ms. I am having a hard time figuring out how to solve this problem with picamera2. How can I change the exposure time before each picture is taken, preferably so that the images can be taken one after the other as fast as possible?
I would be very happy about your suggestions.