Design, Hardware, and Field Testing

Hey everyone, Ryan here. It’s been a busy few weeks since the last update. We’ve been moving fast and crossing things off the to-do list. Here are a few of the things the team has been working on:

Design/Interface

After getting the app in more people’s hands, we learned a lot about how to improve the interface and make it even friendlier. We think you’ll really like it. We’ve moved from a side menu to sliding between screens (similar to the iOS camera app). Basic focus now lives in the main app screens, and advanced focus has its own screen. That’s where you can set up multipoint focus or whole-scene focus for things like custom focus stacking depths. The smart assistant screen also got cleaned up quite a bit, with a much simpler view of the current constraints. We’re not quite done implementing the new design yet, but once it’s ready we’ll post some demos.
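For anyone curious about the focus stacking side, here’s a rough sketch of the kind of depth-of-field math involved. To be clear, this isn’t our actual code; it’s a generic illustration using the standard hyperfocal and depth-of-field formulas, with a typical full-frame circle-of-confusion value baked in as an assumption.

```python
# Illustrative sketch only, not Arsenal's actual implementation.
# Picks a set of focus distances whose depth-of-field zones tile the
# range from near_mm to far_mm, using standard thin-lens DoF math.

def hyperfocal(focal_mm, aperture, coc_mm=0.03):
    """Hyperfocal distance in mm (coc_mm = circle of confusion)."""
    return focal_mm ** 2 / (aperture * coc_mm) + focal_mm

def dof_far_limit(focus_mm, focal_mm, aperture, coc_mm=0.03):
    """Far limit of acceptable sharpness when focused at focus_mm."""
    h = hyperfocal(focal_mm, aperture, coc_mm)
    if focus_mm >= h:
        return float("inf")
    return focus_mm * (h - focal_mm) / (h - focus_mm)

def stack_depths(near_mm, far_mm, focal_mm, aperture):
    """Walk from near to far, focusing each shot at the previous
    shot's far DoF limit so consecutive zones always overlap."""
    depths, focus = [], near_mm
    while focus < far_mm:
        depths.append(focus)
        far_limit = dof_far_limit(focus, focal_mm, aperture)
        if far_limit == float("inf"):
            break  # hyperfocal reached; everything beyond is sharp
        focus = far_limit
    return depths

# e.g. a 50mm lens at f/2.8, subject spanning 0.5m to 5m:
print(len(stack_depths(500, 5000, 50, 2.8)), "shots needed")
```

The nice part of walking the stack this way is that the shot spacing adapts automatically: zones are thin up close and widen toward the hyperfocal distance, so you never waste frames.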

Hardware

The hardware side is going well too. We’ve decided to redo our enclosure’s snap system to simplify the injection molding process and improve durability. This should go pretty quickly. We’re also working to formalize some of our stress testing procedures for prototypes, which should give us a better idea of how any design change affects durability.

Also, parts are starting to come in for the beta unit run. It’s still a little ways off, but we wanted to make sure the hard-to-get parts were ready to go. I’m happy to say they arrived a lot sooner than we expected.

Field Testing

In late July, I was able to make a quick trip to Oregon to test the current version of the software in a few different scenarios. Things went really well. For most of the machine learning, I can test around my house, but getting out in the field shows us where things don’t translate as well.

AI/Machine Learning

After the Oregon trip, we realized that we need to make Arsenal’s models more “shutter invariant.” Some of the photos in our training dataset were taken with longer shutter speeds, but when we read data off the sensor to suggest settings, the frame comes from a short exposure. This mismatch was causing some issues, but I’m glad to say we were able to train our models to ignore the effects of shutter time when searching for similar photos.

This process has been a bit more work than you might expect, and involves using a dataset with short and long exposures of the same shot. I’ll spare you the full details, but I’m glad to say this has really helped improve the suggested settings in certain situations.
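For those who do want a taste of the details: one common way to use matched pairs like this is contrastive training, where a model learns to map the short and long exposure of the same scene to nearby embeddings while keeping different scenes apart. The sketch below shows that generic idea only; the tiny network, loss, and random placeholder data are all stand-ins, not our actual training code.

```python
# A generic contrastive-training sketch, not Arsenal's actual code.
# Given matched (short exposure, long exposure) photos of the same
# scene, train an embedding so the two land close together while
# different scenes stay apart.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SceneEncoder(nn.Module):
    """Tiny stand-in for a real image-embedding network."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def pair_loss(short_emb, long_emb, temperature=0.1):
    """InfoNCE-style loss: each short exposure should match its own
    long-exposure partner more closely than anyone else's in the batch."""
    logits = short_emb @ long_emb.t() / temperature
    targets = torch.arange(len(short_emb))
    return F.cross_entropy(logits, targets)

encoder = SceneEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)

# Placeholder batch of 8 matched pairs (random noise stands in for
# real short/long exposures of the same scenes).
short_imgs = torch.randn(8, 3, 64, 64)
long_imgs = torch.randn(8, 3, 64, 64)

loss = pair_loss(encoder(short_imgs), encoder(long_imgs))
opt.zero_grad()
loss.backward()
opt.step()
```

Once an embedding ignores shutter time, searching for “similar photos” from a short sensor read can still land on long-exposure shots in the dataset, which is the behavior we were after.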

The second model we’ve been working on is our exposure model. Exposure varies based on the scene and the type of lighting, and to get really smooth day-to-night transitions, we need a good exposure model. I’m happy to say the current model is looking really good. There’s a little more work and fine-tuning to be done, but we’ve got a great foundation that can produce some great day-to-night transitions without any complex calculations on your part. Probably another week of work on it, then we’ll start running it through another round of field testing.
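To give a sense of what “smooth” means here: a day-to-night ramp generally needs to smooth the exposure targets over time (so the timelapse doesn’t flicker) and then turn each target into real camera settings. Here’s a hedged sketch of that general idea, assuming some model already predicts a target EV (at ISO 100) per frame; the constants and the simple EMA are illustrative choices, not our actual model.

```python
# Generic exposure-ramping sketch, not Arsenal's actual model.
# Assumes a model already predicts a target EV (at ISO 100) per frame;
# this just smooths the targets and converts each EV to settings.

def smooth_evs(raw_evs, alpha=0.15):
    """Exponential moving average over per-frame EV targets to avoid
    flicker between frames of the timelapse."""
    smoothed, ev = [], raw_evs[0]
    for target in raw_evs:
        ev += alpha * (target - ev)
        smoothed.append(ev)
    return smoothed

def settings_for_ev(ev, aperture=2.8, base_iso=100):
    """Solve EV = log2(aperture^2 / shutter) for the shutter time,
    then trade shutter for ISO once exposures get unreasonably long."""
    shutter = aperture ** 2 / 2 ** ev
    iso = base_iso
    while shutter > 30 and iso < 6400:  # cap shutter at 30 seconds
        shutter /= 2
        iso *= 2
    return shutter, iso

# Dusk fading to night: EV 10 down to EV -4 over 200 frames.
targets = [10 - 14 * i / 199 for i in range(200)]
for ev in smooth_evs(targets)[::50]:
    print(settings_for_ev(ev))
```

The point of the smoothing step is that even a good per-frame prediction will jitter a little, and in a timelapse that jitter shows up as flicker; ramping the settings gradually is what makes the transition look seamless.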

That's it for today's update. Thanks!

- Ryan