I've been using Frigate for a long time and it's a really cool project that has been quite reliable. The configuration can be a little bit of a headache to learn, but it gets better with every release.
Viseron is new to me though; that looks really cool.
I've been running Frigate for a while now and I find its object detection has a higher-than-preferred false-positive rate.
For instance, it kept thinking the tree in my backyard is a person. I find it hilarious that it often assigns a higher likelihood to the tree being a person than to me! I've had to put a mask over the tree as a last resort.
Assuming the tree is big, you can set a max object area for person and then it will never happen again. I had to do this in some areas where shadows looked like people in the afternoons.
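A minimal sketch of what that looks like in a Frigate-style YAML config (the camera name, area values, and mask coordinates are placeholders; check the object-filter docs for your Frigate version):

```yaml
cameras:
  backyard:                  # placeholder camera name
    objects:
      filters:
        person:
          min_area: 2000     # ignore tiny blobs
          max_area: 100000   # drop "person" detections bigger than a person could be
    motion:
      mask:
        - 0,0,400,0,400,300,0,300  # example polygon over the troublesome tree
```

Tightening `max_area` handles the big-tree case without masking, while a motion mask stops the region from triggering detection at all.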
Beats me, I'm just getting into this now. I started with a Reolink NVR, but it's a piece of crap, so I'm looking for a better alternative.
It looks like either Frigate or Viseron will do what I want. I started setting up Frigate, but realized I should downgrade my Reolink Duo 3 to a Duo 2 before I go too far. The Duo 3 doesn't offer much better image quality, but it forces you to use H.265 and consumes a lot more bandwidth. Once I stabilize my camera setup, I'll get back to setting up both Frigate and Viseron to see which performs better. I like that the pro upgrade of Frigate lets you customize the model, and I may make use of that.
Congrats! What hardware do you use to run the inference 24/7? I built a simpler version for running on low-end hardware [0] that recognizes whether there's a person on my parcel, so I know someone has trespassed and can trigger a siren, lights, etc.
This runs on a GeForce GTX 1060. A quick search says it's 120 W. Maybe that's only the peak power consumption, but it's still a lot. Do commercial products, if there are any, consume that much power?
There's a wide range of inference accelerators in commercial use.
For "edge" or embedded applications, an accelerator such as the Google Coral Edge TPU is a useful reference point where it is capable of up to 4 Trillion Operations per Second (4 TOPS), with up to 2 Watts of power consumption (2 TOPS/W), however the accelerator is limited to INT8 operations. It also has around 8 MB of memory for model storage.
Meanwhile, a general-purpose or gaming GPU supports a wider range of operations (single-precision and double-precision floating point, integer, etc.).
Sorry, I'm not familiar with TPUs, only GPUs, but how much VRAM do Corals have? YOLO11x is 56M params, which even quantized to INT8 would still be 56 MB, plus you would need some for your inputs.
OP is probably using an AI accelerator like this: https://coral.ai/products/accelerator which works great on a Pi and uses very little power. It will do the YOLO part, but you can't really expect it to do the multimodal LLM part, although you could try to run Florence directly on the Pi too.
YOLO is quick enough that you can just run it on a CPU, assuming you don't run it at full resolution (no point) or full frame rate (ditto) for multiple streams. Scaled down and run at 2-3 fps, you'll get several streams per CPU core, no problem. Energy use can be minimized by running a quick motion-detection pass first, though that would obviously make the system miss things creeping through the frame pixel by pixel (very unlikely, if you ask me).
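A minimal sketch of that motion-gating idea using plain NumPy frame differencing (the thresholds are made-up starting points, and the actual detector call is left out):

```python
import numpy as np

def has_motion(prev_gray, curr_gray, pixel_thresh=25, min_changed_frac=0.01):
    """Cheap motion gate: compare two downscaled grayscale frames and report
    whether enough pixels changed to justify running the detector at all."""
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    changed_frac = (diff > pixel_thresh).mean()
    return changed_frac >= min_changed_frac

# Synthetic 120x160 frames: identical frames -> no motion,
# a frame with a new bright patch -> motion, so YOLO would run.
a = np.zeros((120, 160), dtype=np.uint8)
b = a.copy()
b[40:80, 60:100] = 200  # simulated object entering the frame
print(has_motion(a, a))  # False
print(has_motion(a, b))  # True
```

In practice you would only feed frames to YOLO when `has_motion` fires, which is where the energy savings come from.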
They are calling the Ollama API to run Llava. Llava is a combo of an LLM base model plus a vision projector (CLIP or a ViT), and is usually around 4-8 GB. Since every token generated needs access to all of the model weights, you would have to send 4-8 GB over USB with the Coral. Even at a generous 10 Gbit/s, that is 8 GB / 1.25 GB/s = 6.4 seconds per token. A 150-token generation (a short paragraph) would take 16 minutes.
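The back-of-the-envelope numbers above, spelled out:

```python
model_bytes = 8e9            # 8 GB of weights (upper end of the 4-8 GB range)
link_bytes_per_s = 10e9 / 8  # 10 Gbit/s link = 1.25 GB/s

# The full weights stream across the link once per generated token.
sec_per_token = model_bytes / link_bytes_per_s
tokens = 150

print(sec_per_token)                 # 6.4 seconds per token
print(sec_per_token * tokens / 60)   # 16.0 minutes for a short paragraph
```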
Can confirm. The Coral inference accelerator is quite performant with very low power draw. Once I figured out some passthrough and config issues I was able to run Frigate in an LXC container on Proxmox using Coral USB for inference. It's been working reliably 24/7 for months now.
Yeah. But it's likely an 8-bit quantised model with a small number of parameters, which translates into poor recall and lots of false positives.
How many parameters is the model you are using with the Hailo? And what's the quantisation, and which model is it, actually?
They are asking about LLMs. There is some confusion, it seems -- you are thinking of the object-detection model (YOLO), which runs perfectly fine in (near) real time on a Coral or other NPU. The parent is referring to the Llava part, which is a full-fledged language model with a vision projector glued onto it for vision capability. Large language models are generally quantized (converted from full-precision float values to less precise floats or ints, e.g. F16, Q8, Q4) because they would otherwise be extremely large and slow and require a ton of RAM (the model has to access the entire set of weights for every token generated, so without a gigantic amount of VRAM you would be pushing many tens of gigabytes of model weights through the system bus, slowly).
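For a rough sense of why quantization matters, here is the weights-only footprint of a hypothetical 7B-parameter model at the common quantization levels (this ignores the KV cache and activations, which add more on top):

```python
def weight_footprint_gb(params_billion, bits_per_weight):
    """Rough weights-only memory footprint in GB (decimal)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for name, bits in [("F16", 16), ("Q8", 8), ("Q4", 4)]:
    print(name, weight_footprint_gb(7, bits), "GB")
# F16 14.0 GB, Q8 7.0 GB, Q4 3.5 GB
```

So quantizing from F16 to Q4 turns a model that needs a mid-range GPU's worth of VRAM into one that fits on far more modest hardware.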
I'm confused about why you need yolo and llava. Can't you simply use yolo without a multimodal LLM? What does that add? You can use yolo to detect and grab screen coordinates on its own, right?
Hello from the privacy crowd! Please use this responsibly. Tech can be a lot of fun and I encourage you to play around with things and I appreciate it when you push the boundaries of what is technically feasible. But please be mindful that surveillance tech can also be used to oppress people and infringe on their freedoms. Use tech for good!
MobileNetV3 and EfficientDet are other possible alternatives to YOLO. I was able to get higher than 1.5 FPS on a Raspberry Pi Zero 2 W, which draws 1 W on average. With an efficient queuing approach, one can eliminate all bottlenecks.
Here are hardware recommendations from another similar (and well established) project: [1] [2]. Even though they don't recommend Reolink cameras, I have both Amcrest and Reolink cameras working well with Frigate for more than a year now.
+1 for Frigate and Reolink. I have it running in a Proxmox VM on an old Dell R710 (yes, it sucks watts and needs replacing), but all said, Frigate is amazing! The ease of integration with HA is equally great.
Many Amcrest IP Cameras are manufactured by Dahua and use localized versions of Dahua firmware. The same applies to the Lorex brand in the United States.
Some things that matter when it comes to configuring your IP Cameras (Beyond security, etc):
- Support for RTSP
- Configurable Encoding Settings (e.g. H.264 codec, bitrate, i-frame interval, framerate)
- Support for Substreams (i.e. a full-resolution main stream for recording, and at least one lower-resolution substream for preview/detection/etc)
...
Make sure the hardware you select is capable of the above.
Configurability will matter because Identification is not the same as Detection (Reference: "DORI" - Detection, Observation, Recognition, and Identification from IEC EN62676-4). If you want to be able to successfully identify objects or entities using your cameras, it will require more care than basic Observation or Detection.
AFAIK, the FCC ban pertains to particular applications (or marketing of products for such applications). It did not apply to consumer applications.
"On November 25, 2022, the Federal Communications Commission (FCC) released new rules restricting equipment that poses national security risks from being imported to or sold in the United States. Under the new rules, the FCC will not issue new authorizations for telecommunications equipment produced by Huawei Technologies Company (Huawei) and ZTE Corporation (ZTE), the two largest telecommunications equipment manufacturers in the People’s Republic of China (PRC).
The FCC also will not authorize equipment produced by three PRC-based surveillance camera manufacturers—Hytera Communications (Hytera), Hangzhou Hikvision Digital Technology (Hikvision), and Dahua Technology (Dahua)—until the FCC approves these entities’ plans to ensure that their equipment is not marketed or sold for public safety purposes, government facilities, critical infrastructure, or other national security purposes. The FCC did not, however, revoke any of its prior authorizations for these companies’ equipment, although it sought comments on whether it should do so in the future."
You'll want to find an IP Camera that supports the RTSP protocol, which is most of them.
If your budget supports commercial style or commercial grade cameras, looking at Dahua or Hikvision manufactured cameras would be a good starting point to get an idea of specs, features, and cost.
US - FCC Ban The US Federal Communications Commission (FCC) banned Dahua and Hikvision from new equipment authorizations in November 2022. Most products that use electricity require FCC equipment authorizations; otherwise, they are illegal to import, sell, market, or use, even for private individuals.
Jul 5, 2024
Also it’s not like you stop supporting these OEMs if you buy other made in china cameras. They’re essentially all designed and manufactured by very few of these large OEMs, all of which are implicated in CCP state surveillance.
You’d have to buy from actual Western companies like Axis or Dallmeier.
A lot of the commercial-style or commercial-grade IP Cameras sold are rebadged Dahua or Hikvision products.
Compromised firmware or other backdoors are a concern for a wide range of products. With IP Cameras, a commonly recommended practice includes putting them on a non-internet accessible network, disabling any remote access, UPnP type features, etc. You can run IP cameras in an air-gapped configuration as well.
Home/consumer-grade cameras have plenty of shortcomings too.
”Analysts noticed that CCTV cameras in Taiwan and South Korea were digitally talking to crucial parts of the Indian power grid – for no apparent reason. On closer investigation, the strange conversation was the deliberately indirect route by which Chinese spies were interacting with malware they had previously buried deep inside the Indian power grid.”
link?
i am close to CCTV retailers, and dahua and hikvision are the only brands of CCTV widely available, with two exceptions, "cp plus" and "hawkvision", which are in all likelihood rebranded or made in china products.
so what are your options? i have been contemplating getting a door phone + CCTV for my home for years now, but problems like these keep me from investing in an ecosystem.
edit: oh, looks like the pager attacks have their attention now.
> are in all likelihood rebranded or made in china products
IPVM did all the legwork on this a while ago and uncovered that, not that surprisingly, two and a half OEMs (including Dahua and Hikvision) manufacture essentially every not-completely-garbage CCTV camera coming out of China, including a bunch that very explicitly claimed not to come out of China.
I can recommend the Axis brand. Very user friendly while being power-user friendly as well, with true local offerings. I personally bought mine used; it's an older model, and even then it holds up really well.
Default YOLO models are stuck at 640x640, so literally any camera capable of at least that resolution will do. Llava, I believe, is about the same. You'd need Ubuntu and something that can run a Llava model in vaguely real time, so a 4090/4080.
>> It calculates the center of every detection box, pinpoint on screen and gives 16px tolerance on all directions. Script tries to find closest object as fallback and creates a new object in memory in last resort. You can observe persistent objects in /elements folder
I’ve never implemented this kind of object persistence algo - is this a good approach? Seems naive but maybe that’s just because it’s simple.
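One way to read that description as code (a hedged sketch: the 16 px tolerance comes from the quote; the names, distance metric, and return convention are my guesses):

```python
import math

TOL = 16  # pixels of tolerance in each direction, per the quoted description

def match_detection(center, tracked):
    """Match a detection's center point against tracked objects:
    1) return any object within the 16 px tolerance box,
    2) otherwise fall back to the closest object,
    3) otherwise signal that a new object should be created."""
    best_id, best_dist = None, float("inf")
    for obj_id, (ox, oy) in tracked.items():
        dx, dy = abs(center[0] - ox), abs(center[1] - oy)
        if dx <= TOL and dy <= TOL:
            return obj_id, "tolerance"
        d = math.hypot(dx, dy)
        if d < best_dist:
            best_id, best_dist = obj_id, d
    if best_id is not None:
        return best_id, "closest"
    return None, "new"

tracked = {"person_1": (100, 100)}
print(match_detection((110, 95), tracked))   # ('person_1', 'tolerance')
print(match_detection((400, 400), tracked))  # ('person_1', 'closest')
print(match_detection((5, 5), {}))           # (None, 'new')
```

The unconditional closest-object fallback is the part that seems naive: with no maximum distance, two objects that appear far apart in the same frame can be merged. A distance cap or Hungarian-style assignment is the usual refinement.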
All I see, usually, is some AI YOLO algorithm applied to an offline video.
This is the first time that I've seen a "complete" setup. Any info to learn more on applying YOLO and similar models to real time streams (whatever the format)?
We’ve got an open source pipeline as part of inference[1] that handles the nuances (multithreading, batching, syncing, reconnecting) of running multiple real time streams (pass in an array of RTSP urls) for CV models like YOLO: https://blog.roboflow.com/vision-models-multiple-streams/
If you do it naively your video frames will buffer waiting to be consumed causing a memory leak and eventual crash (or quick crash if you’re running on a device with constrained resources).
You really need to have a thread consuming the frames and feeding them to a worker that can run on its own clock.
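A minimal sketch of that pattern: a bounded queue of size one, a reader that drops stale frames instead of letting them buffer, and a worker that consumes on its own clock (the frames and the model call are stand-ins):

```python
import queue
import threading
import time

frame_q = queue.Queue(maxsize=1)  # hold only the freshest frame

def producer(frames):
    """Reader thread stand-in: always push the newest frame, dropping
    stale ones rather than letting them buffer (the leak described above)."""
    for frame in frames:
        try:
            frame_q.put_nowait(frame)
        except queue.Full:
            try:
                frame_q.get_nowait()      # drop the stale frame
            except queue.Empty:
                pass                      # worker grabbed it first; fine
            frame_q.put_nowait(frame)
    frame_q.put(None)                     # sentinel: stream ended

def worker(results):
    """Inference stand-in: consumes whatever frame is freshest, at its own pace."""
    while (frame := frame_q.get()) is not None:
        time.sleep(0.01)                  # pretend this is a slow model call
        results.append(frame)

results = []
t = threading.Thread(target=worker, args=(results,))
t.start()
producer(range(100))                      # pretend: 100 decoded frames arrive fast
t.join()
print(len(results) < 100)                 # True: stale frames were dropped, not buffered
```

The key design choice is `maxsize=1` with drop-on-full: memory use stays constant no matter how far the model falls behind the stream.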
You could try Florence by Microsoft instead of YOLO and Llava, though the results are not going to be as great. Florence will do the inference on CPU. This is just for fun.