A life under surveillance
- Introduction
- Installation
- Configuration
- Calculation methods for field of view
- More information
- Legal information for nerds
Welcome to PanoptiCity!
This project's purpose is to help display information about CCTV cameras: to easily map where they are and what they can see, and to get data about their usage in cities. The website also gives users an easy way to contribute to the OpenStreetMap database, whether they want to add cameras they spot that are not already mapped or improve the attributes of existing ones.
PanoptiCity is my way to act and try to raise awareness about mass surveillance in cities, to make people realize how many cameras surround us that they usually do not even see. At a time when artificial intelligence is becoming ubiquitous, it is more important than ever to ask ourselves: is this really the model of society we want to build collectively?
One major inspiration for this project has been the website SunderS. It gave me the idea for this project, which improves on it with new features, and therefore obviously needs to be cited. The information used comes from the awesome OpenStreetMap database. Other attributions and projects used for this application can be found later on this page.
- Get data from the OpenStreetMap database
- Compute the field of view of each camera, taking surrounding buildings into consideration
- Multiple models for field of view computation, based on an analysis of the technical features of cameras available on the CCTV market, with the possibility to switch between models
- Connection with OpenStreetMap account
- Editing of existing cameras
- Creation form to contribute new cameras
- Dark/light mode
To discover all the features, go to panopticity.fr!
Thank you for your interest in this project. This section will guide you through installing, configuring, and running the project on your own server.
If you encounter any problem, feel free to open an issue to ask for support.
To run this application you'll need Docker.
- If not already done, install Docker on the server.
- Download this project:
git clone https://github.com/babastienne/panopticity
- Go to the downloaded folder:
cd panopticity
Define the variables used by the application by creating an environment file:
cp .env.dist .env
- Then edit the `.env` file and replace the variable values with the ones you want to use.
- Edit the front-end configuration file `front-end/CONFIG.js` and override it with your parameters.
It is now time to launch the project for the first time:
- Initialize the database by running:

docker compose run --rm postgis

When you see `database system is ready to accept connections`, you can exit with `Ctrl+C` (this should not take more than a few seconds).
- Then create the database structure by applying the Django migrations. To do so, run:
docker compose run --rm web ./manage.py migrate
To import data into your project, you need to download a file corresponding to the area you want to cover. This file will be used to import cameras as well as buildings (needed to compute the field of view of each camera). After this initial import you have two options:
- Keep the original file on your server: it will be used to replicate future modifications made in OpenStreetMap. Useful if you want to keep your building database up to date.
- Remove the original file: you'll still be able to get updates for cameras, but not for buildings.
By default this project comes with sample data, so you can follow the import procedure without having to download any file (useful if you just want to test or develop the project).
In OpenStreetMap, there are multiple ways of keeping information up to date. In this project we chose to import data from PBF files. To keep track of updates we use diff files that are generated regularly: daily, hourly, or minutely.
Depending on the update frequency you want and the area you wish to cover, you'll need to choose where to download your data file. A few suggestions:
- https://planet.openstreetmap.org/ : the official source, with daily, hourly, and minutely diffs. It only covers the entire planet, so the file size is large and the building database may not be able to keep up without solid resources.
- https://download.openstreetmap.fr/ : daily extracts and minutely diffs. Files are split into continents, countries, and states. Very useful to download a specific region and keep up with changes almost in real time.
You can find an up-to-date list of mirrors on the dedicated OpenStreetMap wiki page to explore more options. If you want to keep your data up to date, you'll need to find one that handles diffs.
We will refer to your downloaded PBF file as `<my-pbf-file>` in the next commands.
- Download both your desired PBF and state files and put them in the `osm-data` folder. (If you want to use sample data you can skip this step.)
- Import the buildings into the database (this can take some time depending on your area; loading the whole of France took 12 minutes) by running the following command:
docker compose run osm2pgsql -O flex -S /data/buildings.lua /osm-data/<my-pbf-file>
- Load the cameras (this usually takes more time than the previous command):
docker compose run --rm web ./manage.py load_cameras /osm-data/<my-pbf-file>
- Generate your state file from your original data file. To do so, run:
docker compose run --rm web pyosmium-get-changes -O /osm-data/<my-pbf-file> -f /osm-data/sequence.state.txt -v
This will create a state file in `osm-data/sequence.state.txt`.
After this step, if you don't want to update your buildings in the future and want to save some space on your server, you can delete your original data file.
- This last command should have printed a URL on the terminal, probably in the format:
INFO: Using replication server at <URL_OF_THE_REPLICATION_SERVER>
This is the URL that will be used to fetch diffs. Copy it, then run the following lines to update your cameras (replace with your URL):
docker compose run --rm web pyosmium-get-changes --server <URL_OF_THE_REPLICATION_SERVER> -f /osm-data/sequence.state.txt -o /osm-data/diff.osc.gz
> This command creates a diff file (`osm-data/diff.osc.gz`) that contains every difference between the original data file and the latest version of the OSM data on the replication server. It also edits the `sequence.state.txt` file, updating the sequence number to the last version fetched from the server.

docker compose run --rm web ./manage.py load_cameras -d -u /osm-data/diff.osc.gz

> This command updates the camera database with the differences.
- (Optional) the `osm-data/diff.osc.gz` file can be removed; it will be overwritten next time anyway, so removing it is not mandatory.
Note: If you want to stay up to date, the last three commands can easily be put into a bash script and launched regularly with a cron job, matching your desired update frequency (minute, hour, day). The file `update-cameras.sh` is an example of how the process can be automated (it could be improved with log monitoring).
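For instance, a minimal Python equivalent of such a script could look like this (a sketch, not the repo's `update-cameras.sh`; the replication URL is a placeholder you must replace with the one printed earlier):

```python
#!/usr/bin/env python3
"""Periodic camera update job: fetch the latest OSM diff and apply it."""
import subprocess

REPLICATION_URL = "<URL_OF_THE_REPLICATION_SERVER>"  # placeholder: use your own

commands = [
    # Fetch the diff since the last recorded sequence number.
    ["docker", "compose", "run", "--rm", "web", "pyosmium-get-changes",
     "--server", REPLICATION_URL,
     "-f", "/osm-data/sequence.state.txt", "-o", "/osm-data/diff.osc.gz"],
    # Apply the diff to the camera database.
    ["docker", "compose", "run", "--rm", "web", "./manage.py",
     "load_cameras", "-d", "-u", "/osm-data/diff.osc.gz"],
]

for cmd in commands:
    subprocess.run(cmd, check=True)  # abort on failure so the state file stays consistent
```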
Because of the volume of the building database, we recommend not updating those objects too often. The process consists of completely reloading the database from scratch, so it is time consuming and should only be done occasionally.
To update your building database without having to completely re-download your data, you need to keep your original data file on your server. This file will be updated by the process.
docker compose run --rm web pyosmium-up-to-date /osm-data/<my-pbf-file>
> This command fetches the diffs since the last version of your file and applies them to it. It can take some time depending on when you last ran the operation.
- Re-create the building database:
docker compose run osm2pgsql -O flex -S /data/buildings.lua /osm-data/<my-pbf-file>
- (Optional) Re-compute the cameras' fields of view (this can take a long time):

docker compose run --rm web ./manage.py shell

> from cameras.models import Camera
> # Re-saving each camera triggers the recomputation of its field of view.
> # This operation will be long.
> for camera in Camera.objects.all():
>     camera.save()
> exit()

We don't recommend automating this operation.
- To launch the back-end, run `docker compose up -d`. The back-end will be running on localhost on port 8000. You need to configure your server as an HTTP proxy to this port; everything behind the `/api` endpoint needs to be reachable.
- The front-end is a static HTML website, so it can be served by any web server.
An example of a basic server configuration can be found in the file `nginx.conf.example`.
Pretty much the same as in production. If you want to contribute to this project, some contribution ideas are listed in todo.md. Don't hesitate to ask if you want to share ideas or need help getting started.
You can override the translations of the front-end interface or add new languages by editing the file `front-end/translations.js`. By default the project is translated into English and French. To add a new language, duplicate the English object, change its language code (for example `es` for Spanish), and then translate the entries.
PanoptiCity does not support country-variant translations (e.g. `fr-CA` for Canadian French, `en-US` for American English, etc.). It only supports main language translations. Any contribution to improve this behavior is welcome.
By default PanoptiCity checks the language configuration of the user's browser to determine which language to display. It does not let the user switch the interface language directly. If you wish to improve this, feel free to contribute.
The lateral menu displays static content to give users information about any subject.
By default PanoptiCity suggests some content, but you can override it to remove or add any entry. To do so, edit the values under the `menuContent` object in the file `front-end/translations.js`. Each sub-entry corresponds to an item in the menu: the key is the title of the content and the value is the body of the page. The value can contain HTML-formatted text. To simplify the formatting, it is also possible to use `\n` to split paragraphs.
The field of view is the area visible to (covered by) a CCTV camera. The field of view of every camera depends on many variables. The most important are:
- The height of the camera
- The direction in which the camera is pointed
- The angle (tilt) of the camera, which indicates whether it is pointed toward the horizon or the floor
- The resolution of the camera's sensor. This gives the number of pixels (e.g. 1920x1080 ≈ 2MP; 2556x1440 ≈ 4MP; 3840x2160 ≈ 8MP; etc.)
- The focal length of the lens. This mainly impacts the angle of view, allowing some cameras to be wide-angle (short focal length) or, conversely, to focus on specific details (long focal length). The focal length is expressed in mm (e.g. 8mm, 12mm, 75mm)
- The sensor format, which indicates the size of the sensor (usually expressed as 1/2.5", 2/3", etc.)
The combination of those last three parameters determines the quality of an image at a specific distance. The quality is expressed in PPM (pixels per meter), representing pixel density. For example, for a camera with a 1920x1080 resolution, a 25mm lens and a 1/3" sensor format, the quality of the image of a person standing 10 meters away from the camera will be 998 PPM.
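As a rough illustration of that computation, here is a minimal Python sketch under a pinhole-camera approximation (the exact formula PanoptiCity uses may differ slightly; 4.8mm is a common approximation of a 1/3" sensor's width):

```python
def ppm(h_resolution_px: float, focal_mm: float, sensor_width_mm: float, distance_m: float) -> float:
    """Pixel density (pixels per meter) at a given distance, pinhole approximation."""
    scene_width_m = distance_m * sensor_width_mm / focal_mm  # width of the imaged scene at that distance
    return h_resolution_px / scene_width_m

# 1920x1080 camera, 25mm lens, 1/3" format (~4.8mm wide), subject 10 m away:
print(ppm(1920, 25.0, 4.8, 10.0))  # ~1000 PPM, close to the 998 PPM quoted above
```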
By taking those elements into consideration, we can compute, for each camera, the field of view in which a person can be identified, recognized, or detected. We use this matching table to establish which quality corresponds to which level:
The surveillance levels and corresponding qualities are inspired by this Department of Homeland Security document about video surveillance quality.
It is important to note that a lot of modern cameras can zoom and move: these are called dome or PTZ (Pan-Tilt-Zoom) cameras. For many devices, the variables (particularly the focal length) can therefore change depending on whether the camera is zoomed in or not. Public cameras can generally alternate between wide-angle and zoomed views depending on the operator or the detection algorithm behind them.
Obviously, the resolution, focal length and sensor format of each camera are not in the OpenStreetMap database: first because they would be a pain to contribute, but mainly because it is not possible to obtain this information even in the field.
The other variables (height, angle and direction) are easier to declare in OpenStreetMap. PanoptiCity encourages users to always declare the height of a camera, as well as the direction and the angle when it is a fixed or panning camera.
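For illustration, a camera node using the usual OSM surveillance scheme might carry tags like these (an indicative example; see the OpenStreetMap wiki page for `man_made=surveillance` for the authoritative tag list):

```
man_made=surveillance
surveillance:type=camera
camera:type=fixed
camera:direction=135   # compass bearing the lens points at, in degrees
camera:angle=15        # downward tilt, in degrees
height=5               # mounting height, in meters
```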
So when some of these values are missing, how do we determine what PanoptiCity should use?
For basic information, we use the OSM tags when they are present and fall back to default values otherwise. The default values are:
Field | Default Value |
---|---|
Height | 5 meters |
Angle | 15° |
Direction | No default value. If a fixed camera has no direction, no field of view is displayed |
For the other fields, to make an estimation, we compiled into a file the technical information of more than 15,200 models of CCTV cameras from 143 different brands. This gave us a global view of the technical level of the CCTV market as it stands in 2025. Keep in mind that new camera models are released every week, so depending on when you read these lines the numbers may differ.
The numbers used can be seen in the `docs/AllCameraList.ods` file (or in JSON format in `docs/camerasList.json`).
With those numbers, we sorted every variable and derived statistics about camera quality. Depending on the camera type (fixed or dome/PTZ), we created three models to help us determine the quality of cameras (and therefore their field of view):
Scenario | Description | Values for Fixed Cameras | Dome/PTZ Cameras |
---|---|---|---|
Best Case Scenario | This is the scenario corresponding to the first decile, which means that 90% of cameras on the market have better quality than what is displayed on the map as the field of view | 2.8mm focal & 1920x1080 resolution | 2.8mm focal & 1280x1024 resolution |
Mean / Average | The default scenario. There are as many cameras with better quality than the displayed field of view as there are with poorer quality | 6.8mm focal & 2556x1440 resolution | 6.5mm focal & 2556x1440 resolution |
Worst Case Scenario | This is the scenario corresponding to the last decile, which means that the field of view displayed on the map is as if all cameras were in the top 10% of the market (in terms of technical abilities) | 26mm focal & 3840x2160 resolution | 68.2mm focal & 3840x2160 resolution |
The numbers chosen for the models are the result of a statistical analysis of the compiled camera technical information. More information about this analysis can be found below.
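As an illustration, here is a minimal sketch of how such decile and mean values could be extracted from the compiled dataset (the `type` and `focal` field names are assumptions about the structure of `docs/camerasList.json`, not its documented schema):

```python
import json
import statistics

# Assumed structure: a list of records with at least "type" and "focal" fields.
with open("docs/camerasList.json") as f:
    cameras = json.load(f)

focals = sorted(c["focal"] for c in cameras if c.get("type") == "fixed")

deciles = statistics.quantiles(focals, n=10)  # the 9 cut points splitting the data into tenths
best_case = deciles[0]    # first decile: 90% of cameras have a longer focal (higher quality at distance)
average = statistics.mean(focals)
worst_case = deciles[-1]  # last decile: only 10% of cameras do better
print(best_case, average, worst_case)
```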
How it could be improved
One good way to improve these models would be to correlate every camera model with its sales numbers, to weight each camera in the model computation. However, those numbers cannot easily be found.
For fixed cameras, we decided to use a horizontal angle of view of 85°. Once again, this angle depends a lot on the camera used, and especially on its type (fisheye cameras, bullet cameras, etc.). Why 85°? Our calculations showed that the average focal length for fixed cameras in the best-case scenario (= first decile) is 2.8mm. By far the main lens formats are 1/3" and 1/2.7" (which correspond to widths of 4.8mm and 5.37mm respectively). With this information we can estimate the angle of view of the majority of cameras:
- Angle of view (in radians) = 2 × arctan(sensor width in mm / (2 × focal length in mm))
- Conversion from radians to degrees: degrees = radians × 180 / π
The results are:
- For 1/3" lenses: 81.2°
- For 1/2.7" lenses: 87.5°
Therefore, to simplify, we chose to use an angle of view of ~85° for all directed cameras.
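These figures can be verified with a couple of lines of Python (a sanity check of the formula above, not PanoptiCity's actual implementation):

```python
import math

def angle_of_view_deg(sensor_width_mm: float, focal_mm: float) -> float:
    """Horizontal angle of view from sensor width and focal length (pinhole model)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

print(angle_of_view_deg(4.80, 2.8))  # 1/3" format  -> ~81.2 degrees
print(angle_of_view_deg(5.37, 2.8))  # 1/2.7" format -> ~87.5 degrees
```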
While dome and PTZ cameras can usually change their tilt angle, this is not the case for fixed cameras. This data should therefore be taken into consideration when computing the field of view of fixed cameras.
At the moment, the tilt angle is used to apply a computation coefficient. We treat any angle ≤ 17° as equivalent to 0°, to compensate for the vertical angle of view (which is at least 35°) and because, when aiming at a subject at the same level as the camera, we tend to tilt it by about 17°.
This behavior could be improved by dropping the coefficient and computing the real limit of the field of view from the camera height, as sketched below.
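A hedged sketch of that suggested improvement: the ground distance at which a camera's optical axis meets the floor, derived from its height and downward tilt (simple trigonometry, not the current implementation):

```python
import math

def axis_ground_distance(height_m: float, tilt_deg: float) -> float:
    """Distance (m) at which the optical axis of a camera mounted height_m
    above the ground and tilted tilt_deg below the horizon hits the floor."""
    if tilt_deg <= 0:
        return math.inf  # aimed at or above the horizon: never hits the ground
    return height_m / math.tan(math.radians(tilt_deg))

# Example: the default 5 m high camera tilted 15 degrees down looks at the
# ground about 18.7 m away.
print(axis_ground_distance(5, 15))
```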
As mentioned above, the technical information of more than 15,000 cameras has been compiled into a dataset to get a global view of the technical abilities of the equipment sold on the market. The data has been used to determine the three models used to compute the cameras' fields of view.
The dataset is available for consultation on this repository.
The following sections compile some graphical analyses that display distribution trends for multiple technical features (resolution, minimum focal, maximum focal, average focal and format) depending on the camera category (fixed or dome/PTZ).
Some specific cameras have been removed from the dataset analysis, especially thermal and industrial cameras. Some camera types have been kept even though they affect the results.
For example, bullet and fisheye cameras have been kept and categorized as dome cameras. However, those cameras have a very short focal length and are not really of the same type as "classical" dome or PTZ cameras, which can have a very long focal length. This is a known limitation, and the models could be improved by a better categorization, or by weighting each sub-type according to a statistical distribution analysis based on real-world observations.
These numbers are the minimum/average/maximum focal lengths available for each camera product. Some cameras only have one focal length, in which case all three numbers are the same; others can zoom and change their focal length, hence this categorization.
(Distribution charts for fixed cameras and for dome/PTZ cameras appear here for each analyzed feature.)
Wow, you're still here? I guess you're really interested in this project. Here is some useful information and resources.
If you see multiple cameras at the same location (for example on the same pole), it is advised to create one entry for each camera and to set each location as precisely as possible. This preserves the logic of "one node in OSM = one object in real life".
This way of representing objects has been discussed in the community and seems to be the recommended approach:
- See this discussion (in english)
- Or almost the same (in french)
One inspiration for this project has been the SunderS project. A lot of resources can already be found on their website.
Multiple studies have been conducted to measure the effectiveness of CCTV in public areas. They usually show relative effectiveness, but not in the places we might imagine: in car parks and residential areas... so not really in public spaces and city centers.
Also, field studies in cities have shown that video surveillance does not significantly help to solve investigations, nor does it reduce the number of violent crimes, drug-related offences or public order disturbances in cities. There are a number of reasons for this ineffectiveness: lack of coordination between security forces (private, state, municipal), poor quality images, misdirected or dirty cameras, etc. But the major problem is the staggering number of video streams compared with the small number of officers who are supposed to be using them.
Moreover, few studies really compare the cost of CCTV with equivalent human investments, which could be interesting.
Finally, in a rising AI age, this really raises the question of what we want to do collectively as a society and where we want to go. Does such limited effectiveness justify global surveillance and the death of anonymity?
- TechnoPolice by La Quadrature Du Net (french)
- Big Brother Watch
- Outperforming activism: reflections on the demise of the surveillance camera players
- Anonymize yourself with IR LEDs:
- Anti-recognition systems:
- Disable cameras:
- With lasers
- Or physically (paint, stickers, rocks ... be creative)
... and add cameras that you spot in your daily life to OpenStreetMap! The best way to fight back is to know your enemy, so help us map all the existing cameras so we can at least know where they are and try to avoid them (when possible).
This project exists thanks to the work of others. To create this website I've mainly used the following dependencies.
If you notice that I've used your project but don't see it in this list feel free to open an issue or a pull request so it can be added.
- Leaflet v1.9.4 - BSD 2-Clause License
- Leaflet.Locate v0.83.1 - MIT license
- Leaflet.Basemaps v0.2.1 - ISC license (used a forked version)
- Leaflet.markercluster v1.5.3 - MIT license
- Map background attributions can be seen directly on map on the website
- OpenStreetMap - ODbL-1.0 license
- pyosmium v4.0.2 - BSD 2-Clause License
- osm2pgsql v2.0.1 - GPL-2.0 License
- osm-api-js v2.4.0 - MIT license
- Django v5.1 - BSD-3-Clause license
- django-rest-framework-gis v1.1.0 - MIT license
- django-rest-framework v3.15.2 - BSD-3-Clause license
- Pico CSS v2.0.6 - MIT license
- Logo: https://design-kink.com/hal-9000-2001-space-odyssey-free-vector-art/
- Pictograms for cameras: SunderS
For this project, I've used a Cooperative Non-Violent Non-AI Public Software license. In brief (though if you really intend to use this software, check the complete license), you are free to use, modify, redistribute, commercialize and do pretty much anything you want with this software as long as:
- It is not used to carry out any violent action, repression or discrimination against any person; this software therefore cannot be used by any law-enforcement administration or company (Non-Violent clause);
- If commercial use is made of this software, the financial gains are equally redistributed among the workers (Cooperative or Anticapitalist clause);
- The content of this project cannot be used to train any artificial intelligence model (Non-AI clause).