Annotations

What are training areas?

Training areas are sub-regions of your image that are given to your detector to learn from, along with the annotations you draw inside them. Any image content outside these training areas is ignored. This is intentional: if the detector used your entire image, you would be forced to annotate your entire image as well, which in the case of geospatial data can be enormous.

Here is an example: on the left is the original image with some training areas, and on the right is the image content your detector will effectively use for training.

Be careful: training areas are not meant to be examples of the objects you are trying to detect; that is what outlines are for. Training areas are simply meant to be regions where your object exists, which you will then annotate using the Drawing tools or Magic Wand.

Guidelines for creating good training areas

  • The key to creating a good dataset is to have training areas that cover a variety of visually different regions and to find examples of your object in those different regions. The amount depends on the complexity of your use case. If you are simply detecting white sheep on green grass, you likely will not need many training areas and outlines, since the task is easy for the detector to learn. However, if you are trying to find vehicles in a dense, busy city, you may need more examples to cover vehicles in parking lots, on roads, parked in construction yards, etc.
  • Your initial training areas should contain multiple examples of your object if possible. This gives your detector a good starting point, after which you can iterate based on the results seen in your accuracy and testing areas, adding more areas where you observe the detector performing weakest (meaning it is generating false negatives or false positives).
  • A total of 50 examples within your first set of training area(s) is a good starting point, but it could be more depending on the complexity of your use case.
  • We recommend training areas cover an area of at least 512 by 512 pixels. You will see this clearly as you are drawing the area.
  • You can also create completely empty training areas over regions that are purely “background” to teach the model what NOT to detect. This helps reduce false positives in your results. However, that doesn’t mean you should add background areas over every background region you see just because it’s easy to do; you would flood the detector with unnecessary, redundant information. Again, it’s quality that counts, not quantity.
  • The Dataset Recommendation tool can help you decide where you might need to draw new training areas!

Drawing tools

The drawing tools allow you to manually create annotations on your dataset by drawing polygon outlines of your object of interest.

There are two main tools for drawing outlines:

Polygon tool

Best for drawing regular shapes, like buildings, fields, solar panels, etc.

Circle tool

Best for identifying an object based on its center point, for example crops, trees, etc.

What are outlines?

These are the examples of your object that will be shown to your detector; from these it will learn what the object you are looking for looks like. Note that any image content in your training areas that does not have an outline will be treated as something you are NOT looking for.

Three primary rules for drawing good outlines:

  1. Within every training area you absolutely MUST annotate every instance of the object you want to detect. If, for example, you are trying to count buildings and you only annotate some of them in your training area, you are telling your detector that you want it to both detect and not detect buildings at the same time, leading to terrible results, if any at all. This is a common mistake that beginners on the platform make; we can’t stress its importance enough.
  2. Draw outlines of objects that overlap the edge of your training area (or edit your training area to fully include or exclude them). The detector will use the part of the object that you outlined outside the training area as well. This is important because we want to avoid teaching the detector to output “partial” objects.
  3. Draw outlines as precisely and consistently as you can to ensure better results; try to follow the edges of your objects as closely as possible. Sloppy annotations mean sloppy results: a detector is only as precise as its annotations, because it is simply trying to mimic what you provide.

Tips & tricks

  • Drawing precise outlines can be difficult with objects like vegetation. For trees, for example, you can draw a circle within the tree crown using the circle tool. The important thing there is consistency (what percentage of the tree crown’s radius you cover).
  • Additionally, for a detector in “Count” mode, make sure the outlines don’t overlap, as this will encourage your results to merge together as well; a quick way to check for overlaps is sketched after this list. You will get a warning in the training report if you make this mistake. In Segmentation mode it’s fine for outlines to overlap.
  • If you are still having trouble with merged results, you can try scaling your annotations down a little so they are inset; your detector will then try to produce results that are slightly inset as well (and therefore less likely to overlap and merge with each other).
  • You can also cut holes and copy/paste annotations. See the Annotation editing tools section below.
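
As an illustration of the overlap check mentioned above, here is a minimal sketch using the third-party shapely library (not part of the Picterra platform); the outline coordinates are hypothetical toy data:

```python
# Minimal sketch (assumes shapely is installed): flag pairs of outlines that
# overlap, which would encourage merged results in a Count-mode detector.
from itertools import combinations
from shapely.geometry import shape

# Hypothetical outlines, as GeoJSON-style polygon geometries.
outlines = [
    {"type": "Polygon", "coordinates": [[[0, 0], [2, 0], [2, 2], [0, 2], [0, 0]]]},
    {"type": "Polygon", "coordinates": [[[1, 1], [3, 1], [3, 3], [1, 3], [1, 1]]]},
    {"type": "Polygon", "coordinates": [[[5, 5], [6, 5], [6, 6], [5, 6], [5, 5]]]},
]

polygons = [shape(g) for g in outlines]
for (i, a), (j, b) in combinations(enumerate(polygons), 2):
    # A positive intersection area means the two outlines genuinely overlap
    # (shapes that merely touch at an edge have zero intersection area).
    if a.intersection(b).area > 0:
        print(f"Outlines {i} and {j} overlap - consider insetting them")
```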

AI Magic Wand

The AI Magic Wand tool allows you to quickly create outlines of complex objects or regions in just a few clicks. It leverages the “Segment Anything” model by Meta/Facebook.

You can start it by selecting the Magic Wand icon in the training toolbar. Here is a quick demo example of using the Magic Wand to annotate irregularly shaped waterbodies:

Magic Wand top bar

While in Magic Wand mode, the map view is locked and you can control the wand with the top bar:

To teach the Magic Wand what you want to extract, click on the image to add positive example points on your object of interest. After each click, the Magic Wand will automatically select similar-looking areas in your image.

Once you are happy with the selection, click Submit or press Enter on your keyboard to turn it into an annotation for the current class in your detector.

If you need to select other objects, you can repeat this process as many times as you need, without exiting the Magic Wand.

Please note that you can only select and create one annotation at a time, so remember to submit each selection individually.

You can also add negative example points, which remove areas from your selection, allowing finer control of the Magic Wand. You can access this feature from the toggle on the top bar, or by adding points with a right click of your mouse.

If you wish to restart your selection, simply click “Reset” to start over.

To move the map or access the editing tools, you must exit the Magic Wand using the “Exit” button. You can also exit the Magic Wand by selecting any other command in the toolbar.

FAQs about our Magic Wand

  • Left mouse click adds an example based on what is selected in the top bar
  • Right mouse click adds a negative example
  • Middle mouse click resets the examples
  • The Enter key is a shortcut for the Submit button

The Magic Wand currently imports only the largest outline. If you want to import multiple objects, the recommended workflow is to work “object by object” while staying in Magic Wand mode and clicking Submit after each object.

The reason the Magic Wand doesn’t import multiple objects is that you will often have small bits of “noise” detected around your main object, and keeping those as outlines would require some manual cleaning.

Annotation editing tools

After selecting an annotation, you’ll get access to the editing tools, allowing you to modify it. 

Undo

If you would like to discard the edits you have made to your polygon, click this icon and none of your edits will be saved.

Clone

If you are annotating a large number of very similar objects, you can copy/paste your annotations using cloning mode. Click this icon and you will see a copy of the annotation following your mouse cursor; then click anywhere on the map to clone your annotation.

Cut holes

You can cut holes into your polygon by using the cut hole tool when editing a polygon. Here is an example video showing the drawing of a new polygon and then adding a hole to it.
View the video example

Rotate

You can edit the rotation of your polygon using the rotate icon. After selecting it, you'll need to use the white circle at the top to rotate your annotation.

Rotate and resize

When in editing mode, with no tool selected, you can also use the top circle button to resize and rotate your annotation.

Submit

When you are happy with the changes, you can click Submit or press Enter on your keyboard to submit the edits.

Accuracy areas

Accuracy areas allow you to calculate the accuracy of your detector.

How to calculate the accuracy of your detector using Accuracy areas:

1. Create Accuracy areas

Draw one or more Accuracy area(s) where you want to test your detector.

2. Annotate the Accuracy areas

Annotate all the objects inside your areas.

3. Train the detector

Train your detector and visualize the score, displayed in the drop-down just below the "Train Detector" button.

Accuracy score

When you are training a detector, you can calculate the accuracy score for your detector as a whole (displayed in the top bar) as well as the accuracy score for each individual Accuracy area you created (displayed in the top left corner of each Accuracy area). The accuracy scores for all created Accuracy areas are also listed in the Annotation Areas panel in the right-hand menu.

There, they can be easily sorted, making it quick to navigate to areas where your detector’s performance may be subpar.

How is the score computed?

The accuracy of your detector might be affected by multiple parameters:

  • The number of training annotations
  • The number of training areas containing objects or containing background and counter-examples
  • The complexity of the background and counter-example objects

The meaning of the score is different depending on whether your detector is in Segmentation mode or Count mode.

In count mode…

For count mode detectors, the accuracy score to use is the Counting F-score. This score is composed of two other scores, recall and precision. The recall is the percentage of your outlined objects in your accuracy area that were correctly matched by a detection. The precision is the percentage of your detections that matched an outline. Combined, these can be used to easily compute a single score called the F-score.
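
For intuition, here is a minimal sketch of that arithmetic (this is not Picterra's internal implementation, and the matching counts below are hypothetical):

```python
# Minimal sketch of the Counting F-score arithmetic (not Picterra's internal code).
# Hypothetical counts for one Accuracy area:
matched = 80       # outlined objects correctly matched by a detection (true positives)
annotated = 100    # total outlined objects in the Accuracy area
detections = 90    # total detections produced inside the Accuracy area

recall = matched / annotated        # fraction of outlined objects that were found
precision = matched / detections    # fraction of detections that matched an outline
f_score = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(f"recall={recall:.2f} precision={precision:.2f} f-score={f_score:.2f}")
```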

If you want to see all of these scores (recall, precision, F-score, and foreground IoU) regardless of which mode you are in, they are all available in the training report, which is generated each time you train your detector.

In segmentation mode…

For segmentation mode detectors, the accuracy score to use is the Per-Pixel F-Score.

This is a measure of how well your detected regions overlap with the regions you outlined in your Accuracy areas. It is a “per pixel” score, meaning it is not suited for matching detected and annotated objects; if you are trying to detect individual objects, you should be in Count mode, not Segmentation mode.
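
For intuition, here is a minimal sketch of a per-pixel F-score computed from two binary masks (assuming NumPy; the masks are hypothetical toy data, and this is not Picterra's internal code):

```python
# Minimal sketch of a per-pixel F-score between two binary masks.
# Not Picterra's internal code; the masks here are hypothetical toy data.
import numpy as np

annotated = np.array([[1, 1, 0],
                      [1, 0, 0],
                      [0, 0, 0]], dtype=bool)  # pixels you outlined
detected = np.array([[1, 1, 0],
                     [0, 0, 0],
                     [0, 0, 1]], dtype=bool)   # pixels the detector returned

tp = np.logical_and(detected, annotated).sum()  # pixels both masks agree on
precision = tp / detected.sum()
recall = tp / annotated.sum()
f_score = 2 * precision * recall / (precision + recall)

# Relationship to the deprecated IoU (Jaccard index): f = 2 * iou / (1 + iou)
iou = tp / np.logical_or(detected, annotated).sum()
print(f"per-pixel f-score={f_score:.2f}, iou={iou:.2f}")
```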

DEPRECATED: Formerly we used the intersection over union, also known as the Jaccard index. This was changed as of 26/09/2022.

Sign up for Picterra University for a more in-depth understanding of detector accuracy, as well as many of the other advanced features of the platform.

Accuracy is relative!

Note that there is no single magical number that can be called the “accuracy” of your machine learning model.

This is simply not how machine learning-based approaches work, and it is a common misunderstanding when approaching machine learning. Along with the question “How accurate is your model?”, one must necessarily also ask the question “Accuracy on what?”.

You can train a building model that performs at 100% on one set of buildings, then run it on a different set of buildings that look very different from what you trained it on and get a much lower score; but maybe you don’t care about those other buildings anyway. So it really depends on what data you are assessing your model against.

This is why we’ve created accuracy areas for you. It is up to you to properly score your model on imagery corresponding to the scenario your model is meant to work on. Be sure to create ample variety with your accuracy areas: if they do not cover regions representative of what you are trying to detect at scale, your accuracy score will not be representative of the true score either.

Testing areas

Testing areas serve as a way to quickly get some output over selected regions of your training imagery, as a qualitative check of your detector’s performance, without having to spend time annotating as is required for Accuracy areas. Both Testing areas and Accuracy areas output results within them after training; the main difference is that Testing areas do not require annotations inside them and will not output any quantitative measure of your detector’s accuracy.

We recommend that you only use Testing areas as a tool to help you decide where to put more training and accuracy areas, because in the end they contribute neither to the performance of your detector nor to the representativeness of its score.

Importing annotations

The Picterra platform offers the possibility to import some of your existing data as annotations within a detector. This is a handy feature if, for example, you are looking to automate a digitization process that you were previously doing manually.

To import data, you need your data in GeoJSON format. If you have it in another format (KML, Shapefile), convert it to GeoJSON first. /!\ The coordinate reference system must be EPSG:4326 (WGS84 ellipsoid), not any other projection system.

When creating or improving a detector, use the Import button in the toolbar and select a GeoJSON file. The file should only contain Polygons (no lines). A conversion sketch is shown below.
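
As an example of the conversion step, here is a minimal sketch using the third-party geopandas library (not part of the Picterra platform); the file names are hypothetical, and whether MultiPolygons are accepted is an assumption (the requirement above only says no lines):

```python
# Minimal sketch using geopandas (a third-party library, not part of Picterra):
# convert a Shapefile to GeoJSON in EPSG:4326, keeping only polygon geometries.
import geopandas as gpd

gdf = gpd.read_file("my_annotations.shp")  # hypothetical input file
gdf = gdf.to_crs(epsg=4326)                # reproject to WGS84 (EPSG:4326)

# The import expects polygons only, so drop lines and points.
gdf = gdf[gdf.geom_type.isin(["Polygon", "MultiPolygon"])]

gdf.to_file("my_annotations.geojson", driver="GeoJSON")
```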

Exporting annotations

Annotations can be downloaded directly from the Detector Menu button or from the “Info” popup of the detector.

This will generate a zip file containing one GeoJSON file per image in your detector.

Markers

Markers allow you to add notes to your dataset.

They are a helpful tool during the development of the model, as they allow you to leave and share “comments” on your data.

You can add a new marker by simply selecting the Markers button and clicking on the location where it needs to be placed. Then you’ll be able to add a note to that marker.

Markers can be used for:

  • Marking clouds or other atmospheric disturbances in your imagery
  • Marking significant landmarks or points of interest
  • Marking places where the detector did poorly to remember to come back and add annotations there
  • Marking places where you may have physically visited a detection location already
  • Any other interesting observations you might have

They are also used as part of the detector collaboration workflow: mentioning users on a marker to, for example, share a task or request input, as well as adding comments.

Using markers for collaboration

For more information on using markers in the detector collaboration workflow, see this page.

Keyboard shortcuts

Use these keyboard shortcuts to speed up your work with the markers:

  • ‘Enter’ submits a comment
  • ‘Ctrl’ + ‘Enter’ or ‘Shift’ + ‘Enter’ adds a new line in a comment
  • ‘Down Arrow’ and ‘Enter’ can be used to select mentioned users

Importing and exporting markers

You can import and export markers to/from GeoJSON using the Marker menu located in the markers tab. 

Importing markers requires a GeoJSON file containing a FeatureCollection of Point geometries. Each feature can have a “Comment” text attribute that will be used as the imported marker’s initial comment. The expected structure is sketched below.
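
To illustrate that structure, here is a minimal sketch that writes such a marker file using Python's standard json module; the file name, coordinates, and comment texts are hypothetical:

```python
# Minimal sketch: build a markers GeoJSON file (a FeatureCollection of Points,
# each with a "Comment" attribute). File name and coordinates are hypothetical.
import json

markers = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [6.5668, 46.5197]},  # lon, lat (EPSG:4326)
            "properties": {"Comment": "Cloud cover here - ignore detections"},
        },
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [6.5710, 46.5230]},
            "properties": {"Comment": "Detector missed several objects - add annotations"},
        },
    ],
}

with open("markers.geojson", "w") as f:
    json.dump(markers, f, indent=2)
```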