Understand Your Results and Export Folder
Source: vignettes/deployment-export-format.Rmd

This is Chapter 7 of 10 in the beginner path.
After the first successful run, many beginners ask the same question:
“Which of these files do I actually care about?” That is a good
question. A finished moover run contains several outputs,
but you do not need to understand every file at once.
In this chapter, we’ll walk through the main files and what they are for.
Start with the run folder
A completed run sits inside runs/<run_id>/. Inside
that folder you will usually see several subfolders, including
results/, models/, plots/, and
qc/.
A useful way to think about them is this:
- results/ holds the data products from the run
- models/ holds the model export bundles
- plots/ holds figures that help you inspect performance
- qc/ holds previews and checks that help you confirm the data looked sensible
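To orient yourself, it can help to list a run folder from the R console. This is a minimal base-R sketch; the run id shown is hypothetical, so substitute the one from your own runs/ folder.

```r
# List the top-level contents of one run ("2024-06-01_run01" is a made-up id)
run_dir <- file.path("runs", "2024-06-01_run01")
list.files(run_dir)

# Then drill into a subfolder, for example the data products:
list.files(file.path(run_dir, "results"))
```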
The export bundle
Inside the run’s models/ folder, you will find one or
more exported model bundles. A bundle is the part most likely to be
shared with collaborators.
A typical bundle includes:
- rf_model_full.rds
- feature_manifest.csv
- model_spec.json
- metrics_overall.csv
- metrics_by_class.csv
- confusion_matrix.csv
- test_vectors.csv
- rf_tree_dump.json
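A quick way to see what is in a bundle is to list it and read the feature manifest, which records the inputs the model expects. The bundle folder name below is an assumption; check what your models/ folder actually contains.

```r
# Hypothetical paths -- the run id and bundle folder name will vary.
bundle_dir <- file.path("runs", "2024-06-01_run01", "models", "rf_bundle")
list.files(bundle_dir)

# The manifest names the features the model was trained on:
manifest <- read.csv(file.path(bundle_dir, "feature_manifest.csv"))
head(manifest)
```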
Which files matter to different people?
Different users care about different parts of the export.
For the R user
The most important files are usually:
- the model bundle folder itself
- rf_model_full.rds
- feature_manifest.csv
- the metrics files
These are the files you need to inspect the model in R or use it again later.
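Loading those files back into an R session looks roughly like this. The paths are hypothetical, and `readRDS()` simply returns whatever model object was saved, so the printed summary depends on the model type.

```r
# Hypothetical bundle path; adjust to match your run.
bundle_dir <- file.path("runs", "2024-06-01_run01", "models", "rf_bundle")

# The saved model object, ready to reuse with predict():
model <- readRDS(file.path(bundle_dir, "rf_model_full.rds"))
print(model)

# Overall and per-class performance as plain data frames:
metrics  <- read.csv(file.path(bundle_dir, "metrics_overall.csv"))
by_class <- read.csv(file.path(bundle_dir, "metrics_by_class.csv"))
```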
Why test vectors matter
The test vector files are especially useful because they give you real examples of inputs and expected outputs from the model. If somebody is reimplementing the feature calculations in Python or on an embedded device, the test vectors are usually the quickest way to check whether they are getting the same results.
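A round-trip check with the test vectors might look like the sketch below. The column layout is an assumption (feature columns plus an expected-label column, here called "expected_class"), so inspect the actual header of test_vectors.csv before relying on it.

```r
bundle_dir <- file.path("runs", "2024-06-01_run01", "models", "rf_bundle")
model <- readRDS(file.path(bundle_dir, "rf_model_full.rds"))
tv <- read.csv(file.path(bundle_dir, "test_vectors.csv"))

# Split off the expected labels ("expected_class" is a guessed column name):
features <- tv[, setdiff(names(tv), "expected_class")]
pred <- predict(model, newdata = features)

# Proportion of predictions matching the expected outputs:
mean(as.character(pred) == tv$expected_class)
```

If your reimplementation of the feature calculations produces the same feature values, feeding them through the model should reproduce the expected outputs row for row.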