Production ML Systems: Test Your Knowledge

You are using machine learning to build a classification model that predicts unicorn appearances. Your dataset contains 10,000 unicorn appearances and 10,000 unicorn non-appearances. The features include location, time of day, elevation, temperature, humidity, tree cover, presence of a rainbow, and several others.

Question 1. After launching your unicorn appearance predictor, you need to keep your model fresh by retraining on new data. Because you are gathering more new data than you can train on, you decide to limit the training data by sampling the new data over a window of time. You also need to account for daily and annual patterns in unicorn appearances. What window of time do you choose?

- One day, because a larger window would produce lots of data and your model would take too long to train.
- One week, so that your dataset is not too large but you can still smooth out daily patterns.
- One year, to ensure that your model is not biased by yearly patterns.

Question 2. You launch your unicorn appearance predictor. It's working well! You go on vacation and return after three weeks to find that your model quality has dropped significantly. Assume that unicorn behavior is unlikely to change significantly in three weeks. What is the most likely explanation for the decrease in quality?

- Training-serving skew: the format of the serving data gradually changed at some point after the model started serving.
- You used accuracy as a metric during training.
- Your model is stale.
- None of the above.

Question 3. You review the model's predictions for Antarctica and discover that the model has been making poor predictions there since it was released into production. Which of the following could be the source of the problem?

- You didn't have enough training examples for Antarctica.
- You used dynamic training instead of static training.
- Your model has become stale.
- All of the above.

Question 4. Your unicorn appearance predictor has operated for a year. You've fixed many problems, and quality is now high. However, you notice a small but persistent problem: your model quality has drifted slightly lower in urban areas. What might be the cause?

- The high quality of your predictions leads users to easily find unicorns, affecting unicorn appearance behavior itself.
- Urban areas are difficult to model.
- Unicorn appearances are reported multiple times in heavily populated areas, skewing your training data.

Question 5. Through all your troubleshooting, you've greatly improved the quality of the unicorn model's predictions, and as a result usage has increased tenfold. However, users are now complaining that the model is extremely slow: inference requests typically take more than 30 seconds to return predictions. Which of the following changes could help solve this problem?

- Switch the model from dynamic training to static training.
- Switch the model from dynamic inference to static inference.
- Validate the model quality before serving.
- None of the above solutions would help.

The code sketches below illustrate the ideas behind each question.
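For Question 1, a window that covers a full year captures both the daily and the annual cycles the scenario calls out. Here is a minimal sketch of that windowing, assuming the new data lives in a CSV with a timestamp column (the file name and column name are hypothetical):

```python
import pandas as pd

# Hypothetical sightings log with a 'timestamp' column.
reports = pd.read_csv("unicorn_reports.csv", parse_dates=["timestamp"])

# Keep one full year of data so both daily and annual cycles are represented.
cutoff = reports["timestamp"].max() - pd.DateOffset(years=1)
training_window = reports[reports["timestamp"] >= cutoff]
```

A one-day or one-week window would keep training fast, but it would systematically miss the annual pattern the question asks you to account for.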
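Question 2 points at training-serving skew: the serving data drifting away from what the model saw during training. A rough sketch of one way to detect it, assuming you keep a snapshot of training data and a sample of recent serving requests as DataFrames (the two-standard-deviation threshold is an arbitrary illustration, not a standard rule):

```python
import pandas as pd

def skew_report(train: pd.DataFrame, serving: pd.DataFrame) -> None:
    """Flag schema and distribution differences between training and serving data."""
    # Schema skew: features present in one source but not the other.
    mismatched = set(train.columns) ^ set(serving.columns)
    if mismatched:
        print(f"Schema mismatch on columns: {sorted(mismatched)}")

    # Crude distribution check on shared numeric features: a large shift in
    # the mean suggests the serving data has drifted since training.
    shared = train.columns.intersection(serving.columns)
    for col in train[shared].select_dtypes("number").columns:
        t_mean, s_mean = train[col].mean(), serving[col].mean()
        t_std = train[col].std() or 1.0
        if abs(t_mean - s_mean) > 2 * t_std:
            print(f"Possible skew in '{col}': "
                  f"train mean {t_mean:.2f}, serving mean {s_mean:.2f}")
```

Running a check like this on a schedule would have caught the gradual format change while you were on vacation, instead of three weeks later.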
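For Question 3, having too few Antarctic examples is the kind of gap a per-region count surfaces immediately. A minimal sketch, assuming a region column exists (in the real dataset you would derive it from the raw location feature):

```python
import pandas as pd

# File and column names are assumptions for illustration.
reports = pd.read_csv("unicorn_reports.csv")

# Regions with very few examples (for instance, Antarctica) tend to get poor
# predictions because the model never saw enough of their conditions.
counts = reports.groupby("region").size().sort_values()
print(counts.head(10))
```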
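One of the Question 4 options is that appearances in dense areas get reported multiple times, over-weighting urban examples in the training data. A sketch of one way to collapse duplicate reports, assuming latitude, longitude, and timestamp columns exist; the rounding granularity is a guess at what counts as "the same sighting":

```python
import pandas as pd

reports = pd.read_csv("unicorn_reports.csv", parse_dates=["timestamp"])

# In a city, one appearance may be reported by many observers. Bucketing
# reports into coarse space-time cells and keeping one report per cell keeps
# dense areas from being over-weighted in the training data.
reports["lat_cell"] = reports["latitude"].round(2)
reports["lon_cell"] = reports["longitude"].round(2)
reports["hour"] = reports["timestamp"].dt.floor("h")
deduped = reports.drop_duplicates(subset=["lat_cell", "lon_cell", "hour"])
```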
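Question 5 contrasts dynamic inference (scoring each request as it arrives) with static inference (scoring everything offline and serving from a lookup table). A toy sketch of the static approach; `predict` stands in for the real trained model, and the feature grid is invented:

```python
import itertools

def predict(location: str, time_of_day: str, has_rainbow: bool) -> float:
    """Stand-in for the trained model's scoring function (assumed)."""
    return 0.9 if has_rainbow else 0.1

# Static inference: score every expected feature combination offline, once.
locations = ["forest", "meadow", "mountain", "urban"]
times_of_day = ["dawn", "day", "dusk", "night"]
rainbows = [True, False]

prediction_table = {
    key: predict(*key)
    for key in itertools.product(locations, times_of_day, rainbows)
}

def serve(location: str, time_of_day: str, has_rainbow: bool) -> float:
    # Serving is now a dictionary lookup, so latency no longer depends on
    # model size or request volume.
    return prediction_table[(location, time_of_day, has_rainbow)]
```

The trade-off is freshness and coverage: static inference only works when the space of possible inputs is small enough to enumerate, which is how it eliminates the 30-second per-request latency.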