An IoT company is collecting data from multiple sensors and is streaming the data to Amazon Managed Streaming for Apache Kafka (Amazon MSK). Each sensor type has its own topic, and each topic has the same number of partitions.
The company is planning to turn on more sensors. However, the company wants to evaluate which sensor types are producing the most data so that the company can scale accordingly. The company needs to know which sensor types have the largest values for the following metrics: BytesInPerSec and MessagesInPerSec.
Which level of monitoring for Amazon MSK will meet these requirements?
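For context, per-topic values of BytesInPerSec and MessagesInPerSec only appear in CloudWatch once the cluster's enhanced-monitoring level is raised above the default. A minimal boto3 sketch of setting that level follows; the cluster ARN is a placeholder.

```python
import boto3

kafka = boto3.client("kafka")

# Placeholder ARN for the MSK cluster receiving the sensor topics.
cluster_arn = "arn:aws:kafka:us-east-1:123456789012:cluster/sensors/example"

# update_monitoring requires the cluster's current version string.
current_version = kafka.describe_cluster(ClusterArn=cluster_arn)[
    "ClusterInfo"
]["CurrentVersion"]

# Raise the monitoring level so BytesInPerSec and MessagesInPerSec
# are emitted per topic, per broker to CloudWatch.
kafka.update_monitoring(
    ClusterArn=cluster_arn,
    CurrentVersion=current_version,
    EnhancedMonitoring="PER_TOPIC_PER_BROKER",
)
```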
A company wants to enrich application logs in near-real time and use the enriched dataset for further analysis. The application is running on Amazon EC2 instances across multiple Availability Zones and storing its logs in Amazon CloudWatch Logs. The enrichment source is stored in an Amazon DynamoDB table.
Which solution meets these requirements for event collection and enrichment?
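One common pattern, sketched below, fans the logs out through a CloudWatch Logs subscription filter to an AWS Lambda function that enriches each event from the DynamoDB table. The table name and the app_id lookup key are assumptions for illustration.

```python
import base64
import gzip
import json

import boto3

# Hypothetical enrichment table; the key attribute is assumed.
table = boto3.resource("dynamodb").Table("log-enrichment")


def handler(event, context):
    # CloudWatch Logs delivers subscription payloads base64-encoded and gzipped.
    payload = json.loads(
        gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    )

    enriched = []
    for log_event in payload["logEvents"]:
        record = json.loads(log_event["message"])
        # Join the log line against the DynamoDB enrichment source.
        item = table.get_item(Key={"app_id": record["app_id"]}).get("Item", {})
        record.update(item)
        enriched.append(record)

    # Downstream delivery (e.g. to Kinesis Data Firehose) is omitted here.
    return enriched
```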
A marketing company has an application that stores event data in an Amazon RDS database. The company is replicating this data to Amazon Redshift for reporting and business intelligence (BI) purposes. New event data is continuously generated and ingested into the RDS database throughout the day and captured by a change data capture (CDC) replication task in AWS Database Migration Service (AWS DMS). The company requires that the new data be replicated to Amazon Redshift in near-real time.
Which solution meets these requirements?
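As a sketch of one near-real-time pattern, the DMS CDC task can be pointed at a target endpoint that streams changes onward, here Amazon Kinesis Data Streams, with downstream delivery into Redshift. The ARNs below are placeholders.

```python
import boto3

dms = boto3.client("dms")

# Placeholder ARNs; the role must allow DMS to write to the stream.
dms.create_endpoint(
    EndpointIdentifier="events-cdc-target",
    EndpointType="target",
    EngineName="kinesis",
    KinesisSettings={
        "StreamArn": "arn:aws:kinesis:us-east-1:123456789012:stream/events-cdc",
        "MessageFormat": "json",
        "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-kinesis-access",
    },
)
```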
A company has a business unit that uploads .csv files to an Amazon S3 bucket. The company's data platform team has set up an AWS Glue crawler to perform discovery and to create tables and schemas. An AWS Glue job writes processed data from the created tables to an Amazon Redshift database. The AWS Glue job handles column mapping and creates the Amazon Redshift table appropriately. When the AWS Glue job is rerun for any reason during the day, duplicate records are introduced into the Amazon Redshift table.
Which solution will update the Redshift table without duplicates when jobs are rerun?
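The standard Redshift remedy, sketched below, is to have the rerun load into a staging table and then merge with delete-and-insert, so existing rows are replaced rather than duplicated. Cluster, table, and column names are assumptions; the same statements could run as a postaction of the Glue job.

```python
import boto3

rsd = boto3.client("redshift-data")

# Classic Redshift merge: delete rows already present, insert the fresh
# batch, then clear the staging table (note that TRUNCATE commits in
# Redshift). All identifiers here are hypothetical.
rsd.batch_execute_statement(
    ClusterIdentifier="analytics-cluster",
    Database="dev",
    DbUser="etl_user",
    Sqls=[
        "DELETE FROM events USING events_staging "
        "WHERE events.event_id = events_staging.event_id",
        "INSERT INTO events SELECT * FROM events_staging",
        "TRUNCATE events_staging",
    ],
)
```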
A company has an application that ingests streaming data. The company needs to analyze this stream over a 5-minute timeframe to evaluate the stream for anomalies with Random Cut Forest (RCF) and to summarize the current count of status codes. The source and summarized data should be persisted for future use.
Which approach would enable the desired outcome while keeping data persistence costs low?
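For illustration, the summarization half is typically a tumbling-window aggregate in a Kinesis Data Analytics SQL application (the same application can apply RANDOM_CUT_FOREST for the anomaly scores). The sketch below embeds assumed stream and column names in a Python constant.

```python
# Hedged sketch of the windowed SQL a Kinesis Data Analytics (SQL)
# application might run; stream and column names are assumptions.
SUMMARY_SQL = """
CREATE OR REPLACE STREAM "SUMMARY_STREAM" ("status_code" INTEGER, "code_count" INTEGER);

CREATE OR REPLACE PUMP "SUMMARY_PUMP" AS
  INSERT INTO "SUMMARY_STREAM"
  SELECT STREAM "status_code", COUNT(*) AS "code_count"
  FROM "SOURCE_SQL_STREAM_001"
  -- Tumbling 5-minute window over the row time.
  GROUP BY "status_code",
           STEP("SOURCE_SQL_STREAM_001".ROWTIME BY INTERVAL '5' MINUTE);
"""
```

Both the source and summary streams can then be delivered to Amazon S3, for example via Kinesis Data Firehose, which keeps long-term persistence costs low.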