Become Amazon Certified with updated AWS-DEA-C01 exam questions and correct answers
A telecommunications company collects network usage data throughout each day at a rate of several thousand data points each second. The company runs an application to process the usage data in real time. The company aggregates and stores the data in an Amazon Aurora DB instance. Sudden drops in network usage usually indicate a network outage. The company must be able to identify sudden drops in network usage so the company can take immediate remedial actions. Which solution will meet this requirement with the LEAST latency?
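The requirement in this question is detecting a sudden drop in a high-rate metric stream. Independent of which AWS service the answer choices name, the underlying technique can be sketched in plain Python as a rolling-average drop detector; the window size and drop threshold below are illustrative assumptions, not values from the question:

```python
from collections import deque

def make_drop_detector(window=60, drop_ratio=0.5):
    """Return a checker that flags a data point as a sudden drop when it
    falls below drop_ratio times the rolling average of the last
    `window` points."""
    history = deque(maxlen=window)

    def check(value):
        # Compute the baseline from prior points before recording this one.
        baseline = sum(history) / len(history) if history else None
        history.append(value)
        # Only flag once a baseline exists.
        return baseline is not None and value < drop_ratio * baseline

    return check

detect = make_drop_detector(window=5, drop_ratio=0.5)
readings = [100, 102, 98, 101, 99, 20]   # final point simulates an outage
flags = [detect(v) for v in readings]
# Only the last reading (20, well below half the ~100 baseline) is flagged.
```

In a managed streaming setup this logic would run continuously against each arriving record, which is why the question emphasizes latency: the detector must see data points as they stream in, not after a batch load.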
A company has three subsidiaries. Each subsidiary uses a different data warehousing solution. The first subsidiary hosts its data warehouse in Amazon Redshift. The second subsidiary uses Teradata Vantage on AWS. The third subsidiary uses Google BigQuery. The company wants to aggregate all the data into a central Amazon S3 data lake. The company wants to use Apache Iceberg as the table format. A data engineer needs to build a new pipeline to connect to all the data sources, run transformations by using each source engine, join the data, and write the data to Iceberg. Which solution will meet these requirements with the LEAST operational effort?
A data engineer wants to orchestrate a set of extract, transform, and load (ETL) jobs that run on AWS. The ETL jobs contain tasks that must run Apache Spark jobs on Amazon EMR, make API calls to Salesforce, and load data into Amazon Redshift. The ETL jobs need to handle failures and retries automatically. The data engineer needs to use Python to orchestrate the jobs. Which service will meet these requirements?
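The core requirement here is automatic failure handling and retries in Python-based orchestration. A service-agnostic sketch of that idea follows; the function names, retry count, and backoff scheme are illustrative assumptions, not any specific AWS API:

```python
import time

def run_with_retries(task, max_attempts=3, backoff_seconds=0.0):
    """Run a task callable, retrying on any exception up to max_attempts
    total attempts, with simple linear backoff between attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == max_attempts:
                raise  # retries exhausted; surface the failure
            time.sleep(backoff_seconds * attempt)

# Simulate a flaky task (e.g., a transient API call) that succeeds on
# its third attempt.
calls = {"n": 0}
def flaky_task():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "loaded into Redshift"

result = run_with_retries(flaky_task, max_attempts=3)
```

In Apache Airflow (the engine behind Amazon Managed Workflows for Apache Airflow), this behavior is declared per task through arguments such as `retries` and `retry_delay` rather than hand-rolled, which is what lets an orchestrator handle failures automatically across heterogeneous tasks like EMR Spark jobs and Salesforce API calls.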
A Data Engineering Consultant is tasked with establishing a CI/CD pipeline for a data engineering project in AWS. The project involves a multi-stage data processing application, requiring reliable build, test, and deployment phases, and should leverage infrastructure as code for consistency and speed. The team desires a highly automated pipeline, well integrated into the AWS ecosystem, with minimal manual interventions and quick turnaround times for deploying updates.
Which of the following setups would best meet these requirements?
© Copyright DumpsCertify 2025. All Rights Reserved