- Real-Time Data Streaming and Processing
Kinesis provides real-time data streaming. Producer applications add data records to the stream and consumer applications read them shortly after they are added; the delay between a producer adding a record and a consumer reading it is typically less than one second.
- Durability and Elasticity
Kinesis Stream provides high availability and durability for the data stream, ensuring that data is not lost and that complex real-time processing can be achieved. Kinesis Stream can scale based on the traffic flow of data records.
- Managed Service
Kinesis is a fully AWS-managed service. You don't need to manage the underlying Kinesis infrastructure; you only need to configure the Kinesis stream when you create it.
- Concurrent Consumers
AWS Kinesis does not restrict the number of consumer applications that can read data records. Multiple consumer applications can read the same record and process it. For example, you can achieve two tasks on the data records: store the raw data on S3 and process the data records. You can have two consumer applications, where one application stores the raw data to S3 and the second application processes the data. In addition to this, you can also use the Kinesis Client Library (KCL) to have multiple consumer workers process data records.
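A minimal illustration of two independent consumers reading the same records, assuming Python with boto3; the stream name, region, and single-shard assumption are placeholders, not part of these notes:

```python
import boto3

# Assumed stream name and region; replace with your own. Each consumer keeps
# its own shard iterator, so both read the same records independently.
kinesis = boto3.client("kinesis", region_name="us-east-1")
STREAM = "example-stream"

shard_id = kinesis.list_shards(StreamName=STREAM)["Shards"][0]["ShardId"]

def read_once(label):
    # TRIM_HORIZON starts from the oldest record still retained in the shard.
    iterator = kinesis.get_shard_iterator(
        StreamName=STREAM, ShardId=shard_id, ShardIteratorType="TRIM_HORIZON"
    )["ShardIterator"]
    records = kinesis.get_records(ShardIterator=iterator, Limit=10)["Records"]
    for r in records:
        print(label, r["SequenceNumber"], r["Data"])

# Two "applications" reading the same data: one could archive to S3,
# the other could process it.
read_once("archiver")
read_once("processor")
```

In practice each consumer application would use the KCL (or enhanced fan-out) so workers checkpoint their progress instead of re-reading from TRIM_HORIZON on every run.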
Kinesis limits
- Stores records of a stream for 24 hours by default, which can be extended to a maximum of 7 days
- The maximum size of a data blob (the data payload before Base64 encoding) within one record is 1 megabyte (MB)
- Each shard can support up to 1,000 PUT records per second
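As an illustration of the retention limit above, the default 24-hour retention can be raised to the 7-day maximum with a single API call. A minimal boto3 sketch, with a placeholder stream name:

```python
import boto3

kinesis = boto3.client("kinesis")

# Extend retention from the default 24 hours to the 7-day maximum (168 hours).
kinesis.increase_stream_retention_period(
    StreamName="example-stream",   # placeholder stream name
    RetentionPeriodHours=168,
)
```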
AWS Kinesis Data Firehose
- Kinesis Data Firehose is a fully managed service; there is no need to write applications or manage resources
- It is a data transfer solution for delivering real-time streaming data to destinations such as S3, Redshift, the Elasticsearch service, and Splunk.
- It is NOT real time, as it buffers incoming streaming data to a certain size or for a certain period of time before delivering it to destinations. Buffer Size is specified in MBs and Buffer Interval in seconds (see the sketch after this list).
- Supports multiple producers as data sources, including a Kinesis data stream, the Kinesis Agent, the Kinesis Data Firehose API (via the AWS SDK), CloudWatch Logs, CloudWatch Events, and AWS IoT
- Supports out-of-the-box data transformation as well as custom transformation using a Lambda function to transform incoming source data and deliver the transformed data to destinations
- Supports interface VPC endpoints to keep traffic between the Amazon VPC and Kinesis Data Firehose from leaving the Amazon network. Interface VPC endpoints don't require an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection
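As a sketch of the buffering behavior described above, a delivery stream to S3 can be created with explicit Buffer Size (MB) and Buffer Interval (seconds) hints, assuming Python with boto3. The stream name, bucket ARN, and role ARN are placeholders, and the role must already grant Firehose access to the bucket:

```python
import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="example-delivery-stream",   # placeholder name
    DeliveryStreamType="DirectPut",                  # producers call PutRecord directly
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-role",  # placeholder
        "BucketARN": "arn:aws:s3:::example-bucket",                 # placeholder
        # Firehose flushes when EITHER limit is reached: 5 MB or 300 seconds.
        "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 300},
    },
)

# Producers then push records directly to the delivery stream.
firehose.put_record(
    DeliveryStreamName="example-delivery-stream",
    Record={"Data": b'{"event": "click"}\n'},
)
```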
Data Stream
AWS Kinesis Stream provides streams of data records used for real-time data transfer. It follows the producer/consumer pattern: producers push data to a stream and consumers read data from it. Kinesis Streams can be used to process and aggregate data by integrating different AWS resources such as S3, Redshift, Elasticsearch, and so on. For example, to analyze the call data records of a telecommunication network, which are enormous in volume, the processing needs to happen in real time. Whenever call data records are generated, producer applications push the data to a Kinesis stream, and a consumer application reads the data from the stream and processes it, generating reports and billing for the specific call data records. Simultaneously, Kinesis Streams can also invoke events to store the raw data to S3, as configured. This makes it possible for an application to achieve high throughput, reliability, and durability.
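A minimal producer sketch for the call-data-record example above, assuming Python with boto3; the stream name, record fields, and the choice of the caller's number as partition key are placeholders for illustration:

```python
import boto3
import json

kinesis = boto3.client("kinesis")

call_record = {"caller": "+15551234567", "callee": "+15557654321", "duration_s": 182}

# The partition key decides which shard receives the record; records with the
# same key land on the same shard and therefore stay in order.
response = kinesis.put_record(
    StreamName="call-data-records",            # placeholder stream name
    Data=json.dumps(call_record).encode(),
    PartitionKey=call_record["caller"],
)
print(response["ShardId"], response["SequenceNumber"])
```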
- Amazon Kinesis Data Streams enable real-time processing of streaming data at massive scale
- Kinesis Streams enables building of custom applications that process or analyze streaming data for specialized needs
- Data such as clickstreams, application logs, social media, etc. can be added from multiple sources and, within seconds, is available for processing by the Amazon Kinesis Applications
- Kinesis provides ordering of records, as well as the ability to read and/or replay records in the same order to multiple applications.
- Amazon Kinesis is designed to process streaming big data, and the pricing model allows a heavy PUT rate.
- Kinesis Streams is useful for rapidly moving data off data producers and then continuously processing the data, be it to transform the data before emitting to a data store, run real-time metrics and analytics, or derive more complex data streams for further processing
- Accelerated log and data feed intake: Data producers can push data to Kinesis stream as soon as it is produced, preventing any data loss and making it available for processing within seconds.
- Real-time metrics and reporting: Metrics can be extracted and used to generate reports from data in real-time.
- Real-time data analytics: Run real-time streaming data analytics.
- Complex stream processing: Create Directed Acyclic Graphs (DAGs) of Kinesis Applications and data streams, with Kinesis applications adding to another Amazon Kinesis stream for further processing, enabling successive stages of stream processing.
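A hedged sketch of the "successive stages" idea above: one stage reads records from a source stream, derives a simple metric, and emits it into a second stream for the next stage. Both stream names and the record fields are assumptions for illustration:

```python
import boto3
import json

kinesis = boto3.client("kinesis")

# Source and derived streams are assumed to already exist.
SOURCE, DERIVED = "raw-clicks", "click-metrics"

shard_id = kinesis.list_shards(StreamName=SOURCE)["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName=SOURCE, ShardId=shard_id, ShardIteratorType="LATEST"
)["ShardIterator"]

out = kinesis.get_records(ShardIterator=iterator, Limit=100)
for record in out["Records"]:
    click = json.loads(record["Data"])
    metric = {"session": click["session"], "count": 1}   # trivial derived record
    # Emit into the next stream in the DAG for further processing.
    kinesis.put_record(
        StreamName=DERIVED,
        Data=json.dumps(metric).encode(),
        PartitionKey=metric["session"],
    )
```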
A Kinesis data stream is a set of shards
- Shard
- Each shard has a sequence of data records
- Streams are made of shards; a shard is the base throughput unit of a Kinesis stream.
- Each shard supports up to 5 transactions per second for reads, up to a maximum total data read rate of 2 MB per second; and up to 1,000 records per second for writes, up to a maximum total data write rate of 1 MB per second (including partition keys)
- Each shard provides a fixed unit of capacity. If the limits are exceeded, either by data throughput or the number of PUT records, the put data call will be rejected with a ProvisionedThroughputExceeded exception.
- This can be handled by
- Implementing a retry on the data producer side, if this is due to a temporary rise of the stream's input data rate (see the retry sketch after this list)
- Dynamically scaling the number of shards (resharding) to provide enough capacity for the put data calls to consistently succeed
- Data Record
- A record is the unit of data stored in an Amazon Kinesis data stream.
- A record is composed of a sequence number, partition key, and data blob, which is an immutable sequence of bytes
- Maximum size of a data blob is 1 MB
- Partition key
- Partition key is used to segregate and route records to different shards of a stream.
- A partition key is specified by the data producer while adding data to an Amazon Kinesis stream
- Sequence number
- A sequence number is a unique identifier for each record.
- Kinesis assigns a sequence number when a data producer calls the PutRecord or PutRecords operation to add data to a stream.
- Sequence numbers for the same partition key generally increase over time; the longer the time period between PutRecord or PutRecords requests, the larger the sequence numbers become.
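Tying the shard limits, retry handling, and sequence numbers above together, here is a minimal boto3 sketch; the stream name, payload, and backoff values are assumptions, and it relies on boto3's modeled ProvisionedThroughputExceededException:

```python
import time
import boto3

kinesis = boto3.client("kinesis")

def put_with_retry(data: bytes, partition_key: str, attempts: int = 5):
    """Retry with exponential backoff when the shard's write limit is exceeded."""
    for attempt in range(attempts):
        try:
            resp = kinesis.put_record(
                StreamName="example-stream",      # placeholder stream name
                Data=data,
                PartitionKey=partition_key,
            )
            # Kinesis assigns the sequence number on a successful put.
            return resp["SequenceNumber"]
        except kinesis.exceptions.ProvisionedThroughputExceededException:
            # Temporary spike in input rate: back off and retry.
            time.sleep(2 ** attempt * 0.1)
    raise RuntimeError("put_record kept exceeding provisioned throughput; consider resharding")

print(put_with_retry(b'{"event": "example"}', partition_key="user-42"))
```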
Q1. You are deploying an application to track GPS coordinates of delivery trucks in the United States. Coordinates are transmitted from each delivery truck once every three seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. Which service should you use to implement data ingestion?
- Amazon Kinesis
- AWS Data Pipeline
- Amazon AppStream
- Amazon Simple Queue Service
Q2. You are deploying an application to collect votes for a very popular television show. Millions of users will submit votes using mobile devices. The votes must be collected into a durable, scalable, and highly available data store for real-time public tabulation. Which service should you use?
- Amazon Kinesis
- AWS Data Pipeline
- Amazon AppStream
- Amazon Simple Queue Service
- Amazon DynamoDB
- Amazon Redshift
(Amazon Kinesis. SQS/S3 would be a cost-effective way to queue or store your data, but they are not designed to handle a stream of data in real time.)
Q3. Your company is in the process of developing a next-generation pet collar that collects biometric information to assist families with promoting healthy lifestyles for their pets. Each collar will push 30 KB of biometric data in JSON format every 2 seconds to a collection platform that will process and analyze the data, providing health trending information back to the pet owners and veterinarians via a web portal. Management has tasked you to architect the collection platform, ensuring the following requirements are met: provide the ability for real-time analytics of the inbound biometric data; ensure processing of the biometric data is highly durable, elastic, and parallel; and persist the results of the analytic processing for data mining. Which architecture outlined below will meet the initial requirements for the collection platform?
- Utilize S3 to collect the inbound sensor data, analyze the data from S3 with a daily scheduled Data Pipeline, and save the results to a Redshift cluster.
- Utilize Amazon Kinesis to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to a Redshift cluster using EMR.
- Utilize SQS to collect the inbound sensor data, analyze the data from SQS with Amazon Kinesis, and save the results to a Microsoft SQL Server RDS instance.
- Utilize EMR to collect the inbound sensor data, analyze the data from EMR with Amazon Kinesis, and save the results to DynamoDB.
Q4. Your customer wants to consolidate their log streams (access logs, application logs, security logs, etc.) in one single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate the heuristics, which requires going back to data samples extracted from the last 12 hours. What is the best approach to meet your customer's requirements?
- Send all the log events to Amazon SQS. Set up an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics.
- Send all the log events to Amazon Kinesis; develop a client process to apply heuristics on the logs
- Configure Amazon CloudTrail to receive custom logs, use EMR to apply heuristics to the logs (CloudTrail is only for auditing)
- Set up an Auto Scaling group of EC2 syslogd servers, store the logs on S3, and use EMR to apply heuristics on the logs (EMR is for batch analysis)
- Log clicks in weblogs by URL, store to Amazon S3, and then analyze with Elastic MapReduce
- Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers
- Write click events directly to Amazon Redshift and then analyze with SQL
- Publish web clicks by session to an Amazon SQS queue, then periodically drain these events to Amazon RDS and analyze with SQL
Q6. You need to replicate API calls across two systems in real time. What tool should you use as a buffer and transport mechanism for API call events?
- AWS SQS
- AWS Lambda
- AWS Kinesis
- AWS SNS
(AWS Kinesis is an event stream service. Streams can act as buffers and transport across systems for in-order programmatic events, making it ideal for replicating API calls across systems)
Q7. Your social media monitoring application uses a Python app running on AWS Elastic Beanstalk to ingest tweets, Facebook updates, and RSS feeds into an Amazon Kinesis stream. A second AWS Elastic Beanstalk app generates key performance indicators into an Amazon DynamoDB table and powers a dashboard application. What is the most efficient option to prevent any data loss for this application?
- Use AWS Data Pipeline to replicate your DynamoDB tables into another region.
- Use the second AWS Elastic Beanstalk app to store a backup of Kinesis data onto Amazon Elastic Block Store (EBS), and then create snapshots from your Amazon EBS volumes.
- Add a second Amazon Kinesis stream in another Availability Zone and use AWS data pipeline to replicate data across Kinesis streams.
- Add a third AWS Elastic Beanstalk app that uses the Amazon Kinesis S3 connector to archive data from Amazon Kinesis into Amazon S3.
- Kinesis Firehose + RDS
- Kinesis Firehose + RedShift
- EMR using Hive
- EMR running Apache Spark
(Kinesis Firehose provides a managed service for aggregating streaming data and inserting it into RedShift. RedShift also supports ad-hoc queries over well-structured data using a SQL-compliant wire protocol, so the business team should be able to adopt this system easily.)
Q9. Your organization needs to ingest a big data stream into their data lake on Amazon S3. The data may stream in at a rate of hundreds of megabytes per second. What AWS service will accomplish the goal with the least amount of management?
- Amazon Kinesis Firehose
- Amazon Kinesis Streams
- Amazon CloudFront
- Amazon SQS
Q10. Your application generates a 1 KB JSON payload that needs to be queued and delivered to EC2 instances for applications. At the end of the day, the application needs to replay the data for the past 24 hours. In the near future, you will also need the ability for multiple other EC2 applications to consume the same stream concurrently. What is the best solution for this?
- Kinesis Data Streams
- Kinesis Firehose
- SNS
- SQS
A company is setting up a centralized logging solution on AWS and has several requirements. The company wants its Amazon CloudWatch Logs and VPC Flow logs to come from different sub accounts and to be delivered to a single auditing account. However, the number of sub accounts keeps changing. The company also needs to index the logs in the auditing account to gather actionable insight.
How should a DevOps Engineer implement the solution to meet all of the company's requirements?
A. Use AWS Lambda to write logs to Amazon ES in the auditing account. Create an Amazon CloudWatch subscription filter and use Amazon Kinesis Data Streams in the sub accounts to stream the logs to the Lambda function deployed in the auditing account.
B. Use Amazon Kinesis Streams to write logs to Amazon ES in the auditing account. Create a CloudWatch subscription filter and use Kinesis Data Streams in the sub accounts to stream the logs to the Kinesis stream in the auditing account.
C. Use Amazon Kinesis Firehose with Kinesis Data Streams to write logs to Amazon ES in the auditing account. Create a CloudWatch subscription filter and stream logs from sub accounts to the Kinesis stream in the auditing account.
D. Use AWS Lambda to write logs to Amazon ES in the auditing account. Create a CloudWatch subscription filter and use Lambda in the sub accounts to stream the logs to the Lambda function deployed in the auditing account.