April 29, 2017

Introduction to AWS CloudFormation

What is Cloud Computing with Amazon Web Services?

First and Follow solved example in Hindi with notes

Forward reference problem (in Hindi)

Eliminate left recursion and left factoring in Hindi (compiler construct...

Compiler Design Introduction Lecture 1 (System programming compiler cons...

GTU exam paper sem 8: (Artificial Intelligence '17)

Artificial Intelligence (code 2180703)
Sem 8
BE Computer Engineering
Exam paper 2017

April 28, 2017

Alpha Beta pruning in Artificial Intelligence in Hindi | Solved Example ...

Alpha beta pruning in artificial intelligence

Hill Climbing in Artificial Intelligence

This is a variety of depth-first (generate-and-test) search. Feedback is used here to decide on the direction of motion in the search space. In depth-first search, the test function merely accepts or rejects a solution. But in hill climbing, the test function is provided with a heuristic function that estimates how close a given state is to the goal state. The hill climbing procedure is as follows:

 

1. Generate the first proposed solution as done in the depth-first procedure. See if it is a solution. If so, quit; else continue.



2. From this solution, generate a new set of solutions using some applicable rules.



3. For each element of this set:



(i) Apply the test function. If it is a solution, quit.



(ii) Else, see whether it is closer to the goal state than any solution already generated. If yes, remember it; else discard it.



4. Take the best element generated so far and use it as the next proposed solution. This step corresponds to a move through the problem space in the direction of the goal state.



5. Go back to step 2.
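The steps above can be sketched in Python. This is a minimal illustrative sketch, not a definitive implementation: it assumes a state space where `neighbors` stands in for the problem's rule set and `heuristic` for the test function's closeness estimate (higher means closer to the goal); both names are hypothetical.

```python
def hill_climb(start, neighbors, heuristic, is_goal, max_steps=1000):
    """Simple hill climbing: repeatedly move to the best-scoring neighbor."""
    current = start
    for _ in range(max_steps):
        if is_goal(current):                       # steps 1 / 3(i): test for a solution
            return current
        candidates = neighbors(current)            # step 2: generate new solutions
        if not candidates:
            return current
        best = max(candidates, key=heuristic)      # steps 3(ii) / 4: keep the closest to goal
        if heuristic(best) <= heuristic(current):  # no neighbor improves: local maximum or plateau
            return current
        current = best                             # step 5: repeat from the new solution
    return current

# Toy example: climb toward x = 5 on the integers; heuristic = -(distance to 5)
result = hill_climb(
    start=0,
    neighbors=lambda x: [x - 1, x + 1],
    heuristic=lambda x: -abs(x - 5),
    is_goal=lambda x: x == 5,
)
print(result)  # 5
```

Note how the early return when no neighbor improves is exactly where the local-maximum, plateau, and ridge problems discussed next show up in practice.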



Sometimes this procedure may lead to a position that is not a solution, but from which there is no move that improves things. This will happen if we have reached one of the following three states.



(a) A "local maximum", which is a state better than all its neighbors, but not better than some other states farther away. Local maxima sometimes occur within sight of a solution. In such cases they are called "foothills".



(b) A "plateau", which is a flat area of the search space, in which neighboring states have the same value. On a plateau, it is not possible to determine the best direction in which to move by making local comparisons.



(c) A "ridge", which is an area of the search space that is higher than the surrounding areas, but cannot be traversed in a single move.



To overcome these problems we can

(a) Backtrack to some earlier node and try a different direction. This is a good way of dealing with local maxima.



(b) Make a big jump in some direction to a new area of the search space. This can be done by applying two or more rules, or the same rule several times, before testing. This is a good strategy for dealing with plateaus and ridges.



Hill climbing becomes inefficient in large problem spaces, and when combinatorial explosion occurs. But it is useful when combined with other methods.

Artificial intelligence min-max algorithm with solved example

Forward reference problem and compiler (in Hindi)

April 27, 2017

Phases of compiler in Hindi

Assembly language statement

What is an Assembler and Assembly Language (in Hindi)

SPCC basic concepts (assembler, compiler, preprocessor, editor, loader, linker)...

A* algorithm with a solved example in Hindi (Artificial intelligence)

Artificial intelligence BE Mid sem paper' 17

Artificial intelligence (2180703)
SEM 8

Genetic algorithm in artificial intelligence

Hill climbing in artificial intelligence

Award winning short film Ambani the investor

Depth first search and depth limit search with solved example in artific...

Breadth first search with solved example in artificial intelligence

April 25, 2017

AMAZON CLOUD DATABASE SERVICE - Basic concept of Amazon DynamoDB


         DynamoDB is a non-relational database service developed at Amazon.

         Amazon DynamoDB stores data on solid state drives (SSDs) and replicates it synchronously across multiple Availability Zones.

         DynamoDB is a fully managed NoSQL database with high performance and scalability.

         Amazon DynamoDB offers high availability, reliability, and incremental scalability, with no limits on dataset size.

         It provides fast and predictable performance with seamless scalability.

 Features of DynamoDB

        DynamoDB is accessible via simple web service APIs.

        Serves any level of traffic.

        Stores and retrieves any amount of data.

        Pay for what you use.


Benefits of DynamoDB

        Fast, consistent performance

        Highly scalable

        Fully managed

        Event-driven programming

        Fine-grained access control

        Flexible

        NoSQL database

        Single-digit millisecond latency

        Massive and seamless scalability

        Low cost


Data Model

        Tables, items, and attributes

        A table is a collection of items.

        An item is a collection of attributes (name-value pairs).

        A primary key (hash key) is required.
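To make the table/item/attribute model concrete, the sketch below builds an item as a plain Python dict in the shape the low-level DynamoDB API expects (attribute names mapped to typed values). The table and attribute names here are hypothetical, and no actual AWS call is made.

```python
# Hypothetical "Users" table whose partition (hash) key is "UserId".
# In the low-level DynamoDB API, every attribute value carries a type
# descriptor: "S" = string, "N" = number (sent as a string), etc.
def make_user_item(user_id, name, age):
    return {
        "UserId": {"S": user_id},   # primary (hash) key attribute: required
        "Name":   {"S": name},      # ordinary string attribute
        "Age":    {"N": str(age)},  # numbers are transmitted as strings
    }

item = make_user_item("u-001", "Asha", 30)
print(item["UserId"])  # {'S': 'u-001'}
```

With boto3, a dict of this shape could be passed as the `Item` argument of the DynamoDB client's `put_item` call.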



DEMO OF DynamoDB 




Basic concepts of Amazon Redshift


• Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse that makes it simple and cost-effective to analyze all your data using your existing business intelligence tools.


• Start small for $0.25 per hour with no commitments and scale to petabytes for $1,000 per terabyte per year, less than a tenth the cost of traditional solutions.


• Customers typically see 3x compression, reducing their costs to $333 per uncompressed terabyte per year.
• Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse service.


• Amazon Redshift provides a simple and cost-effective way to analyze all your data using existing business intelligence (BI) and SQL clients, regardless of the size of your data.




Benefits of  Amazon Redshift

  • Fast
  • Cheap
  • Simple
  • Elastic
  • Compatible


Features of Amazon Redshift

        Optimized for Data Warehousing

        Petabyte Scale

        No Up-Front Costs

        Fault Tolerant

        Automated Backups

        Fast Restores

        Encryption

        Network Isolation

        Audit and Compliance   


Amazon ElastiCache

• Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud.
• The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores.
• Amazon ElastiCache supports two open-source in-memory engines:
1. Redis
• a fast, open source, in-memory data store and cache.
• Redis-compatible in-memory service that delivers the ease-of-use and power of Redis along with the availability, reliability and performance suitable for the most demanding applications.
• ElastiCache for Redis is fully managed, scalable, and secure - making it an ideal candidate to power high-performance use cases such as Web, Mobile Apps, Gaming, Ad-Tech, and IoT.
2. Memcached
• a widely adopted memory object caching system.
• ElastiCache is protocol compliant with Memcached, so popular tools that you use today with existing Memcached environments will work seamlessly with the service.
• Amazon ElastiCache automatically detects and replaces failed nodes, reducing the overhead associated with self-managed infrastructures and provides a resilient system that mitigates the risk of overloaded databases, which slow website and application load times.


Benefits of Amazon Elasticache


  • Easy to use
  • Scalable
  • Secure and hardened
  • Extreme performance
  • Redis and Memcached support
  • Highly available and reliable
  • Compatible
  • Fully managed


Featured Amazon ElastiCache Customers:







Amazon Machine Images (AMI)

An Amazon Machine Image (AMI) provides the information required to launch an instance, which is a virtual server in the cloud. You specify an AMI when you launch an instance, and you can launch as many instances from the AMI as you need. You can also launch instances from as many different AMIs as you need.
An AMI includes the following:
  • A template for the root volume for the instance (for example, an operating system, an application server, and applications)
  • Launch permissions that control which AWS accounts can use the AMI to launch instances
  • A block device mapping that specifies the volumes to attach to the instance when it's launched.
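The three parts of an AMI map directly onto an EC2 launch request. As a sketch, the dict below has the shape of the keyword arguments boto3's EC2 `run_instances` call accepts; the AMI ID, device names, and volume sizes are hypothetical, and nothing is sent to AWS.

```python
# Shape of an EC2 launch request: the AMI supplies the root-volume template,
# launch permissions decide whether this account may use the ImageId at all,
# and BlockDeviceMappings specifies the volumes attached at launch time.
launch_params = {
    "ImageId": "ami-0123456789abcdef0",   # hypothetical AMI ID
    "InstanceType": "t2.micro",
    "MinCount": 1,
    "MaxCount": 1,
    "BlockDeviceMappings": [
        {
            "DeviceName": "/dev/xvda",    # root volume from the AMI template
            "Ebs": {"VolumeSize": 8, "VolumeType": "gp2"},
        },
        {
            "DeviceName": "/dev/xvdf",    # extra data volume attached at launch
            "Ebs": {"VolumeSize": 100, "VolumeType": "gp2"},
        },
    ],
}
print(len(launch_params["BlockDeviceMappings"]))  # 2
```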

Private DNS Server (Route 53)

Amazon Route 53 provides three main functions:


  • Domain registration
    • allows you to register domain names
  • Domain Name System (DNS) service
    • translates friendly domain names like www.example.com into IP addresses like 192.0.2.1
    • responds to DNS queries using a global network of authoritative DNS servers, which reduces latency
    • can route Internet traffic to CloudFront, Elastic Beanstalk, ELB, or S3. There’s no charge for DNS queries to these resources
  • Health checking
    • can monitor the health of resources such as web and email servers.
    • sends automated requests over the Internet to the application to
      verify that it’s reachable, available, and functional
    • CloudWatch alarms can be configured for the health checks to send notification when a resource becomes unavailable.
    • can be configured to route Internet traffic away from resources that are unavailable

Amazon simple storage service (S3)

Amazon S3

  • highly-scalable, reliable, and low-latency data storage infrastructure at very low costs.

  • provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from within Amazon EC2 or from anywhere on the web.

  • allows you to write, read, and delete objects containing from 1 byte to 5 terabytes of data each.

  • number of objects you can store in an Amazon S3 bucket is virtually unlimited.

  • highly secure, supporting encryption at rest, and providing multiple mechanisms for fine-grained control of access to Amazon S3 resources.

  • highly scalable, allowing concurrent read or write access to Amazon S3 data by many separate clients or application threads.

  • provides data lifecycle management capabilities, allowing users to define rules to automatically archive Amazon S3 data to Amazon Glacier, or to delete data at end of life.
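The lifecycle rules in the last bullet are expressed as a configuration document. The sketch below builds one as a plain dict in the shape the S3 lifecycle API expects: objects under a hypothetical `logs/` prefix are archived to Glacier after 90 days and deleted after 365. No actual AWS call is made.

```python
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-then-expire-logs",  # hypothetical rule name
            "Filter": {"Prefix": "logs/"},     # applies to keys under logs/
            "Status": "Enabled",
            "Transitions": [
                {"Days": 90, "StorageClass": "GLACIER"}  # archive to Glacier
            ],
            "Expiration": {"Days": 365},       # delete at end of life
        }
    ]
}
rule = lifecycle_config["Rules"][0]
print(rule["Expiration"]["Days"])  # 365
```

With boto3, a dict of this shape could be passed as the `LifecycleConfiguration` argument of the S3 client's `put_bucket_lifecycle_configuration` call.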

AWS Elasticsearch

  • Amazon Elasticsearch Service is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS Cloud.
  • Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analytics
  • Elasticsearch provides
    • real-time, distributed search and analytics engine
    • provisions all the resources for an Elasticsearch cluster and launches the cluster
    • easy to use cluster scaling options
    • provides self-healing clusters that automatically detect and replace failed Elasticsearch nodes, reducing the overhead associated with self-managed infrastructures
    • domain snapshots to back up and restore ES domains and replicate domains across AZs
    • data durability
    • enhanced security with IAM access control
    • node monitoring
    • multiple configurations of CPU, memory, and storage capacity, known as instance types
    • storage volumes for the data using EBS volumes
    • Multiple geographical locations for your resources, known as regions and Availability Zones
    • ability to span cluster nodes across two AZs in the same region, known as zone awareness,  for high availability and redundancy
    • dedicated master nodes to improve cluster stability
    • data visualization using the Kibana tool
    • integration with CloudWatch for monitoring ES domain metrics
    • integration with CloudTrail for auditing configuration API calls to ES domains
    • integration with S3, Kinesis, and DynamoDB for loading streaming data
    • ability to handle structured and unstructured data
    • HTTP REST APIs
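Because the service exposes Elasticsearch's HTTP REST APIs, queries are plain JSON documents. The sketch below builds a simple search body for a hypothetical log-analytics index (error-level lines from the last hour, aggregated by service); the index and field names are assumptions, and nothing is sent to a cluster.

```python
import json

search_body = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"level": "ERROR"}},                 # only error logs
                {"range": {"timestamp": {"gte": "now-1h"}}},  # last hour
            ]
        }
    },
    "aggs": {
        "by_service": {"terms": {"field": "service.keyword"}}  # count per service
    },
    "size": 10,
}
payload = json.dumps(search_body)
print("by_service" in payload)  # True
```

A body like this would be POSTed to the domain's `_search` endpoint, e.g. `https://<domain-endpoint>/<index>/_search`, over HTTPS.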

April 24, 2017

AWS Glacier

  • Amazon Glacier is a storage service optimized for archival, infrequently used data, or “cold data.”
  • Glacier is an extremely low-cost storage service that provides durable storage with security features for data archiving and backup.
  • Glacier is designed to provide average annual durability of 99.999999999% for an archive.
  • Glacier redundantly stores data in multiple facilities and on multiple devices within each facility.
  • To increase durability, Glacier synchronously stores the data across multiple facilities before returning SUCCESS on uploading archives.
  • Glacier performs regular, systematic data integrity checks and is built to be automatically self-healing.
  • Glacier enables customers to offload the administrative burdens of operating and scaling storage to AWS, without having to worry about capacity planning, hardware provisioning, data replication, hardware failure detection and recovery, or time-consuming hardware migrations.

  • Glacier is a great storage choice when low storage cost is paramount, with data rarely retrieved, and retrieval latency of several hours is acceptable.
  • S3 should be used if applications require fast, frequent, real-time access to the data

  • Glacier can store virtually any kind of data in any format.
  • All data is encrypted on the server side with Glacier handling key management and key protection. It uses AES-256, one of the strongest block ciphers available
  • Glacier allows interaction through the AWS Management Console, Command Line Interface (CLI), SDKs, or REST-based APIs.
    • management console can only be used to create and delete vaults.
    • the rest of the operations, to upload and download data and to create retrieval jobs, need the CLI, SDKs, or REST-based APIs
  • Use cases include
    • Digital media archives
    • Data that must be retained for regulatory compliance
    • Financial and healthcare records
    • Raw genomic sequence data
    • Long-term database backups

AWS Billing and Cost Management

AWS Billing and Cost Management

  • AWS Billing and Cost Management is the service that you use to pay AWS bill, monitor your usage, and budget your costs

Analyzing Costs with Graphs

  • AWS provides the Cost Explorer tool, which allows filtering graphs by API operation, Availability Zone, AWS service, custom cost allocation tag, EC2 instance type, purchase option, region, usage type, usage type group, or, if Consolidated Billing is used, by linked account.

Budgets

  • Budgets can be used to track AWS costs to see usage-to-date and current estimated charges from AWS
  • Budgets use the cost visualization provided by Cost Explorer to show the status of the budgets and to provide forecasts of your estimated costs.
  • Budgets can be used to create CloudWatch alarms that notify when you go over your budgeted amounts, or when the estimated costs exceed budgets
  • Notifications can be sent to an SNS topic and to email addresses associated with your budget notification

Cost Allocation Tags

  • Tags can be used to organize AWS resources, and cost allocation tags to track the AWS costs on a detailed level.
  • Upon cost allocation tags activation, AWS uses the cost allocation tags to organize the resource costs on the cost allocation report making it easier to categorize and track your AWS costs.
  • AWS provides two types of cost allocation tags:
    • AWS-generated tags, which AWS defines, creates, and applies for you,
    • and user-defined tags, which you define, create, and apply yourself.
  • Both types of tags must be activated separately before they can appear in Cost Explorer or on a cost allocation report

Alerts on Cost Limits

  • CloudWatch can be used to create billing alerts when the AWS costs exceed specified thresholds
  • When the usage exceeds threshold amounts, AWS sends an email notification

Consolidated Billing


  • Consolidated billing enables consolidating payments from multiple AWS accounts (Linked Accounts) within the organization to a single account by designating it to be the Payer Account.
  • Consolidated billing
    • is strictly an accounting and billing feature.
    • allows receiving a combined view of charges incurred by all the associated accounts as well as each of the accounts.
    • is not a method for controlling accounts, or provisioning resources for accounts.
  • Payer account is billed for all charges of the linked accounts.
  • Each linked account is still an independent account in every other way
  • Payer account cannot access data belonging to the linked account owners
  • However, access to the Payer account users can be granted through Cross Account Access roles
  • AWS limits work on the account level only and AWS support is per account only

AWS IAM Overview

  • AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources for your users.

  • IAM is used to control :
    • Identity – who can use your AWS resources (authentication)
    • Access – what resources they can use and in what ways (authorization)

  • IAM can also keep your account credentials private.

  • With IAM, multiple IAM users can be created under the umbrella of the AWS account, or temporary access can be enabled through identity federation with a corporate directory or third-party providers

  • IAM also enables access to resources across AWS accounts.

AWS CloudWatch

  • AWS CloudWatch monitors AWS resources and applications in real-time.

  • CloudWatch can be used to collect and track metrics, which are the variables to be measured for resources and applications.

  • CloudWatch alarms can be configured
    • to send notifications or
    • to automatically make changes to the resources based on defined rules

  • In addition to monitoring the built-in metrics that come with AWS, custom metrics can also be monitored

  • CloudWatch provides system-wide visibility into resource utilization, application performance, and operational health.

  • By default, CloudWatch stores the log data indefinitely, and the retention can be changed for each log group at any time

  • CloudWatch Alarm history is stored for only 14 days
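An alarm of the kind described above is defined by a metric, a threshold, and the actions to take. The sketch below builds the parameters as a plain dict in the shape boto3's CloudWatch `put_metric_alarm` call accepts; the alarm name and SNS topic ARN are hypothetical, and no call is made.

```python
alarm_params = {
    "AlarmName": "high-cpu",                      # hypothetical alarm name
    "Namespace": "AWS/EC2",                       # built-in EC2 metrics
    "MetricName": "CPUUtilization",
    "Statistic": "Average",
    "Period": 300,                                # evaluate in 5-minute windows
    "EvaluationPeriods": 2,                       # require two consecutive breaches
    "Threshold": 80.0,                            # percent CPU
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": [
        "arn:aws:sns:us-east-1:123456789012:ops-alerts"  # hypothetical SNS topic
    ],
}
print(alarm_params["Threshold"])  # 80.0
```

The same dict shape, with an action targeting an Auto Scaling policy instead of an SNS topic, is how alarms automatically make changes to resources.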

Auto Scaling & Elastic Load Balancer

  • Auto Scaling dynamically adds and removes EC2 instances, while Elastic Load Balancing manages incoming requests by optimally routing traffic so that no one instance is overwhelmed.
  • Auto Scaling helps to automatically increase the number of EC2 instances when the user demand goes up, and decrease the number of EC2 instances when demand goes down
  • ELB service helps to distribute the incoming web traffic (called the load) automatically among all the running EC2 instances
  • ELB uses load balancers to monitor traffic and handle requests that come through the Internet.
Using ELB & Auto Scaling

  • makes it easy to route traffic across a dynamically changing fleet of EC2 instances
  • load balancer acts as a single point of contact for all incoming traffic to the instances in an Auto Scaling group.
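A common way to wire the two services together is a target-tracking scaling policy: Auto Scaling adds or removes instances to hold a metric near a target value. The sketch below shows such a policy as a plain dict in the shape boto3's Auto Scaling `put_scaling_policy` call accepts; the group name is hypothetical, and nothing is sent to AWS.

```python
scaling_policy = {
    "AutoScalingGroupName": "web-asg",        # hypothetical Auto Scaling group
    "PolicyName": "keep-cpu-at-50",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # add instances above ~50% average CPU, remove below
    },
}
print(scaling_policy["PolicyType"])  # TargetTrackingScaling
```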