Amazon SWF: Where to put the decider logic?
You can deploy your deciders to the same EC2 instances as your activity workers. However, I would not recommend deploying the API service and all the workers to the same instances. In case of a spike in your workflows, your SWF workers could scale independently of your API service and vice versa. I think creating a separate Elastic Beanstalk configuration for your workers would make sense.

Categories : Amazon

AWS IAM user permission to a specific region and to a particular server
Your policy works. I tested it and successfully used it to start only specific instances. Some things to note: In your Resource section, be sure to replace accountid with your own 12-digit Account ID (available on your Billing/Account page). IAM only supports a limited number of resource-specific API calls; Stop, Start and Reboot are included, but the Describe calls are not resource-specific.
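As a rough sketch of that layout (the account ID, region, instance ID, user name and policy name below are placeholders, not values from the original question), you could keep Start/Stop/Reboot resource-specific while leaving the Describe calls open:
cat > start-stop-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:StartInstances", "ec2:StopInstances", "ec2:RebootInstances"],
      "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/i-0abc1234def567890"
    },
    {
      "Effect": "Allow",
      "Action": "ec2:Describe*",
      "Resource": "*"
    }
  ]
}
EOF
# Attach the policy inline to the IAM user
aws iam put-user-policy --user-name ops-user --policy-name StartStopSpecificInstance --policy-document file://start-stop-policy.json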

Categories : Amazon

EC2 and AMI create automation with a file to run
Launch the instance and wait for the status to become running. Get the IP/PublicDNS (as long as you can access the instance) and invoke scp -i your-private-key proxy_binary user@IP:dest-dir. You have to use expect to automate steps 3 and 4. Google for "expect to automate ssh and configure a machine". If you are not familiar with expect, there is some learning curve.

Categories : Amazon

What to do when I begin running out of storage space?
Amazon S3 is certainly a good option for storing images because: there is no limit on the amount of data you can store, and the images can be accessed directly from S3 rather than via a web server. An easy way to move existing data into Amazon S3 is to use the AWS Command Line Interface (CLI). This is free software that can call the AWS API for practically any service, including S3. As per the C
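For example, a one-time bulk upload with the AWS CLI could look like this (the local path and bucket name are made up for illustration):
aws s3 sync /var/www/images s3://my-image-bucket/images/
aws s3 sync only uploads files that are new or have changed, so the command can be re-run safely as more images accumulate.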

Categories : Amazon

Move files between Amazon S3 and Glacier and vice versa programmatically using the API
You can use the API to define lifecycle rules that archive files from Amazon S3 to Amazon Glacier, and you can use the API to retrieve a temporary copy of files archived to Glacier. However, you cannot use the API to tell Amazon S3 to move specific files into Glacier. There are two ways to use Amazon Glacier: directly via the Glacier API, which allows you to upload/download archives to/from Glac
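As a sketch of both supported paths (bucket, prefix and key names below are hypothetical), a lifecycle rule that archives a prefix to Glacier after 30 days, plus a restore request for one archived object, could look like:
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-old-files",
      "Status": "Enabled",
      "Filter": {"Prefix": "archive/"},
      "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}]
    }
  ]
}
EOF
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://lifecycle.json
# Request a temporary (7-day) copy of a single archived object
aws s3api restore-object --bucket my-bucket --key archive/report.csv --restore-request '{"Days": 7}'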

Categories : Amazon

Can I use AWS CloudFormation with a custom AMI?
Yes, definitely, that is a very common use case: Amazon Web Services (AWS) publishes many Amazon Machine Images (AMIs) that contain common software configurations for public use. In addition, the AWS developer community has published many custom AMIs. You can also create your own custom AMIs so that you can quickly and easily start new instances that have everything you need for your
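A minimal sketch of referencing a custom AMI from a template (the AMI ID, stack name and instance type are placeholders):
cat > custom-ami-stack.json <<'EOF'
{
  "Resources": {
    "AppServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": "t2.micro"
      }
    }
  }
}
EOF
aws cloudformation create-stack --stack-name custom-ami-demo --template-body file://custom-ami-stack.json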

Categories : Amazon

Bucket permissions on Amazon
The fact that "anyone can still access" your objects suggests that you have granted default access to objects, and you are then trying to Deny access that does not have a signature (Pre-signed URL). If you wish to only provide access to objects in Amazon S3 by using a Pre-Signed URL, then you do not require a Bucket Policy. To explain... By default, all objects in Amazon S3 are private. You can
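For reference, generating a Pre-Signed URL needs no Bucket Policy at all; with the AWS CLI it is a single command (bucket and key names here are made up):
aws s3 presign s3://my-private-bucket/photos/cat.jpg --expires-in 3600
This prints a URL that grants access to that one object for an hour, after which requests fall back to the default (private) behaviour.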

Categories : Amazon

AWS S3 CLI ACL public-read gives me 403 with sync command
In your second policy statement (the one with PutObject, ...), be sure to include a wildcard character for your object names: "Resource": ["arn:aws:s3:::bucket_name/*"] Did you know you can test your policies using the online Policy Simulator tool? http://docs.aws.amazon.com/IAM/latest/UsingPolicySimulatorGuide/iam-policy-simulator-guide.html

Categories : Amazon

EC2 security group setting for load balancer, auto scaling group
OutOfService indicates that your Elastic Load Balancer is either not ready or the instances are failing their Health Check. If you hover over the little "i" information icon, it will explain why an instance is not InService. Within your Elastic Load Balancer, take a look at the Health Check tab and confirm that it is configured correctly. It will either be checking a URL (e.g. /index.htm) or checkin

Categories : Amazon

EC2 Create Image EBS volume seems to remain the same
You are correct that the "Create Image" command creates an Amazon Machine Image (AMI). If you start a new EC2 instance from this AMI, it will contain the same data as the machine that was imaged. That's why you are copying your existing problem to the new instance. Check your disk space with df -h to confirm that you have space available. If you require more disk space, you can copy your disk to

Categories : Amazon

AWS S3 custom error document for access denied
Put your custom image in the error document as explained here. Make sure you read "Error Documents and Browser Behavior" at the bottom of the page.

Categories : Amazon

Ensure free data transfer from S3 to EC2
As long as your S3 buckets are in the same region as your EC2 instances, your data transfers are free. If you always deal with the same region, then you do not have to worry about the data transfer cost. Data Transfer OUT From Amazon S3 To Amazon EC2 in the same region: $0.000 per GB

Categories : Amazon

Can't connect to RDS from AWS EC2
Okay - I was able to figure out what was wrong. It seems that I should be using the following to connect to the RDS instance: mysql -u username -p -h endpoint databaseName This prompts me to enter my password, which then connects me to the database in the RDS instance that I have set up. Evidently, you have to specify the database that you want to connect to, and the port number is optional.

Categories : Amazon

Can't connect to new Amazon AWS CentOS instance
You cannot log in with the .pem file directly; you have to convert the .pem file to a .ppk file using PuTTYgen. Then give PuTTY your hostname and this newly generated .ppk file and it will allow you to log in. You have to use ec2-user as the user name.

Categories : Amazon

Shared File Systems between multiple AWS EC2 instances
You are right; at the moment it is not possible to attach an EBS volume to multiple instances. To create common storage for all instances, there are options like NFS, mounting S3 buckets, or using a distributed cluster filesystem like GlusterFS. However, in most cases you can simplify your setup. Try to offload static assets to another (static) domain or even host them on a website-enabled S3 bucket

Categories : Amazon

Error trying to change the root volume_size for an EC2 instance using Ansible
I got interested in answering this because you put in a (nearly) fully working example. I copied it locally, made small changes to work in my AWS account, and iterated to figure out the solution. I suspected a YAML+Ansible problem, tried a bunch of things and looked around. Michael DeHaan (creator of Ansible) said the complex argument/module style is required, as seen in the ec2 examples. Here's

Categories : Amazon

Why can't I access the internet from my private subnet on an AWS VPC?
There could be a lot of reasons because of various configuration errors, but the most common problem is neglecting to attach an Internet gateway to your VPC. By default, instances that you launch into a virtual private cloud (VPC) can't communicate with the Internet. You can enable access to the Internet from your VPC by attaching an Internet gateway to the VPC, ensuring that your instanc
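A rough CLI sequence for that fix (the VPC, gateway and route table IDs below are placeholders) is to create the gateway, attach it, and add a default route:
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0abc1234 --vpc-id vpc-0abc1234
aws ec2 create-route --route-table-id rtb-0abc1234 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0abc1234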

Categories : Amazon

On AWS, can we set up the load balancer to multicast?
There is no native AWS service that will do this. You can use a variety of reverse proxy tools. For example you can use the experimental iptables ROUTE target or something like this duplicator project. There are many ways to do it, but you'll need to roll your own solution.

Categories : Amazon

How to copy data in bulk from Kinesis -> Redshift
That is already done for you! If you use the Kinesis Connector Library, there is a built-in connector to Redshift: https://github.com/awslabs/amazon-kinesis-connectors Depending on the logic you have to process, the connector can be really easy to implement.

Categories : Amazon

AWS Outbound data transfer charges
Data Transfer OUT From Amazon EC2:
To Amazon S3, Amazon Glacier, Amazon DynamoDB, Amazon SES, Amazon SQS, or Amazon SimpleDB in the same AWS Region: $0.00 per GB
To Amazon EC2, Amazon RDS, Amazon Redshift or Amazon ElastiCache instances, Amazon Elastic Load Balancing, or Elastic Network Interfaces in the same Availability Zone:
Using a private IP address: $0.00 per GB
Using a public or Elastic IP

Categories : Amazon

What is the correct syntax for filtering by tag in describe-vpcs?
You got pretty close to solving it -- the only problem is that you are not specifying a valid filter for describe-vpcs. Here's the filter that would be relevant to your use case: tag:key=*value* - the key/value combination of a tag assigned to the resource. So when it asks for Name=string1,Values=string1,..., it expects: Name=tag:TagName Values=TagValue Try this instead; it works for me
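As a concrete example of that syntax (the tag value is hypothetical):
aws ec2 describe-vpcs --filters "Name=tag:Name,Values=my-vpc"
The same pattern works for any tag key, e.g. Name=tag:Environment,Values=production.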

Categories : Amazon

Querying in DynamoDB (with hash-and-range primary key) without providing hash key
Is there an effective way to do this in DynamoDB? It sounds like you are looking for Global Secondary Indexes (GSI). You have your table, which has:
Hash key: Category#Domain
Range key: GroupType#GroupName
Other attributes
Based off this table, it sounds like you want a GSI with:
Hash key: GroupType#GroupName
Range key: depends on design (not necessary in a GSI)
Other attributes that
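Once the GSI exists, you query it by index name instead of by the table's own hash key. A sketch with the AWS CLI (table, index and attribute names are assumptions, not taken from the question):
aws dynamodb query \
  --table-name Groups \
  --index-name GroupTypeIndex \
  --key-condition-expression "GroupKey = :g" \
  --expression-attribute-values '{":g": {"S": "GroupType#GroupName"}}'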

Categories : Amazon

elastic-beanstalk docker app not updating upon deploy
I wonder if you might try using the user-data input when you define your instances in Beanstalk? Something like this could fire off right at the end of boot:
#!/bin/bash
cd /app/dir/home
sudo docker pull username/container
... other things you may need to do ...
More that you can reference about user-data scripts and executables: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.

Categories : Amazon

Self password rotation - Redshift
PostgreSQL allows users to change their own passwords. See the ALTER USER documentation. Therefore, you cannot prevent users from changing their own password.
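In other words, any user can run the statement themselves, for example via psql against the cluster endpoint (endpoint, database, user and password below are placeholders):
psql -h my-cluster.abc123xyz.us-east-1.redshift.amazonaws.com -p 5439 -d mydb -U analyst -c "ALTER USER analyst PASSWORD 'NewStr0ngPassw0rd';"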

Categories : Amazon

After enabling client-to-node encryption, opscenter can't connect to cluster
Have you edited your cluster in OpsCenter to set it up for client-to-node encryption? In OpsCenter, click Settings > Cluster Connections. If you have multiple clusters, select yours from the dropdown. Check "Client to node encryption is enabled on my cluster" and enter the cert settings that follow. That should do it.

Categories : Amazon

Dynamically Insert Date into filename of Redshift Copy Command from S3
The Redshift COPY command expects an exact S3 path to a folder or file (s3://abc/def or s3://abc/def/ijk.csv), so you need to give the correct path for the file. You can write a simple Java/PHP or shell script (using Postgres drivers) to build the S3 path dynamically from the date, construct the query, and then fire it, so that the actual date value is substituted and there are no syntax errors.
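A minimal shell sketch of that idea (bucket, table, cluster endpoint and IAM role ARN are all placeholders) builds the dated S3 path first and then fires the COPY through psql:
#!/bin/bash
TODAY=$(date +%Y-%m-%d)
S3_PATH="s3://my-bucket/exports/${TODAY}/data.csv"
psql -h my-cluster.abc123xyz.us-east-1.redshift.amazonaws.com -p 5439 -d mydb -U loader \
  -c "COPY my_table FROM '${S3_PATH}' IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' CSV;"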

Categories : Amazon

Elastic Beanstalk rolling update timeout not honored
"Pause Time" relates to environment configuration changes made to instances. "Command timeouts" relates to commands executed while building the environment (for example, if you've customised the container). Neither has anything to do with rolling application updates or zero-downtime deployments. The documentation around this stuff is confusing and fragmented. For zero-downtime application deployments, AWS

Categories : Amazon

EC2 with Docker and EBS volume, mount EBS volume inside container during init
Short version: This is not an answer, just a little help towards it with clarification on how Docker works. Not directly related, but your Dockerfile should probably look like this:
FROM dockerfile/java:oracle-java8
# Expose the port 9000
EXPOSE 9000
# Volumes
VOLUME /root/wisdom/logs
VOLUME /root/wisdom/application
# Change workdir.
WORKDIR /root/wisdom
RUN touch /root/wisdom.log
# Add the

Categories : Amazon

Bees with machine gun using Amazon free tier
The problem is, I think, this line in your boto config file: ec2_region_endpoint = ec2-54-148-72-140.us-west-2.compute.amazonaws.com This is telling boto that it should try to use this hostname to make EC2 requests, but this appears to be the hostname of an EC2 instance, which will not be able to reply to these requests. Just remove this line and let boto use the pre-configured host name for the

Categories : Amazon

How to Generate AWS DynamoDB Credential Key with STS API which is Limited to Insert and Update One Key/Row
Yes, you can write a policy to restrict access to specific keys via fine-grained access control.
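For illustration, that kind of fine-grained policy conditions the allowed item keys with dynamodb:LeadingKeys (table ARN, region and key value are placeholders); it can then be passed as the inline policy when requesting temporary credentials:
cat > fine-grained-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:PutItem", "dynamodb:UpdateItem"],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/MyTable",
      "Condition": {
        "ForAllValues:StringEquals": {
          "dynamodb:LeadingKeys": ["allowed-hash-key-value"]
        }
      }
    }
  ]
}
EOF
aws sts get-federation-token --name app-user --policy file://fine-grained-policy.json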

Categories : Amazon

CloudFront - push group of files, when one file is accessed?
Amazon CloudFront uses a pull model at its edge locations. This means that content is only loaded into an edge location when a request is received. There is no capability to "push" content into edge locations. (This differs from Akamai, which does use a "push" model.) In theory, you could do it by requesting the URL at every edge location, but requests are automatically directed to the closest edge

Categories : Amazon

Point Domain name to AWS EC2 instance
To point a domain name to an EC2 instance, you can use either Route 53 or your own DNS service. In both cases: assign an Elastic IP address to your EC2 instance, then in Route 53 or your own DNS service define a domain/subdomain that points to this IP address. The above assumes that you wish to point to a single EC2 instance. If you have multiple instances with a Load Balancer in front, you will req
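With Route 53, the step that points the name at the Elastic IP looks roughly like this from the CLI (hosted zone ID, domain name and IP address are placeholders):
cat > upsert-a-record.json <<'EOF'
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }
  ]
}
EOF
aws route53 change-resource-record-sets --hosted-zone-id Z1D633PJN98FT9 --change-batch file://upsert-a-record.json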

Categories : Amazon

Why do AWS elastic beanstalk rolling version updates still have a 2min downtime with 503s?
This is an old question, but anyway. Unfortunately, the 'rolling updates' in Elastic Beanstalk only apply to configuration changes, not to code deployments, as per the comment from Amazon on this thread: https://forums.aws.amazon.com/thread.jspa?messageID=502158 Alternative deployment strategies are detailed here: http://www.hudku.com/blog/demystified-zero-downtime-with-amazon/ There is opportun

Categories : Amazon

Cannot install inotify on Amazon EC2
I bumped into this issue as well -- it's a bit easier than grabbing an RPM or the source and compiling. Amazon Linux AMIs come with the EPEL repository source, but it's disabled, so you enable it:
sudo yum-config-manager --enable epel
Then run a regular yum update and install the toolset:
sudo yum update
sudo yum install inotify-tools

Categories : Amazon

Several open connections in RabbitMQ with different Java client version numbers
Sorry for not updating this post. I found out that another app had a connection leak; it was using the rabbit 3.2.4 client. Because rabbit is behind an ELB, it was hard to track down the faulty application. This issue is fixed now. Thanks -Parshu

Categories : Amazon

Unable to download AWS CodeDeploy Agent Install file
I figured out the problem. According to the CodeDeploy documentation on IAM instance profiles, http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-create-iam-instance-profile.html the following permissions need to be granted to your IAM instance profile:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Effect": "Allow",

Categories : Amazon

AWS OpsWorks: use Redis instead of Memcached
If you want to stick to OpsWorks only, then there is no out-of-the-box Redis there. You can, however, create custom recipes and make your own Redis layer. If you do not have to stick to OpsWorks only, then ElastiCache can use Memcached or Redis.

Categories : Amazon

SQL Error (1045) in statement #0: Access denied for user 'root'@'
I faced a similar problem, so I created a new user, granted it all the privileges, and after that I was able to connect from the EC2 instance. You can do the same; below are the steps:
CREATE USER 'username'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'username'@'your_public_ip' WITH GRANT OPTION;
CREATE USER 'username'@'%' IDENTIFIED BY

Categories : Amazon

Running AWS commands from commandline on a ShellCommandActivity
OK, found it: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-concepts-roles.html The resourceRole in the default object of your pipeline will be the one assigned to resources (Ec2Resource) that are created as part of the pipeline activation. The default one is configured to have all your permissions, and the AWS command line and SDK packages automatically look for those crede

Categories : Amazon

