|Amazon SWF: Where to put the decider logic?|
You can deploy your deciders to the same EC2
instances as your activity workers.
However, I would not recommend deploying the API
service and all the workers to the same instances.
Keeping them separate means that, in the event of a spike in your workflows, your SWF workers can scale independently of your API service, and vice versa.
I think creating a separate Elastic Beanstalk configuration for your workers would make sense.
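As a rough sketch, assuming you already use the EB CLI (the environment name below is hypothetical), a dedicated worker-tier environment can be created with:
eb create swf-workers --tier worker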
|Aws IAM user permission to specific region and to a particular server|
Your policy works. I tested it and successfully
used it to start only specific instances.
Some things to note:
In your Resource section, be sure to substitute your own 12-digit Account ID for accountid (available on your Billing/Account page)
IAM only supports resource-level permissions for a limited number of EC2 API calls. Stop, Start and Reboot are included, but the Describe calls are not, so those have to be allowed against all resources ("Resource": "*").
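A minimal sketch of such a policy, attached with the CLI (the user name, region, account ID and instance ID below are placeholders):
aws iam put-user-policy --user-name my-operator --policy-name start-stop-one-instance --policy-document '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["ec2:StartInstances", "ec2:StopInstances", "ec2:RebootInstances"],
    "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/i-0123456789abcdef0"
  }]
}'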
|EC2 and AMI create automation with a file to run|
Launch the instance and wait for the status to change to running.
scp: Get the IP/PublicDNS (as long as you can
access the instance) and invoke scp -i
scp -i your-private-key proxy_binary
You have to use expect to automate the ssh and configuration steps (3 and 4). Google for "expect to automate ssh and configure a machine". If you are not familiar with expect, there is a bit of a learning curve.
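For the launch-and-copy part, a rough sketch with the AWS CLI (the AMI ID, key name and user name are placeholders; proxy_binary is the file from the question):
INSTANCE_ID=$(aws ec2 run-instances --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro --key-name my-key \
  --query 'Instances[0].InstanceId' --output text)
# wait until the instance passes its status checks
aws ec2 wait instance-status-ok --instance-ids "$INSTANCE_ID"
HOST=$(aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
  --query 'Reservations[0].Instances[0].PublicDnsName' --output text)
# copy the binary to the new instance
scp -i my-key.pem proxy_binary ec2-user@"$HOST":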
|What to do when I begin running out of storage space?|
Amazon S3 is certainly a good option for storing your images:
There is no limit on the amount of data you can store.
The images can be accessed directly from S3 rather than via a web server.
An easy way to move existing data into Amazon S3
is to use the Amazon Command Line Interface (CLI).
This is free software that can call the AWS API
for practically any service, including S3.
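For example, a one-off copy of an existing images directory could look like this (the local path and bucket name are placeholders):
aws s3 sync /var/www/images s3://my-image-bucket/images/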
As per the C
|Move files between amazon S3 to Glacier and vice versa programatically using API|
You can use the API to define lifecycle rules that
archive files from Amazon S3 to Amazon Glacier and
you can use the API to retrieve a temporary copy
of files archived to Glacier. However, you cannot
use the API to tell Amazon S3 to move specific
files into Glacier.
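As an illustration, a lifecycle rule that archives objects under a given prefix after 30 days might be set with the CLI like this (bucket name and prefix are placeholders):
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration '{
  "Rules": [{
    "ID": "archive-old-files",
    "Status": "Enabled",
    "Filter": {"Prefix": "logs/"},
    "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}]
  }]
}'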
There are two ways to use Amazon Glacier:
Directly via the Glacier API, which allows you to upload/download archives to/from Glacier
Via Amazon S3 lifecycle rules, which transition objects into the Glacier storage class automatically (as described above)
|Can I use AWS CloudFormation with a custom AMI?|
Yes, definitely, that is a very common use case:
Amazon Web Services (AWS) publishes many Amazon
Machine Images (AMIs)
that contain common software configurations for
public use. In
addition, the AWS developer community has
published many custom AMIs.
You can also create your own custom AMIs so that
you can quickly and
easily start new instances that have everything
you need for your
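A minimal sketch of using a custom AMI with CloudFormation, assuming your template declares an AmiId parameter that is passed to the instance's ImageId property (the stack name, template file and AMI ID are placeholders):
aws cloudformation create-stack --stack-name my-stack \
  --template-body file://template.yaml \
  --parameters ParameterKey=AmiId,ParameterValue=ami-0123456789abcdef0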
|Bucket permissions on Amazon|
The fact that "anyone can still access" your
objects suggests that you have granted default
access to objects, and you are then trying to Deny
access that does not have a signature (a Pre-Signed URL).
If you wish to only provide access to objects in
Amazon S3 by using a Pre-Signed URL, then you do
not require a Bucket Policy. To explain...
By default, all objects in Amazon S3 are private.
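So simply leave the objects private and hand out time-limited links. For example, a one-hour link can be generated with the CLI (bucket and key are placeholders):
aws s3 presign s3://my-bucket/private/photo.jpg --expires-in 3600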
|AWS S3 CLI ACL public-read gives me 403 with sync command|
In your second policy statement (the one with
PutObject,...), be sure to include a wildcard
character for your object names.
"Resources" : [ "bucket_name/*" ]
Do you know you can test your policies using the online Policy Simulator tool?
|EC2 security group setting for load balancer, auto scaling group|
OutofService indicates that your Elastic Load
Balancer is either not ready, or the instances are
failing their Health Check. If you point to the
little "i" information icon, it will explain why
an instance is not InService.
Within your Elastic Load Balancer, take a look at
the Health Check tab and confirm that it is
configured correctly. It will either be checking a URL (eg /index.htm) or checking a TCP port.
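If you prefer the command line, the same information is available there too (the load balancer name is a placeholder):
# show why each instance is OutOfService
aws elb describe-instance-health --load-balancer-name my-load-balancer
# adjust the health check if it points at the wrong target
aws elb configure-health-check --load-balancer-name my-load-balancer \
  --health-check Target=HTTP:80/index.htm,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2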
|EC2 Create Image EBS volume seems to remain the same|
You are correct that the "Create Image" command
creates an Amazon Machine Image (AMI). If you
start a new EC2 instance with this AMI, it will
contain the same data as the machine that was
imaged. That's why you are copying your existing problem to the new instance.
Check your disk space with df -h to confirm that
you have space available.
If you require more disk space, you can copy your data to a larger EBS volume, along the lines of the sketch below.
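A rough outline of that process with the CLI (volume IDs, size, device names and AZ are placeholders; for a root volume you would stop the instance first):
# snapshot the existing volume, then create a larger volume from it
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "before resize"
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --size 100 --availability-zone us-east-1a
# swap the volumes on the instance
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/xvdf
# then, on the instance, grow the filesystem (ext4 directly on the device in this example)
sudo resize2fs /dev/xvdf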
|AWS S3 custom error document for access denied|
Put your custom image in the custom error document, as explained in the documentation.
Make sure you read "Error Documents and Browser Behavior" at the bottom of the page.
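Setting the error document itself is a one-liner with the CLI (bucket and file names are placeholders):
aws s3 website s3://my-bucket/ --index-document index.html --error-document error.html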
|Ensure free data transfer from S3 to EC2|
As long as your S3 buckets are in the same region as your EC2 machines, your data transfers are free. If you always deal with the same region, then you do not have to worry about the data transfer cost.
Data Transfer OUT From Amazon S3 To Amazon EC2 in the same region: $0.000 per GB
|Can't connect to RDS from AWS EC2|
Okay - I was able to figure out what was wrong.
It seems that I should be using the following to
connect to the RDS:
mysql -u username -p -h endpoint databaseName
This prompts me to enter in my password, which
then connects me to my database above in the RDS
instance that I have set up. Evidently, you have
to specify the database that you want to connect
to and the port number is optional.
|Cant connect to new Amazon AWS Centos instance|
You cannot log in with the .pem file from PuTTY directly; you have to convert the .pem file to a .ppk file using PuTTYgen. Then give PuTTY your hostname and this newly generated .ppk file, and it will let you log in. You have to use ec2-user as the user name.
|Shared File Systems between multiple AWS EC2 instances|
You are right, at the moment it is not possible to
add an EBS volume to multiple instances. To create
a common storage for all instances, there are
options like NFS, mounting S3 buckets or using a
distributed cluster filesystem like GlusterFS.
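For the NFS route, the gist looks roughly like this (IP addresses, paths and the service name vary by distribution; all values here are placeholders):
# on the instance that exports the shared directory
sudo yum install -y nfs-utils
echo "/var/www/shared 10.0.0.0/16(rw,sync,no_root_squash)" | sudo tee -a /etc/exports
sudo service nfs start   # or: systemctl start nfs-server, depending on the distro
sudo exportfs -ra
# on every instance that consumes the share
sudo mount -t nfs 10.0.0.10:/var/www/shared /var/www/shared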
However, in most cases you can simplify your setup. Try to offload static assets to another (static) domain, or even host them on a website-enabled S3 bucket.
|error trying to change the root volume_size for EC2 instance using ansible|
I got interested in answering this because you put
in a (nearly) fully working example. I copied it
locally, made small changes to work in my AWS
account, and iterated to figure out the solution.
I suspected a YAML+Ansible problem. I tried a
bunch of things and looked around. Michael DeHaan
(creator of Ansible) said the complex
argument/module style is required as seen in the
ec2 examples. Here's
|Why can't I access the internet from my private subnet on an AWS VPC?|
There could be a lot of reasons, given the various possible configuration errors, but the most common problem is neglecting to attach an internet gateway to your VPC.
By default, instances that you launch into a
virtual private cloud
(VPC) can't communicate with the Internet. You
can enable access to
the Internet from your VPC by attaching an
Internet gateway to the
VPC, ensuring that your instanc
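If the gateway really is missing, attaching one and routing to it looks roughly like this with the CLI (the VPC and route table IDs are placeholders):
IGW_ID=$(aws ec2 create-internet-gateway --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --internet-gateway-id "$IGW_ID" --vpc-id vpc-0123456789abcdef0
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"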
|On AWS, can we setup the loadbalancer to multicast?|
There is no native AWS service that will do this.
You can use a variety of reverse proxy tools. For
example you can use the experimental iptables
ROUTE target or something like this duplicator
project. There are many ways to do it, but you'll
need to roll your own solution.
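As one concrete way of rolling your own, the mainline iptables TEE target (which superseded the experimental ROUTE target's tee mode) can mirror packets to a second host on the same subnet; the port and address below are placeholders:
# duplicate incoming UDP traffic on port 5000 to another instance on the same subnet
sudo iptables -t mangle -A PREROUTING -p udp --dport 5000 -j TEE --gateway 10.0.0.20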
|How to copy data in bulk from Kinesis -> Redshift|
That is already done for you!
If you use the Kinesis Connector Library, there is a built-in connector to Redshift.
Depending on the logic you have to apply, the connector can be really easy to implement.
|AWS Outbound data transfer charges|
Data Transfer OUT From Amazon EC2 To:
Amazon S3, Amazon Glacier, Amazon DynamoDB, Amazon SES, Amazon SQS, or Amazon SimpleDB in the same AWS Region: $0.00 per GB
Amazon EC2, Amazon RDS, Amazon Redshift or Amazon ElastiCache instances, Amazon Elastic Load Balancing, or Elastic Network Interfaces in the same Availability Zone:
Using a private IP address: $0.00 per GB
Using a public or Elastic IP
|What is the correct syntax for filtering by tag in describe-vpcs?|
You got pretty close to solving it -- the only
problem is that you are not specifying a valid
filter for describe-vpcs. Here's the filter that
would be relevant to your use case:
tag:key=*value* - The key/value combination of a
tag assigned to the resource.
So when it is asking for Name=string1,Values=string1..., it expects the tag key in the filter name and the tag value in the values list.
Try the example below instead; it works for me.
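Assuming the tag is Name=my-vpc (substitute your own key and value):
aws ec2 describe-vpcs --filters Name=tag:Name,Values=my-vpc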
|Querying in DynamoDB (with hash-and-range primary key) without providing hash key|
Is there an effective way to do this in DynamoDB?
It sounds like you are looking for Global Secondary Indexes (GSI). You have your table, which has:
Hash key: Category#Domain
Range key: GroupType#GroupName
And based on this table, it sounds like you want to have a GSI with:
Hash key: GroupType#GroupName
Range key: depends on design (not necessary in
Other attributes that
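Once the GSI exists, you query it by index name instead of the base table's hash key. A minimal sketch (the table, index and attribute names here are hypothetical):
aws dynamodb query --table-name MyTable --index-name GroupIndex \
  --key-condition-expression "GroupTypeGroupName = :g" \
  --expression-attribute-values '{":g": {"S": "team#backend"}}'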
|elastic-beanstalk docker app not updating upon deploy|
I wonder if you might try using the user-data
input when you define your instances in Beanstalk?
Something like this could fire off right at the
end of boot:
#!/bin/bash
sudo docker pull username/container
# ... other things you may need to do ...
There is more you can reference about user-data scripts and executables in the EC2 documentation.
|Self password rotation - Redshift|
PostgreSQL allows users to change their own passwords.
See: ALTER USER documentation
Therefore, you cannot prevent users from changing their own passwords.
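For reference, a user can rotate their own password from any SQL client; via psql it looks like this (the endpoint, database, user and password are placeholders):
psql -h my-cluster.abc123xyz.us-east-1.redshift.amazonaws.com -p 5439 -U myuser -d mydb \
  -c "ALTER USER myuser PASSWORD 'NewPassw0rd1';"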
|After enabling client-to-node encryption, opscenter can't connect to cluster|
Have you edited your cluster in OpsCenter to set
it up for client-to-node encryption? In OpsCenter,
click Settings > Cluster Connections. If you have
multiple clusters, select yours from the dropdown.
Check "Client to node encryption is enabled on my
cluster" and enter the cert settings that follow.
That should do it.
|Dynamically Insert Date into filename of Redshift Copy Command from S3|
The Redshift COPY command expects an exact S3 path to a folder or file (e.g. s3://abc/def). You need to give the correct path for the file.
You can write a simple Java, PHP or shell script (using the PostgreSQL drivers) that builds the S3 path with the current date, constructs the COPY query dynamically, and then fires it. That way the placeholder is replaced with the actual date value and there are no syntax errors.
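A bare-bones shell version of that idea (the cluster endpoint, table, bucket and IAM role are placeholders):
#!/bin/bash
TODAY=$(date +%Y-%m-%d)
psql -h my-cluster.abc123xyz.us-east-1.redshift.amazonaws.com -p 5439 -U myuser -d mydb \
  -c "COPY my_table FROM 's3://my-bucket/exports/${TODAY}/' IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' CSV;"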
|Elastic Beanstalk rolling update timeout not honored|
"Pause Time" relates to environmental
configuration made to instances. "Command
timeouts" relates to commands executed to building
the environment (for example if you've customised
the container). Neither have anything to do with
rolling application updates or zero downtime
deployments. The documentation around this stuff
is confusing and fragmented.
For zero downtime application deployments, AWS
|EC2 with Docker and EBS volume, mount EBS volume inside container during init|
Short version: This is not an answer, just a
little help towards it with clarification on how
Not directly related, but your Dockerfile should
probably look like this:
# Expose the port 9000
EXPOSE 9000
# Change workdir.
RUN touch /root/wisdom.log
# Add the
|Bees with machine gun using Amazon free tier|
The problem is, I think, this line in your boto configuration file:
This is telling boto that it should try to use
this hostname to make EC2 requests but this
appears to be the hostname of an EC2 instance
which will not be able to reply to these requests.
Just remove this line and let boto use the pre-configured host name for the region.
|How to Generate AWS DynamoDB Credential Key with STS API which is Limited to Insert and Update One Key/Row|
Yes, you can write a policy to restrict access to
specific keys via fine-grained access control.
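A hedged sketch of the idea, combining an inline policy with a federation token from STS (the table name, key value and user name are hypothetical; dynamodb:LeadingKeys is the documented condition key for fine-grained access control):
aws sts get-federation-token --name app-user --duration-seconds 3600 --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["dynamodb:PutItem", "dynamodb:UpdateItem"],
    "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/MyTable",
    "Condition": {"ForAllValues:StringEquals": {"dynamodb:LeadingKeys": ["user123"]}}
  }]
}'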
|CloudFront - push group of files, when one file is accessed?|
Amazon CloudFront uses a pull-model at its edge
locations. This means that content is only loaded
into an edge location when a request is received.
There is no capability to "push" content into edge
locations. (This differs from Akamai, which does
use a "push" model.)
In theory, you could do it by requesting the URL at every edge location, but requests are automatically directed to the closest edge location.
|Point Domain name to AWS EC2 instance|
To point a Domain Name to an EC2 instance, you can either use Route 53 or your own DNS service. In either case:
Assign an Elastic IP address to your EC2 instance
In Route 53 or your own DNS service, define a
domain/subdomain that points to this IP address
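If you are using Route 53, the second step can be done with the CLI; the hosted zone ID, domain and IP below are placeholders:
aws route53 change-resource-record-sets --hosted-zone-id Z0HYPOTHETICAL123 \
  --change-batch '{"Changes": [{"Action": "UPSERT", "ResourceRecordSet":
    {"Name": "www.example.com", "Type": "A", "TTL": 300,
     "ResourceRecords": [{"Value": "203.0.113.10"}]}}]}'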
The above assumes that you wish to point to a
single EC2 instance. If you have multiple
instances with a Load Balancer in front, you will
|Why do AWS elastic beanstalk rolling version updates still have a 2min downtime with 503s?|
This is an old question, but anyway.
Unfortunately, the 'rolling updates' on Elastic
Beanstalk only apply to configuration changes, and not to code deployments, as per a comment from Amazon on this thread.
Alternative deployment strategies are detailed
There is opportun
|Cannot install inotify on Amazon EC2|
I bumped into this issue as well -- it's a bit easier than grabbing an RPM, or the source and compiling it.
Amazon Linux AMIs come with the EPEL repository source, but it's disabled. So you enable it:
sudo yum-config-manager --enable epel
Then run a regular yum update and install the package:
sudo yum update
sudo yum install inotify-tools
|several open connection in rabbit with different java client version number|
Sorry for not updating this post. I found out that
another app had a connection leak which was using
rabbit 3.2.4. Because rabbit is behind an ELB it
was hard to track down the faulty application.
This issue is fixed now.
|Unable to download AWS CodeDeploy Agent Install file|
I figured out the problem. According to the CodeDeploy documentation for the IAM instance profile, the following permissions need to be granted to your IAM instance profile.
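In essence (as I read the CodeDeploy docs), that means read access to the regional CodeDeploy bucket; a hedged sketch, with a hypothetical role name and us-east-1 as the example region:
aws iam put-role-policy --role-name MyCodeDeployInstanceRole --policy-name codedeploy-agent-s3 \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:Get*", "s3:List*"],
      "Resource": "arn:aws:s3:::aws-codedeploy-us-east-1/*"
    }]
  }'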
|AWS OpsWorks: use Redis instead of Memcached|
If you want to stick to OpsWorks only, then there is no out-of-the-box Redis there. You can, however, create custom recipes and make your own Redis setup.
If you do not have to stick to OpsWorks only, then ElastiCache can use Memcached or Redis.
|SQL Error (1045) in statement #0: Access denied for user 'root'@'|
I faced a similar problem. To solve it, I created a new user, granted it all privileges, and then connected from the EC2 instance, which worked. You can do the same; the steps are below.
CREATE USER 'username'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO
'username'@'your_public_ip' WITH GRANT OPTION;
CREATE USER 'username'@'%' IDENTIFIED BY 'password';
|Running AWS commands from commandline on a ShellCommandActivity|
OK. found it -
The resourceRole in the default object in your pipeline will be the one assigned to resources (Ec2Resource) that are created as part of the pipeline.
The default one is configured to have all your permissions, and the AWS command line and SDK packages automatically look for those credentials.