Search Results

Search found 9437 results on 378 pages for 'wordpress amazon plugins'.

Page 36/378

  • Mounting an Amazon EC2 instance on Mac OS X

    - by hinghoo
    I've got public key authentication working between my Mac OS X and an Amazon EC2 instance so that from the command-line I can just type the following and it works: ssh root@[IPAddressOfEC2Instance] The strange thing is that I can't seem to mount the instance using "Connect to Server" in the Finder. I've tried typing the following server addresses into the "Connect to Server" dialog: ftps://[IPAddressOfEC2Instance] ftps://root@[IPAddressOfEC2Instance] But all I get is You entered an invalid username or password. Please try again. The root user on the EC2 instance has a blank password and I'm wondering if it has to do with that. However, I can't change the password for the root user. I can use an SFTP client to connect to the machine, I just can't mount it with "Connect to server". It asks for a username and password (for a registered user) and it's root/[blank] which it doesn't accept. The other option is "Guest" which brings up an empty folder in the Finder.
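
    One likely culprit: the Finder's "Connect to Server" dialog speaks AFP, SMB, WebDAV and plain FTP, but not SFTP, and a stock EC2 Linux instance normally exposes only SSH/SFTP (FTPS would require an FTP server with TLS, which the AMI does not run). That is consistent with SFTP clients working while the Finder fails; a FUSE-based tool such as sshfs with MacFUSE is the usual way to get an actual mounted volume over SSH. As a quick sanity check that key-based SFTP itself is healthy, here is a minimal Python sketch using paramiko (the host name and key path are placeholders):

    import paramiko

    # Placeholders: substitute the instance's public DNS name and your key pair file.
    HOST = "ec2-203-0-113-10.compute-1.amazonaws.com"
    KEY_FILE = "/Users/me/.ssh/my-ec2-key.pem"

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # acceptable for a one-off test
    client.connect(HOST, username="root", key_filename=KEY_FILE)

    sftp = client.open_sftp()
    print(sftp.listdir("/"))  # if this prints the root directory, SFTP + key auth are fine

    sftp.close()
    client.close()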

    Read the article

  • Amazon EC2 - Reserved Private Addresses

    - by reach4thelasers
    I've got a private DNS server running on Amazon EC2. I don't need a public IP address because it's only used for private addressing (web1.xxx.internal, database1.xxx.internal). The problem is that I had to terminate the instance recently and start a new one. This meant that the private IP address of the DNS server changed and I had to log in to each of my other 15 servers one by one and change the DNS address to point to the new DNS server. There must be a better way to do this; if so, what is it?
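
    If the servers live in a VPC, the usual fix is to stop treating the DNS server's address as something Amazon hands out: a VPC instance can be launched with a specific private IP in its subnet (or given a secondary ENI that keeps its address), so a replacement instance comes back on exactly the same address and the other 15 servers never need touching. A hedged boto3 sketch of launching the replacement with a pinned address (all IDs and addresses are placeholders; in EC2-Classic this option does not exist, which is why a VPC or an ENI is the usual answer):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",      # hypothetical AMI of the DNS server
        InstanceType="t3.micro",
        SubnetId="subnet-0123456789abcdef0",  # the subnet the other servers use
        PrivateIpAddress="10.0.0.53",         # the address everything already points at
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])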

    Read the article

  • Amazon S3 - Storage Class and Server Side Encryption

    - by Steven
    Ahhh! I am using Amazon S3 for some low-price storage to clear down our SAN. I created the bucket and created a root folder. I set the storage class to Standard and server-side encryption to AES-256. I started a copy job to move the files; some files copied over and I checked them: storage class Reduced Redundancy, encryption set to none. WTF? So I deleted all files and folders, manually created the folder structure, and again set the storage class and encryption level. I copied some files and bam, they still show (at the file level) as Reduced Redundancy with no encryption. So my question is this: (a) is it really redundant and encrypted, just not showing it properly (the root folder is, so how can the file not be?), or (b) am I being a huge tool and missing something?
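
    What is probably happening: storage class and server-side encryption are attributes of each individual object, written by whatever client uploads (or copies) it; they are not inherited from the bucket or a "folder" afterwards (bucket-level default encryption only arrived much later). So the settings chosen in the console applied to the objects uploaded through the console at that moment, while the copy job wrote its objects with its own defaults. Existing objects can be fixed by copying them over themselves with the desired attributes; a hedged boto3 sketch (bucket and key are made up):

    import boto3

    s3 = boto3.client("s3")
    bucket = "my-archive-bucket"    # hypothetical bucket
    key = "san-offload/file1.dat"   # an object that arrived as Reduced Redundancy

    # Re-write the object in place with Standard storage and AES-256 SSE.
    s3.copy_object(
        Bucket=bucket,
        Key=key,
        CopySource={"Bucket": bucket, "Key": key},
        StorageClass="STANDARD",
        ServerSideEncryption="AES256",
        MetadataDirective="COPY",   # keep the original metadata
    )

    # Read the attributes back to confirm.
    head = s3.head_object(Bucket=bucket, Key=key)
    print(head.get("StorageClass", "STANDARD"), head.get("ServerSideEncryption"))

    If the copy job is the AWS CLI, passing --storage-class STANDARD --sse AES256 on the cp/sync command should give new uploads the right settings in the first place.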

    Read the article

  • Migrating My GWB Blog Over To WordPress

    - by deadlydog
    Geeks With Blogs has been good to me, but there are too many tempting things about Word Press for me to stay with GWB.  Particularly their statistics features, and also I like that I can apply specific tags to my posts to make them easier to find in Google (and I never was able to get categories working on GWB). The one thing I don’t like about WordPress is I can’t seem to find a theme that stretches the Content area to take up the rest of the space on wide resolutions…..hopefully I’ll be able to overcome that obstacle soon though. For those who are curious as to how I actually moved all of my posts across, I ended up just using Live Writer to open my GWB posts, and then just changed it to publish to my WordPress blog.  This was fairly painless, but with my 27 posts it took probably about 2 hours of manual effort to do.  Most of the time WordPress was automatically able to copy the images over, but sometimes I had to save the pics from my GWB posts and then re-add them in Live Writer to my WordPress post.  I found another developers automated solution (in alpha mode), but opted to do it manually since I wanted to manually specify Categories and Tags on each post anyways.  The one thing I still have left to do is move the worthwhile comments across from the GWB posts to the new WordPress posts. The largest pain point was that with GWB I was using the Code Snippet Plugin for Live Writer for my source code snippets, and when they got transferred over to WordPress they looked horrible.  So I ended up finding a new Live Writer plugin called SyntaxHighlighter that looks even nicer on my posts If anybody knows of a nice WordPress theme that stretches the content area to fit the width of the screen, please let me know.  All of the themes I’ve found seem to have a max width set on them, so I end up with much wasted space on the left and right sides.  Since I post lots of code snippets, the more horizontal space the better! So if you want to continue to follow me, my blog's new home is now at https://deadlydog.wordpress.com

    Read the article

  • Amazon Web Services (AWS) Plug-in for Oracle Enterprise Manager

    - by Anand Akela
    Contributed by Sunil Kunisetty and Daniel Chan. Introduction and Architecture: As more and more enterprises deploy some of their non-critical workload on Amazon Web Services (AWS), it's becoming critical to monitor those public AWS resources alongside their on-premise resources. The recently announced Oracle Enterprise Manager Plug-in for Amazon Web Services (AWS) allows you to achieve that goal: the on-premise Oracle Enterprise Manager (EM12c) acts as a single tool to get a comprehensive view of your public AWS resources as well as your private cloud resources. By deploying the plug-in within your Cloud Control environment, you gain the following management features: monitor EBS, EC2 and RDS instances on Amazon Web Services; gather performance metrics and configuration details for AWS instances; raise alerts and violations based on thresholds set on monitoring; and generate reports based on the gathered data. Users of this plug-in can leverage rich Enterprise Manager features such as system promotion, incident generation based on thresholds, integration with 3rd-party ticketing applications, etc. AWS monitoring via this plug-in is enabled through the Amazon CloudWatch API, and users of the plug-in are responsible for supplying credentials for accessing AWS and the CloudWatch API. This plug-in can only be deployed on an EM12c R2 platform and the agent version should be at minimum 12c R2. [Architecture diagram: EM12c monitoring Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2) and Amazon Relational Database Service (RDS).] Here are a few key features: a rich and exhaustive list of metrics; metrics can be gathered from an Agent running outside AWS; critical configuration information; custom home pages with charts and AWS configuration information; and incidents generated based on thresholds set on monitoring data. Discovery and Monitoring: AWS instances can be added to EM12c either via the EM12c User Interface (UI) or the EM12c Command Line Interface (EMCLI) by providing the AWS credentials (Secret Key and Access Key Id) as well as resource-specific properties as target properties. Here is a quick mapping of target types and resource-specific properties for each AWS resource:
    EBS Resource: target type Amazon EBS Service; properties: CloudWatch base URI, EC2 base URI, Period, Volume Id, Proxy Server and Port.
    EC2 Resource: target type Amazon EC2 Service; properties: CloudWatch base URI, EC2 base URI, Period, Instance Id, Proxy Server and Port.
    RDS Resource: target type Amazon RDS Service; properties: CloudWatch base URI, RDS base URI, Period, Instance Id, Proxy Server and Port.
    Proxy server and port are optional and are only needed if the agent is behind a firewall. Here is an emcli example to add an EC2 target; please read the Installation and Readme guide for more details and step-by-step instructions on deploying the plugin and adding the AWS instances.
    ./emcli add_target \
        -name="<target name>" \
        -type="AmazonEC2Service" \
        -host="<host>" \
        -properties="ProxyHost=<proxy server>;ProxyPort=<proxy port>;EC2_BaseURI=http://ec2.<region>.amazonaws.com;BaseURI=http://monitoring.<region>.amazonaws.com;InstanceId=<EC2 instance Id>;Period=<data point period>" \
        -subseparator=properties="="

    ./emcli set_monitoring_credential \
        -set_name="AWSKeyCredentialSet" \
        -target_name="<target name>" \
        -target_type="AmazonEC2Service" \
        -cred_type="AWSKeyCredential" \
        -attributes="AccessKeyId:<access key id>;SecretKey:<secret key>"

    The emcli utility is found under the ORACLE_HOME of the EM12c install. Once the instance is discovered, the target will show up in the 'All Targets' list under 'Amazon EC2 Service'. Once the instances are added, one can navigate to the custom home pages for these resource types. The custom home pages not only include critical metrics, but also vital configuration parameters and the incidents raised for these instances. By mapping the configuration parameters as instance properties, we can slice-and-dice and group various AWS instances by leveraging the EM12c Config Search feature. The following configuration properties and metrics are collected for these resource types:
    EBS Resource: configuration properties: Volume Id, Volume Type, Device Name, Size, Availability Zone. Metrics: Response (Status); Utilization (QueueLength, IdleTime); Volume Statistics (ReadBandwidth, WriteBandwidth, ReadThroughput, WriteThroughput); Operation Statistics (ReadSize, WriteSize, ReadLatency, WriteLatency).
    EC2 Resource: configuration properties: Instance ID, Owner Id, Root Device Type, Instance Type, Availability Zone. Metrics: Response (Status); CPU Utilization (CPU Utilization); Disk I/O (DiskReadBytes, DiskWriteBytes, DiskReadOps, DiskWriteOps, DiskReadRate, DiskWriteRate, DiskIOThroughput, DiskReadOpsRate, DiskWriteOpsRate, DiskOperationThroughput); Network I/O (NetworkIn, NetworkOut, NetworkInRate, NetworkOutRate, NetworkThroughput).
    RDS Resource: configuration properties: Instance ID, Database Engine Name, Database Engine Version, Database Instance Class, Allocated Storage Size, Availability Zone. Metrics: Response (Status); Disk I/O (ReadIOPS, WriteIOPS, ReadLatency, WriteLatency, ReadThroughput, WriteThroughput); DB Utilization (BinLogDiskUsage, CPUUtilization, DatabaseConnections, FreeableMemory, ReplicaLag, SwapUsage).
    Custom Home Pages: As mentioned above, we have custom home pages for these target types that include basic configuration information, last-24-hours availability, top metrics and the incidents generated. [Screenshots: EBS Instance Home Page, EC2 Instance Home Page, RDS Instance Home Page.] Further Reading: 1) AWS Plugin download 2) Installation and Read Me 3) Screenwatch on SlideShare 4) Extensibility Programmer's Guide 5) Amazon Web Services

    Read the article

  • Is it wise to store a big lump of json on a database row

    - by Ieyasu Sawada
    I have this project which stores product details from amazon into the database. Just to give you an idea on how big it is: [{"title":"Genetic Engineering (Opposing Viewpoints)","short_title":"Genetic Engineering ...","brand":"","condition":"","sales_rank":"7171426","binding":"Book","item_detail_url":"http://localhost/wordpress/product/?asin=0737705124","node_list":"Books > Science & Math > Biological Sciences > Biotechnology","node_category":"Books","subcat":"","model_number":"","item_url":"http://localhost/wordpress/wp-content/ecom-plugin-redirects/ecom_redirector.php?id=128","details_url":"http://localhost/wordpress/product/?asin=0737705124","large_image":"http://localhost/wordpress/wp-content/plugins/ecom/img/large-notfound.png","medium_image":"http://localhost/wordpress/wp-content/plugins/ecom/img/medium-notfound.png","small_image":"http://localhost/wordpress/wp-content/plugins/ecom/img/small-notfound.png","thumbnail_image":"http://localhost/wordpress/wp-content/plugins/ecom/img/thumbnail-notfound.png","tiny_img":"http://localhost/wordpress/wp-content/plugins/ecom/img/tiny-notfound.png","swatch_img":"http://localhost/wordpress/wp-content/plugins/ecom/img/swatch-notfound.png","total_images":"6","amount":"33.70","currency":"$","long_currency":"USD","price":"$33.70","price_type":"List Price","show_price_type":"0","stars_url":"","product_review":"","rating":"","yellow_star_class":"","white_star_class":"","rating_text":" of 5","reviews_url":"","review_label":"","reviews_label":"Read all ","review_count":"","create_review_url":"http://localhost/wordpress/wp-content/ecom-plugin-redirects/ecom_redirector.php?id=132","create_review_label":"Write a review","buy_url":"http://localhost/wordpress/wp-content/ecom-plugin-redirects/ecom_redirector.php?id=19186","add_to_cart_action":"http://localhost/wordpress/wp-content/ecom-plugin-redirects/add_to_cart.php","asin":"0737705124","status":"Only 7 left in stock.","snippet_condition":"in_stock","status_class":"ninstck","customer_images":["http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/51M2vvFvs2BL.jpg","http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/31FIM-YIUrL.jpg","http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/51M2vvFvs2BL.jpg","http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/51M2vvFvs2BL.jpg"],"disclaimer":"","item_attributes":[{"attr":"Author","value":"Greenhaven Press"},{"attr":"Binding","value":"Hardcover"},{"attr":"EAN","value":"9780737705126"},{"attr":"Edition","value":"1"},{"attr":"ISBN","value":"0737705124"},{"attr":"Label","value":"Greenhaven Press"},{"attr":"Manufacturer","value":"Greenhaven Press"},{"attr":"NumberOfItems","value":"1"},{"attr":"NumberOfPages","value":"224"},{"attr":"ProductGroup","value":"Book"},{"attr":"ProductTypeName","value":"ABIS_BOOK"},{"attr":"PublicationDate","value":"2000-06"},{"attr":"Publisher","value":"Greenhaven Press"},{"attr":"SKU","value":"G0737705124I2N00"},{"attr":"Studio","value":"Greenhaven Press"},{"attr":"Title","value":"Genetic Engineering (Opposing Viewpoints)"}],"customer_review_url":"http://localhost/wordpress/wp-content/ecom-customer-reviews/0737705124.html","flickr_results":["http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/5105560852_06c7d06f14_m.jpg"],"freebase_text":"No around the web data available yet","freebase_image":"http://localhost/wordpress/wp-content/plugins/ecom/img/freebase-notfound.jpg","ebay_related_items":[{"title":"Genetic Engineering (Introducing Issues With Opposing Viewpoints), , 
Good Book","image":"http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/140.jpg","url":"http://localhost/wordpress/wp-content/ecom-plugin-redirects/ecom_redirector.php?id=12165","currency_id":"$","current_price":"26.2"},{"title":"Genetic Engineering Opposing Viewpoints by DAVID BENDER - 1964 Hardcover","image":"http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/140.jpg","url":"http://localhost/wordpress/wp-content/ecom-plugin-redirects/ecom_redirector.php?id=130","currency_id":"AUD","current_price":"11.99"}],"no_follow":"rel=\"nofollow\"","new_tab":"target=\"_blank\"","related_products":[],"super_saver_shipping":"","shipping_availability":"","total_offers":"7","added_to_cart":""}] So the structure for the table is: asin title details (the product details in json) Will the performance suffer if I have to store like 10,000 products? Is there any other way of doing this? I'm thinking of the following, but the current setup is really the most convenient one since I also have to use the data on the client side: store the product details in a file. So something like ASIN123.json store the product details in one big file. (I'm guessing it will be a drag to extract data from this file) store each of the fields in the details in its own table field Thanks in advance!

    Read the article

  • Is there a way of using HTTPS with Amazon's CloudFront CDN and CNAMEs?

    - by Metalshark
    We use Amazon's CloudFront CDN with custom CNAMEs hanging under the main domain (static1.example.com). Although we can break this uniform appearance and use the original whatever123wigglyw00.cloudfront.net URLs to utilise HTTPS, is there another way? Do Amazon or any other similar provider offer HTTPS CDN hosting? Is TLS and its selective encryption available for use somewhere (SNI: Server Name Indication)? Foot note: assuming that the answer is no, but just in the hope someone knows. EDIT: Now using Google App Engine https://developers.google.com/appengine/docs/ssl for CDN hosting with SSL support.

    Read the article

  • Amazon AWS EC2 + Puppet, get Puppet to know AWS instance tags

    - by Piotr Jasiulewicz
    I am having a problem with my AWS deployment; I'm fairly new to AWS and Puppet. So coming to my question: can you distinguish Puppet nodes with AWS machine tags or CNAME domains? A little background about the plan: have multiple clusters of machines (one PHP cluster, one legacy PHP cluster, one Java cluster, one Perl cluster); control configuration with Puppet (still pretty new to Puppet, but as a developer I like the idea of being able to version-control the configuration of servers); have autoscaling enabled on those clusters (obviously the main benefit of the cloud that makes the much higher cost for any reasonable performance worth it: those Amazon machines are slower than my phone...); and have deployment controlled by Capistrano, which makes things a lot easier. So in AWS you get those super nasty public/private machine DNS names; there's no way you can identify machines on those. To ease the problem, it seems like AWS wants you to tag everything, so I did. I found a script that makes a CNAME record for each machine with the tag "ShortName", thanks to the Route 53 API. Every machine has a ShortName tag that becomes its CNAME; unfortunately Puppet still resolves the private DNS name. I'd like to have node 'perl-cluster' {} in Puppet. Anyone any clue how to achieve this? Thanks
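
    One common pattern is to surface the instance's own EC2 tags as external Facter facts, so Puppet can classify on a tag (for example $::shortname) instead of the unreadable private DNS name. Below is a hedged Python sketch of that idea; it assumes boto3, credentials permitted to call ec2:DescribeTags, a hard-coded region, and Facter's external-facts directory (/etc/facter/facts.d on older agents, /etc/puppetlabs/facter/facts.d on newer ones):

    import urllib.request
    import boto3

    METADATA = "http://169.254.169.254/latest/meta-data/"
    instance_id = urllib.request.urlopen(METADATA + "instance-id").read().decode()

    ec2 = boto3.client("ec2", region_name="us-east-1")   # assumed region
    tags = ec2.describe_tags(
        Filters=[{"Name": "resource-id", "Values": [instance_id]}]
    )["Tags"]

    # Write each tag as "key=value" so Facter picks it up as an external fact,
    # e.g. the ShortName tag becomes the fact $::shortname.
    with open("/etc/facter/facts.d/ec2_tags.txt", "w") as fact_file:
        for tag in tags:
            fact_file.write("%s=%s\n" % (tag["Key"].lower(), tag["Value"]))

    With such a fact in place, node classification can key off the tag (via an ENC, hiera, or a case statement on the fact) rather than the EC2-generated hostname.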

    Read the article

  • can not connect to SQL running on amazon ec2 machine

    - by njj56
    I am using SQL Server Management Studio 2008 running on an Amazon EC2 machine. I am unable to connect to the database in my ASP.NET application. The EC2 instance has been set to accept connections over the SQL port. I am also able to remote into the machine as well as view websites hosted on the server. Listed below is part of the connection string relating to this instance. When the program is run and this connection string is called, it returns TCP error 0 (no response); it just times out. <add name="ProjectServer" connectionString="Data Source=*IP ADDRESS HERE*,1433;Initial Catalog=*Catalog Name*;User ID=IP-0A6ED514\Administrator;"/> I removed the IP and the catalog name for the example, but I am sure they are correct. The only thing that I could think may cause an error is the difference between the user ID and the server name: the server name is ip-0A6ED514\sharepoint but the user name is ip-0A6ED514\administrator when I log into SQL Server Management Studio on the EC2 instance. A password is not used. Not sure if I would need to leave in a blank string for the password; also not sure if the difference between server name and user ID makes a difference. Any help is appreciated. Thank you. Update: when this connection string is used without the port, I get TCP provider error 40; when the port is in there, I get error 0. Edit: the SQL server is using Windows authentication. Does this make a difference? Usually I always use SQL Server authentication.

    Read the article

  • Amazon VPC NAT not working

    - by rpkelly
    I'm trying to create a NAT instance for my VPC to allow instances on private subnets connect to the internet (most importantly, S3). I tried following the instructions here: http://docs.amazonwebservices.com/AmazonVPC/2011-07-15/UserGuide/index.html?VPC_NAT_Instance.html . Unfortunately, the instances in the private subnet (call it 10.10.2.0/24) cannot reach the internet. I have done the following: Create a NAT instance (Amazon's ami-vpc-nat-1.0.0-beta.i386-ebs (ami-d8699bb1)) in public subnet (call it 10.10.1.0/24). Changed "Source / Dest Check" to disabled. Created a new entry in the default routing table (which is used by 10.10.2.0/24) and had it point to the ID of the newly created instance. Associated an Elastic IP address with the NAT instance. Allowed all outbound traffic on the security group of the NAT instance. Ensured that all traffic could pass between the two subnets. I've tried also doing this with an existing instance using iptables, but had no luck. And I have verified that sys.net.ipv4.ip_forward is 1, just in case anyone was wondering. And I still have no internet connectivity from the instances on 10.10.2.0/24. Does anyone have any suggestions?
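
    For what it's worth, the two VPC-side settings in that checklist look like this through the API; a hedged boto3 sketch with placeholder IDs, useful mainly for double-checking that they really took effect. If both are in place, the usual remaining suspects are the NAT instance's security group or network ACLs not allowing the return traffic, and whether the instance itself is actually translating (the Amazon NAT AMI does; a plain Linux AMI additionally needs an iptables MASQUERADE rule and ip_forward enabled).

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")        # assumed region

    nat_instance_id = "i-0123456789abcdef0"                    # NAT instance in 10.10.1.0/24
    private_route_table_id = "rtb-0123456789abcdef0"           # route table used by 10.10.2.0/24

    # 1. Source/dest check must be disabled or forwarded packets are dropped.
    ec2.modify_instance_attribute(InstanceId=nat_instance_id,
                                  SourceDestCheck={"Value": False})

    # 2. The private subnet's default route must point at the NAT instance.
    ec2.create_route(RouteTableId=private_route_table_id,
                     DestinationCidrBlock="0.0.0.0/0",
                     InstanceId=nat_instance_id)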

    Read the article

  • Serving and caching content from Amazon S3 with Tomcat

    - by Rob
    Hi all, We're looking to serve a range of content using Amazon S3 as a store for the content and Tomcat to host the web application. The content is divided into free and paid for content. We intend to authenticate the users when they access the web application running in Tomcat. Based around their authentication we are able to tell if the user has access to paid for content or simply free stuff. So I envision the flow of a request being something like this: Authenticated request to Tomcat If user is "paid" user, display links to premium content Direct requests for paid content back through Tomcat to prevent direct access to it by non-paying users. Tomcat makes request to S3 through a web cache to keep our costs down Content is returned to user. As we have to pay for each request to S3, I'd ideally like to cache content locally to the Tomcat instance after it has been requested for the first time to keep costs to a minimum and to speed things up. I would also like to be able to invalidate this cache if we publish fresh content to S3. So to confirm my proposal: Client Request - Tomcat - Web Cache - S3 To invalidate the cache, I was thinking of using something like PubSubHubbub with the cache waiting for updates to the feed for content that it should invalidate. I'd appreciate some general feedback on this approach as I've no real experience of caching and I'm sure I've made some invalid assumptions. I'd also appreciate any recommendations for caching technologies. Thanks.

    Read the article

  • Amazon EC2 instance missing Network Interface

    - by Sergiks
    I am running Linux on a t1.micro instance at Amazon EC2. Once I noticed brute-force SSH login attempts from a certain IP, after a little Googling I issued the two following commands (IP changed here):
    iptables -A INPUT -s 202.54.20.22 -j DROP
    iptables -A OUTPUT -d 202.54.20.22 -j DROP
    Either this, or maybe some other action like a yum upgrade, caused the following fiasco: after rebooting the server, it came up without its network interface! I can only connect to it through the AWS Management Console Java SSH client, via the local 10.x.x.x address. The console's Attach Network Interface and Detach Network Interface actions are greyed out for this instance, and the Network Interfaces item on the left does not offer any subnets to choose from to create a new network interface. Please advise: how can I recreate a network interface for the instance? Update: the instance is not accessible from outside; it cannot be pinged, SSH'ed into, or reached over HTTP on port 80. Here's the ifconfig output:
    eth0  Link encap:Ethernet  HWaddr 12:31:39:0A:5E:06
          inet addr:10.211.93.240  Bcast:10.211.93.255  Mask:255.255.255.0
          inet6 addr: fe80::1031:39ff:fe0a:5e06/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1426 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1371 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:152085 (148.5 KiB)  TX bytes:208852 (203.9 KiB)
          Interrupt:25
    lo    Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
    What is also unusual: a new micro instance I created from scratch, with no relation to the troubled one, was not pingable either.

    Read the article

  • Managing persistent data on an Amazon EC2 web server

    - by Derek
    I've just started trying out Amazon's EC2 service for running an asp.net web app which uses a SQL Server 2005 Express database. I have some questions about how to configure and operate it best for reliability, and I'm hoping to tap into some collective wisdom here as this is my first foray into EC2. Here's how I have it configured currently: OS: Windows 2003 SQL Server Express 2005 Web content stored on an EBS Volume (E Drive) Database Data stored on an EBS Volume (E Drive) Database backups to "C Drive" and then copied off to S3. Elastic IP Address attached to the production instance. Now when I make a change to the OS configuration, I make a new AMI using the bundle feature. Unfortunately, I found that this results in significant downtime. While the bundle is created and the new instance is started. It seems that when I'm ready to make a new AMI, I should: Start up a new temporary instance. Detach the EBS volume from the production instance. Detach the IP Address from the production instance. Attach the IP Address to the temporary instance. Attach the EBS volume to the temporary instance. Create an AMI from the production instance. After the production instance restarts, reverse the attach/detach steps to put it back in production. Is this the right order of events to prevent any chance to corrupt the EBS volume? Will the EBS volume become corrupt if I detach it while a database Write is taking place? Should I snapshot the EBS volume of the production instance and attach it to the temporary instance instead? Or could taking a snapshot of the EBS volume while it's in use cause corruption? Any suggestions to improve the reliability and operations?
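
    A hedged sketch of that failover sequence through the API, with placeholder IDs (boto3 here purely for illustration). The main safety rule is that the volume should be quiescent before it is detached: stop SQL Server and ideally take the drive offline in Windows first, because detaching while writes are in flight is exactly the corruption scenario to avoid. A snapshot taken just before the move is cheap insurance; a snapshot of an in-use volume is crash-consistent rather than corrupt, though anything not yet flushed to disk will be missing from it.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")   # assumed region

    volume_id = "vol-0123456789abcdef0"
    prod_instance = "i-0aaaaaaaaaaaaaaaa"
    temp_instance = "i-0bbbbbbbbbbbbbbbb"
    elastic_ip = "203.0.113.10"

    # Insurance first.
    ec2.create_snapshot(VolumeId=volume_id, Description="pre-maintenance backup")

    # Move the data volume (after SQL Server is stopped on the production box).
    ec2.detach_volume(VolumeId=volume_id, InstanceId=prod_instance)
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
    ec2.attach_volume(VolumeId=volume_id, InstanceId=temp_instance, Device="xvdf")

    # Move the public address (for a VPC address, use AllocationId instead of PublicIp).
    ec2.associate_address(InstanceId=temp_instance, PublicIp=elastic_ip)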

    Read the article

  • Run a rails server on Amazon EC2 [on hold]

    - by Jashwant
    Context: I've tried rubber gem, but that does not fulfill my requirements ( I needed to deploy on existing instance, so don't recommend me rubber) So, I followed this excellent tutorial http://stackoverflow.com/questions/15535140/installing-ruby-2-0-and-rails-4-0-0beta-on-aws-ec2 Now, I have ruby 2.0 and rails 4.0.0 running on AWS EC2. I successfully ran the server with RDS (mysql) as db and default webrick as server ( Using command rails server ) But, I've read that webrick is a development server and shouldn't be used at production. What I tried: I googled and came up with some alternatives. Capistrano Nginx / apache with passenger Passenger with Capistrano Unicorn Puma My Question: What exactly is capistrano / passenger ? Are they middleware to ease my deployment process ? I don't see any difficulty in doing rails server command. If they are just middleware, nginx with passenger and capistrano does not make any sense ? Why would I add a learning curve ( to learn nginx, passenger and capistrano configs) just to run my server ? I can just use nginx to deploy my app. Can't I ? What combination should I use on Amazon EC2 (or may be at any some other production server).

    Read the article

  • What is the right way to modify a wordpress query in a plugin?

    - by starepod
    Basically I am playing with a plugin that allows future-dated posts on archive pages. My question is broader than this specific functionality, but everyone likes some context. I have my head around many of the plugin development concepts, but must be missing something very basic. I can successfully rewrite a query that gives me the results I want like this:
    function modify_where( $where ) {
        global $wp_query;
        // define $year, $cat, etc.
        if ( is_archive() ) {
            $where = " AND YEAR(wp_posts.post_date)='" . $year . "' AND wp_term_taxonomy.taxonomy = 'category' AND wp_term_taxonomy.term_id IN ('" . $cat . "') AND wp_posts.post_type = 'post' AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'future')";
        }
        return $where;
    }
    add_filter( 'posts_where', 'modify_where' );
    However, if I attempt to create a new WP_Query('different_query_stuff') after the main loop, the new query uses the same WHERE statement outlined above. The question is: what am I missing? Thanks.

    Read the article

  • Nginx and multiple wordpress instances with fastcgi under same domain

    - by damnsweet
    My site is running on apache. two instances of wordpress exist under paths /tr/ and /eng/. I want to move the setup to nginx but could not manage to get it working. My setup consists of nging 0.7.66, php 5.3.2, and php-fpm. /tr/ and /eng/ are two separate wordpress instances located under /home/istci/webapps/wordpress_tr and /home/istci/webapps/wordpress respectively. Below is the server section from nginx.conf containing only configuration for tr, yet could not get it working either. server { listen 80; server_name www.example.com; charset utf-8; location ~ ^/$ { rewrite ^(.+)$ http://www.example.com/tr/ permanent; } location ~ /tr/.*php$ { fastcgi_pass unix:/home/istci/var/run/wptr.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /home/istci/webapps/wordpress_tr$fastcgi_script_name; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; # required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; } location /tr/ { root /home/istci/webapps/wordpress_tr/; index index.php index.html index.htm; if (!-e $request_filename) { rewrite ^(.+)$ /tr/index.php?q=$1 last; break; } if (-f $request_filename) { expires 30d; break; } } } php-fpm listens on unix:/home/istci/var/run/wptr.sock. running it in debug-mode shows no active handlers, which means no connection is made to unix socket from nginx. nginx access logs: 127.0.0.1 - - [09/Jun/2010:03:45:11 -0500] "GET /tr/ HTTP/1.0" 404 20 "-" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.4) Gecko/20100527 Firefox/3.6.4" nginx debug logs : 2010/06/09 03:38:53 [notice] 6922#0: built by gcc 4.1.2 20080704 (Red Hat 4.1.2-48) 2010/06/09 03:38:53 [notice] 6922#0: OS: Linux 2.6.18-164.9.1.el5PAE 2010/06/09 03:38:53 [notice] 6922#0: getrlimit(RLIMIT_NOFILE): 4096:4096 2010/06/09 03:38:53 [notice] 6923#0: start worker processes 2010/06/09 03:38:53 [notice] 6923#0: start worker process 6924 2010/06/09 03:38:53 [notice] 6923#0: start worker process 6925 2010/06/09 03:39:01 [notice] 6925#0: *1 "^(.+)$" matches "/tr/", client: 127.0.0.1, server: www.example.com, request: "GET /tr/ HTTP/1.0", host: "www.example.com" 2010/06/09 03:39:01 [notice] 6925#0: *1 rewritten data: "/tr/index.php", args: "q=/tr/", client: 127.0.0.1, server: www.example.com, request: "GET /tr/ HTTP/1.0", host: "www.example.com" Any clues about what is wrong with my configuration? Thanks.

    Read the article

  • How much effort is involved in moving a WordPress site to a private server? [on hold]

    - by Alan
    I work in tech, but am on the business side. I have a WordPress site that I would like to move to a personal server and associate with a new domain name. I already have a server (actually, a friend is letting me use his) and the domain name. A friend-of-a-friend, who claims to be an IT pro, has agreed to help, but now is asking for what feels like a lot of money for what he says is a pretty time-intensive job. This doesn't sound right to me, so I thought I would ask here: Would it take months or even days to move the content, and why would it have to be moved in stages? The blog currently uses a basic template and has about 1000 posts. How much effort is really involved in moving a WordPress site from one server to another? Can anyone explain the process? Would it just make more sense to point the domain name at the existing WordPress blog, and pay the nominal yearly fee? I appreciate any answers you can provide.

    Read the article

  • Integrating Amazon S3 in Java via NetBeans IDE

    - by Geertjan
    To continue from yesterday, let's set up a scenario that enables us to make use of this drag/drop service in NetBeans IDE: The above service is applicable to Amazon S3, an Amazon storage provider that is typically used to store large binary files. In Amazon S3, every object stored is contained in a bucket. Buckets partition the namespace of objects stored in Amazon S3. More on buckets here. Let's use the tools in NetBeans IDE to create a Java application that accesses our Amazon S3 buckets. Create a Java application named "AmazonBuckets" with a main class named "AmazonBuckets". Open the main class and then drag the above service into the main method of the class. Now, NetBeans IDE will create all the other classes and the properties file that you see in the screenshot below. The first thing to do is to open the properties file above and enter the access key and secret:
    access_key=SOMETHING
    secret=SOMETHINGELSE
    Now you're all set up. Make sure to, of course, actually have some buckets available: Then rewrite the Java class to parse the XML that is returned via the generated code:
    package amazonbuckets;

    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.parsers.ParserConfigurationException;
    import org.netbeans.saas.amazon.AmazonS3Service;
    import org.netbeans.saas.RestResponse;
    import org.w3c.dom.DOMException;
    import org.w3c.dom.Document;
    import org.w3c.dom.Node;
    import org.w3c.dom.NodeList;
    import org.xml.sax.InputSource;
    import org.xml.sax.SAXException;

    public class AmazonBuckets {
        public static void main(String[] args) {
            try {
                RestResponse result = AmazonS3Service.getBuckets();
                String dataAsString = result.getDataAsString();
                DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
                DocumentBuilder dBuilder = dbFactory.newDocumentBuilder();
                Document doc = dBuilder.parse(
                        new InputSource(new ByteArrayInputStream(dataAsString.getBytes("utf-8"))));
                NodeList bucketList = doc.getElementsByTagName("Bucket");
                for (int i = 0; i < bucketList.getLength(); i++) {
                    Node node = bucketList.item(i);
                    System.out.println("Bucket Name: " + node.getFirstChild().getTextContent());
                }
            } catch (IOException | ParserConfigurationException | SAXException | DOMException ex) {
            }
        }
    }
    That's all. This is simpler to set up than the scenario described yesterday. Also notice that there are other Amazon S3 services you can interact with from your Java code, again after generating a heap of code after drag/drop into a Java source file: I tried the above, e.g., I created a new Amazon S3 bucket after dragging "createBucket", adding my credentials in the properties file, and then running the code that had been created. I.e., without adding a single line of code I was able to programmatically create new buckets. The above outlines a handy set of tools and techniques to use if you want to let your users store and access data in Amazon S3 buckets directly from the application you've created for them.

    Read the article

  • SQL Express 2008 R2 on Amazon EC2 instance: tons of free memory, poor performance

    - by gravyface
    The old SQL Express 2005 was running on a low-end single Xeon CPU Dell server, RAID 5 7200 disks, 2 GB RAM (SBS 2003). I have not done any baseline measurements on the old physical server, but the Web app is used by half a dozen people (maybe 2 concurrently), so I figured "how bad can an Amazon EC2 instance be?". It's pretty horrible: a difference of 8 seconds of load time on one screen. First of all, I'm not a SQL guru, but here's what I've tried: Had a Small Instance, now running a c1.medium (High Cpu Medium) Windows 2008 32-bit R2 EBS-backed instance running IIS 7.5 and SQL Express 2008 R2. No noticeable improvement. Changed Page File from fixed 256 to Automatic. Setup a Striped Mirror from within Disk Management with two attached 1 GB EBS volumes. Moved database and transaction log, left everything else on the boot EBS volume. No noticeable change. Looked at memory, ~1000 MB of physical memory free (1.7 GB total). Changed SQL instance to use a minimum of 1024 RAM; restarted server, no change in memory usage. SQL still only using ~28MB of RAM(!). So I'm thinking: this database is tiny (28MB), why isn't the whole thing cached in RAM? Surely that would speed up performance. The transaction log is 241 MB. Seems kind of large in comparison -- has this not been committed? Is it a cause of performance degradation? I recall something about Recovery Models and log sizes somewhere in my travels, but not positive. Another thing: the old server was running SQL Express 2005. Not sure if that has any impact, but I tried changing the compatibility level from SQL 2000 to 2008, but that had no effect. Anyways, what else can I try here? Seems ridiculous to throw more virtual hardware at this thing. I know I/O is going to be rough on EBS volumes, but surely others are successfully running small .NET/SQL apps on reasonably priced instances?

    Read the article

  • Could Wordpress be used/extended as a medium-size ecommerce site? [migrated]

    - by Aphelion
    Is WordPress reliable enough, and could it be used with a proper plugin as a medium-sized ecommerce site? We are talking around 1500 products here; estimates say around 5 to 10 customers per day, usually returning ones, and some days none. I have a client who wants to sell online. We are situated in a country where people don't see the web as something serious enough. Still, they literally have a budget of pretty much nothing, and 1500 products to sell online. Magento, or any other open-source ecommerce platform, is out of the question; there are just no resources for starting something like that, or any time. The only way is something free, small and not resource-hungry, like WordPress. I have to fight to make it work with only what I have at my disposal. Also, if you can recommend any WordPress plugin for the job (WP e-Commerce?), I will be very thankful. I guess something paid, up to a maximum of $50, would work too.

    Read the article

  • Can plugins loaded with MEF resolve their own internal dependencies with the same MEF container for

    - by Dave
    From my experimentation, I think the answer is "kind of", but I could have made a mistake. I have an application that loads appliance plugins with MEF. That part is working fine. Now let's say that my BlenderAppliance wants to resolve several of its dependencies with MEF, which each implement IApplianceFeature. I've just used the ImportMany attribute to my plugin. I made sure to create the plugin using MEF so that the Imports work properly. I said "kind of" because some of the plugin's internals (i.e. the model) are loading with MEF just fine, but the IApplianceFeatures aren't. The difference here is that the IApplianceFeatures are themselves, assemblies. And at the moment, they are in one folder above that of the plugin itself, i.e. + application folder | IApplianceFeature1.dll | IApplianceFeature2.dll +---+ plugin folder | BlenderAppliance.dll Now if my application uses an AggregateCatalog to load the "." and ".\plugins" folders, why doesn't it ever load the IApplianceFeature assemblies for me? Is it possible / advisable to have the plugin create its own MEF container to resolve its dependencies, or does really nasty stuff happen? If you have any stories about this scenario, please share. :)

    Read the article

  • Class Plugins in PHP?

    - by YuriKolovsky
    I just got some more questions while learning PHP: does PHP implement any built-in plugin system, so that a plugin would be able to change the behavior of a core component? For example, something like this works:
    include 'core.class.php';
    include 'plugin1.class.php';
    include 'plugin2.class.php';
    new plugin2;
    where core.class.php contains
    class core {
        public function coremethod1(){ echo 'coremethod1'; }
        public function coremethod2(){ echo 'coremethod2'; }
    }
    plugin1.class.php contains
    class plugin1 extends core {
        public function coremethod1(){ echo 'plugin1method1'; }
    }
    plugin2.class.php contains
    class plugin2 extends plugin1 {
        public function coremethod2(){ echo 'plugin2method2'; }
    }
    This would be ideal, if not for the problem that the plugins are now dependent on each other, and removing one of the plugins:
    include 'core.class.php';
    //include 'plugin1.class.php';
    include 'plugin2.class.php';
    new plugin2;
    breaks the whole thing... Are there any proper methods of doing this? If there are not, then I might consider moving to a different language that supports this... Thanks for any help.

    Read the article

  • Initializing own plugins

    - by jgauffin
    I've written a few jQuery plugins for my client. I want to write a function that would initialize each plugin that has been loaded. Example:
    <script src="/Scripts/jquery.plugin1.min.js" type="text/javascript"></script>
    <script src="/Scripts/jquery.plugin2.min.js" type="text/javascript"></script>
    <script src="/Scripts/jquery.initializer.min.js" type="text/javascript"></script>
    Here I've loaded plugin1 and plugin2. Plugin1 should be attached to all links with class name 'ajax' and plugin2 to all divs with class name 'test2':
    $(document).ready(function(){
        $('a.ajax').plugin1();
        $('div.test2').plugin2();
    });
    I know that I can use jQuery().pluginName to check if a plugin exists, but I want a leaner way. Can all loaded plugins register an initialize function in an array or something like that, which I can iterate through and invoke in document.ready? Like:
    // in plugin1.js
    myCustomPlugins['plugin1'] = function() {
        $('a.ajax').plugin1();
    };
    // and in the initializer script:
    myCustomPlugins.each(function() {
        this();
    });
    I guess it all boils down to three questions: How do I use a jQuery "global" array? How do I iterate through that array to invoke the registered methods? Is there a better approach?

    Read the article

  • How to move Mdadm RAID drive (EBS based) to different AWS Instance

    - by Stanley
    We have a media-rich web application that is hosted on AWS. We have several Web Servers and we have an NFS server. On the NFS server (Linux server) we have several EBS volumes that are mounted and we've used mdadm to implement the different mounted volumes as a single RAID volume. The Web Servers simply access the NFS storage through a mount point. Amazon has now let us know that they will be performing power maintenance on this server in a couple of days time. Since all our media is on here it would render our site unusable for the hours while Amazon is working on it. We want to try and prevent this downtime. I was thinking that we can prevent server downtime by perhaps setting up a new server temporarily and attaching the EBS drives (raid volume) to that server and have our web servers point there during maintenance. This is a very high risk operation since this involves several terabytes of our production data. What would be the safe way to move over our logical raid drive (md0) to a new amazon instance? I was hoping that I could start with building the new server, mounting the ebs volumes and assembling the RAID partition using mdadm --assemble --scan before unmounting from the existing instance so that I can first test that everything works and thus having it mounted on two instances at the same time, but I don't believe that is possible with the way that filesystems work. How do I move a Linux software RAID to a new machine? suggests a way to move drives, but isn't really a cloud-based question. Perhaps there are simpler ways to prevent system downtime with our solution being hosted on the cloud? I have considered taking an EBS snapshot, but that tries to replicate all the many terabytes of mounted storage, so this is not a practical solution. Any ideas?
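
    The key constraint is that an EBS volume can only be attached to one instance at a time, so the array cannot be mounted on both machines for a dress rehearsal; the workable order is to stop NFS, unmount and stop the array (mdadm --stop /dev/md0) on the old instance, move the member volumes, then reassemble on the new one. A hedged boto3 sketch of the volume moves, with placeholder IDs:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")    # assumed region

    raid_volumes = ["vol-0aaaa111", "vol-0bbbb222", "vol-0cccc333"]   # members of md0
    old_instance = "i-0aaaaaaaaaaaaaaaa"
    new_instance = "i-0bbbbbbbbbbbbbbbb"
    devices = ["xvdf", "xvdg", "xvdh"]

    for vol in raid_volumes:
        ec2.detach_volume(VolumeId=vol, InstanceId=old_instance)
    ec2.get_waiter("volume_available").wait(VolumeIds=raid_volumes)

    for vol, dev in zip(raid_volumes, devices):
        ec2.attach_volume(VolumeId=vol, InstanceId=new_instance, Device=dev)

    # On the new instance: mdadm --assemble --scan, then mount /dev/md0 and re-export NFS.

    If a test run is wanted without touching the live array, an alternative is to snapshot every member volume, create new volumes from the snapshots, and assemble a copy of the array on the temporary instance; that validates the procedure while leaving production untouched.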

    Read the article
