Search Results

Search found 3025 results on 121 pages for 'amazon ec2'.

  • Looking for suggestions for hosting Windows 2000 Server in the cloud / VPS / etc?

    - by JohnyD
    I have a Windows 2000 Server, currently virtualized in Hyper-V, that I would like to get running off-site as a backup (cloud, VPS, etc). You can't run your own hypervisor inside EC2, and I'm fairly certain there are no Server 2000 AMIs floating about (correct me if I'm wrong!). If anyone has a recommendation on how I can get a virtualized Windows 2000 Server running in a secure, remote environment, I would be grateful. As far as locations go, I'd be interested in North America as well as Australia and Europe. In a nutshell, we're ploughing our way out of a legacy codebase, and this server is the last that remains of the legacy apps. However, it is still very much used by our clients. Everything is backed up each night (data, images, etc) to tape, which is then taken offsite. However, in the event of a fire I would love to have a backup legacy server to point DNS records to, so that while I am rebuilding from the ashes our services would already be available. It would save a lot of time and make my managers all the more happy (and that's what it's all about, right? :D). Thank you all for your suggestions. Please let me know if I've left out any important information. Additional info: the legacy codebase does not function properly on Server 2003.

  • Use the java-ffmpeg wrapper, or simply use the Java Runtime to execute ffmpeg?

    - by user156153
    I'm pretty new to Java and need to write a program that listens for video conversion instructions and converts the video when a new instruction arrives (the instructions are stored in Amazon SQS, but that's irrelevant to my question). I'm facing a choice: either use the Java Runtime to exec the ffmpeg conversion (as from the command line), or use a ffmpeg wrapper written in Java: http://fmj-sf.net/ffmpeg-java/getting%5Fstarted.php I'd much prefer using the Java Runtime to exec ffmpeg directly and avoid the java-ffmpeg wrapper, as I'd have to learn the library. So my question is: are there any benefits to using the java-ffmpeg wrapper over exec'ing ffmpeg directly via Runtime? I don't need ffmpeg to play videos, just to convert them. Thanks
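
    For reference, a minimal sketch of the exec approach using ProcessBuilder (generally nicer than Runtime.exec for argument handling), assuming ffmpeg is on the PATH; the file names are placeholders:

        import java.io.IOException;

        public class FfmpegConverter {

            // Runs ffmpeg as an external process and returns its exit code.
            // Assumes ffmpeg is on the PATH; file names here are hypothetical.
            public static int convert(String input, String output)
                    throws IOException, InterruptedException {
                ProcessBuilder pb = new ProcessBuilder("ffmpeg", "-i", input, "-y", output);
                pb.redirectErrorStream(true); // merge stderr into stdout
                Process p = pb.start();
                // Drain the output so ffmpeg can't block on a full pipe buffer.
                p.getInputStream().transferTo(System.out);
                return p.waitFor(); // 0 means success
            }

            public static void main(String[] args) throws Exception {
                System.out.println("ffmpeg exited with " + convert("in.avi", "out.mp4"));
            }
        }

    The main thing a wrapper buys you is in-process access to frames and codecs; for plain file-to-file conversion, driving the binary like this is usually simpler and insulates you from ffmpeg's unstable C API.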

  • Embedding EasyVideoPlayer Code into WordPress Theme - Video not showing

    - by bbacarat
    I'm attempting to place some embed code into a premium WordPress theme. NOTE: I'm not great when it comes to PHP. The embed code is produced by a video player called EasyVideoPlayer. (Basically it allows me to use Amazon S3 and gives me feedback on when people stop watching the video.) This is the embed code I have:

        _evpInit('ZXh0cmEtbW9uZXktZnJvbS1ob21lLTEubW92');

    I've opened the theme's index.php file and placed this embed code between the tags that mark the area of the website where I want it to show up. However, the video is not showing. Setting the particular theme and video player aside, would you expect PHP to accept what I've done, or is this not the way to go about adding embed code? NOTE: I've contacted both the premium theme support at Woothemes.com and the video player support at EasyVideoPlayer.com; however, both tend to stop at the point where another paid product is involved! Grrreat. The website is www.extramoneyfromhome.co.uk

  • processes slow down after running actively for some time

    - by Yervand Aghababyan
    I have several cron jobs running on an Ubuntu machine. Each one does some pretty heavy-load stuff: the cron jobs parse files, and the bigger the file, the longer it takes to parse. The strange thing is that if I make the files too big (like 30MB), the script kind of hangs. It starts processing them really enthusiastically, but after some time (something like 5-10 minutes) the CPU usage of the process drops a lot and it gets into some "zombie" state. If prior to this the process in htop was using 70-80% of the CPU, after the drop occurs it slows down to something like 5-10%, and the load average drops as well. The status of the processes sometimes changes to D in htop, which stands for uninterruptible sleep (usually waiting on disk I/O), not an actual zombie state. Today I noticed the same behavior in MySQL processes when executing heavy queries (a query took something like 4 hours to execute). The cron jobs are mostly PHP, and during their processing most of the CPU is eaten by the PHP process and not MySQL, so I think the issue is not with a specific language/program but with the way the processes are "managed". The only other place I've seen similar behavior was on my Amazon EC2 micro instance, where after some aggressive use of CPU the CPU quota would take effect and everything would slow down dramatically. But this is a dedicated machine running Ubuntu. What may be the cause?

  • Calling a void method asynchronously - event-based pattern, or another method?

    - by alex
    I have a class that basically stores files in Amazon S3. Here is what it looks like (simplified):

        public class S3FileStore
        {
            public void PutFile(string ID, Stream content)
            {
                // do stuff
            }
        }

    In my client app, I want to be able to call:

        var s3 = new S3FileStore();
        s3.PutFile("myId", File.OpenRead(@"C:\myFile1"));
        s3.PutFile("myId", File.OpenRead(@"C:\myFile2"));
        s3.PutFile("myId", File.OpenRead(@"C:\myFile3"));

    I want this to be an asynchronous operation - I want the S3FileStore to handle it (I don't want my caller to have to execute PutFile asynchronously, so to speak), but I want to be able to trap exceptions / tell whether the operation completed for each file. I've looked at event-based async calls, especially this: http://blogs.windowsclient.net/rendle/archive/2008/11/04/functional-shortcuts-2-event-based-asynchronous-pattern.aspx However, I can't see how to call my void PutFile method that way. Are there any better examples?
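
    One option that fits the "fire many, observe each" requirement without the full event-based pattern: queue each upload on the thread pool and surface per-file completion or failure through an event. A modern Task-based sketch under that assumption (the wrapper class and the OnFileStored event are hypothetical names, not part of the original code):

        using System;
        using System.IO;
        using System.Threading.Tasks;

        public class AsyncS3FileStore
        {
            private readonly S3FileStore inner = new S3FileStore();

            // Raised once per file: a null exception means success.
            public event Action<string, Exception> OnFileStored;

            public void PutFileAsync(string id, Stream content)
            {
                Task.Run(() =>
                {
                    try
                    {
                        inner.PutFile(id, content);     // the existing synchronous call
                        OnFileStored?.Invoke(id, null); // completed OK
                    }
                    catch (Exception ex)
                    {
                        OnFileStored?.Invoke(id, ex);   // trap the per-file failure
                    }
                    finally
                    {
                        content.Dispose();              // we own the stream once handed in
                    }
                });
            }
        }

    Usage would look like: store.OnFileStored += (id, ex) => Console.WriteLine(ex == null ? id + " ok" : id + " failed: " + ex.Message); - the caller still makes plain void-looking calls, and error handling stays inside the store.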

  • IWebBrowser2: how to force links to open in new window?

    - by Rob McAfee
    The MSDN documentation on WebBrowser Customization explains how to prevent new windows from being opened and how to cancel navigation. In my case, my application is hosting an IWebBrowser2 but I don't want the user to navigate to new pages within my app. Instead, I'd like to open all links in a new IE window. The desired behavior is: user clicks a link, and a new window opens with that URL. A similar question was asked and answered here and rather than pollute that answered post, it was suggested I open a new discussion. The members on the related post suggested I should be able to do this by trapping DISPID_BEFORENAVIGATE2, setting the cancel flag, and writing code to open a new window, but I've found out that the browser control gets lots of BeforeNavigate2 events that seem to be initiated by scripts on the main page. For example, amazon.com fires BeforeNavigate2 events like crazy, and they are not a result of link invocation. Replies appreciated!

  • Is GAE Really GZipping My Content? Slow Response Times with GAE as CDN

    - by viatropos
    I am testing out Google App Engine as a free content delivery network, and it feels like it's taking a long time to serve my content. Why does this GAE page take, say, half a second to download, while a typical Stack Overflow page downloads much faster even with far more content? What am I missing here? All I have done is create an app and upload an image according to the tutorial, but the content seems to be served very slowly. Any suggestions? (I'm not considering Amazon or other CDNs right now, just looking for help with GAE.) Note: I am using Safari when I visit those links; maybe Safari is causing problems?

  • How to keep subtree removal (`rm -rf`) from starving other processes for Disk I/O?

    - by David Eyk
    We have a very large (multi-GB) Nginx cache directory for a busy site, which we occasionally need to clear all at once. I've solved this in the past by moving the cache folder to a new path, making a new cache folder at the old path, and then rm -rfing the old cache folder. Lately, however, when I need to clear the cache on a busy morning, the I/O from rm -rf is starving my server processes of disk access, as both Nginx and the server it fronts for are read-intensive. I can watch the load average climb while the CPUs sit idle, and rm -rf takes 98-99% of disk I/O in iotop. I've tried ionice -c 3 when invoking rm, but it seems to have no appreciable effect on the observed behavior. Is there any way to tame rm -rf to share the disk more? Do I need to use a different technique that will take its cues from ionice? Update: the filesystem in question is an AWS EC2 instance store (the primary disk is EBS). The /etc/fstab entry looks like this:

        /dev/xvdb  /mnt  auto  defaults,nobootwait,comment=cloudconfig  0  2
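
    Worth checking first: ionice's idle class only has an effect under the CFQ I/O scheduler, which instance-store disks often don't use by default. A sketch of what to verify, plus an alternative that throttles the deletion itself regardless of scheduler (the device name is taken from the fstab entry above; the cache path is a placeholder):

        # ionice classes are honored only by CFQ; see which scheduler is active:
        cat /sys/block/xvdb/queue/scheduler    # e.g. "noop deadline [cfq]"

        # If CFQ is available, select it before relying on ionice:
        echo cfq | sudo tee /sys/block/xvdb/queue/scheduler
        sudo ionice -c 3 rm -rf /mnt/old-cache

        # Scheduler-independent alternative: delete in batches with pauses,
        # so Nginx gets a turn at the disk between bursts.
        n=0
        find /mnt/old-cache -type f -print0 |
        while IFS= read -r -d '' f; do
            rm -f -- "$f"
            n=$((n+1))
            [ $((n % 1000)) -eq 0 ] && sleep 1
        done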

  • MySQL replace() function, help with query (what chars do I escape?)

    - by jyoseph
    I am trying to update an old CMS where images were stored in /images/editor/; they are now stored in a bucket on Amazon S3. I'm trying to update the database using MySQL's replace(). I've done this in the past when replacing simple words, but now MySQL is reporting an error, I suspect because this is more than a simple word:

        UPDATE contents SET desc = replace(desc, '/images/editor/', 'http://s3.amazonaws.com/my_bucket/editor/')

    Do I need to escape the colon or the slashes? I've tried escaping with '\' to no avail. Can someone point me in the right direction? Thanks! Edit: here's the error I am getting, nothing too telling:

        error : You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'desc = replace(desc, '/images/editor', 'http://s3.amazonaws.com/app_navigator/ed' at line 1
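
    For what it's worth, the strings here need no escaping; the usual cause of exactly this error is that DESC is a reserved word in MySQL (as in ORDER BY ... DESC), so a column named desc must be quoted with backticks. A sketch, assuming the column really is named desc:

        UPDATE contents
        SET `desc` = REPLACE(`desc`, '/images/editor/', 'http://s3.amazonaws.com/my_bucket/editor/');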

  • How do you design a database to allow fast multicolumn searching?

    - by Fletcher Moore
    I am creating a real estate search from RETS data, but this is a general question. When you have a variety of columns that you would like the user to be able to filter their search results by, how do you optimize this? For example, http://www.charlestonrealestateguide.com/listings.php has 16 or so optional filters. Granted, he only has up to 11,000 entries (I have the same data), but I don't imagine the search is performed with just one giant WHERE ... AND ... AND ... clause. Or is this typically accomplished with one giant multicolumn index? Newegg, Amazon, and countless others also have cool and fast filtering systems for large amounts of data. How do they do it? And is there a database-optimization reason for the tendency to provide ranges instead of free-form inputs, or is that merely for user convenience?
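
    At this scale, the giant-WHERE approach is in fact common and works fine, provided the most selective columns are indexed; a single index covering all 16 filters can't help much, because a B-tree composite index narrows the scan only up to the first range condition. A sketch with a hypothetical listings table:

        -- Hypothetical schema: filter by area, bedrooms, and a price range.
        CREATE INDEX idx_listings_area_beds_price
            ON listings (area_id, bedrooms, price);

        -- The optimizer uses the index for the equality columns (area_id,
        -- bedrooms), then scans the price range within that slice; any
        -- columns after the first range condition no longer narrow the scan.
        SELECT id, address, price
        FROM listings
        WHERE area_id = 42
          AND bedrooms = 3
          AND price BETWEEN 200000 AND 300000;

    Predefined ranges (rather than free-form inputs) also help the database: a fixed set of buckets keeps queries uniform, which makes them cacheable and index-friendly, so the convenience argument and the optimization argument point the same way.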

  • Why are SIP calls via my server silent?

    - by Archcode
    I have a FreeSWITCH SIP server up and running. It has a public IP and sits behind 1-to-1 NAT (it's actually an Amazon EC2 instance). I can connect to it and make a call from one endpoint to another (namely, my Android device to my PC and vice versa), and signals are sent with no problems (call, answer, hangup, etc). Unfortunately, and this is what drives me crazy, that's all: no audio gets through, and no video either. The server does not throw errors, though it reports many retransmissions that look like this:

        switch_rtp.c:915 [ zrtp engine]: WARNING! HELLO Max retransmissions count reached (20 retries). ID=15

    The codecs are set up correctly (the same config worked locally on my LAN). A NAT/firewall on the client side may be the problem: the signaling gets through (perhaps because it uses a fixed port, while the media streams run on random ones - that is currently my best bet). STUN/TURN/ICE settings on the client seem to have no effect. The endpoints sit behind symmetric NAT. On the server there are no iptables rules, and the security group is set as suggested here: http://wiki.freeswitch.org/wiki/Firewall Help, please. How can I make it work, or at least diagnose what's wrong?

  • Best way to perform authentication on every request

    - by Nik
    Hello. In my ASP.NET MVC 2 app, I'm wondering about the best way to implement this: for every incoming request I need to perform custom authorization before allowing the file to be served. (This is based on headers and the contents of the query string - if you're familiar with how Amazon S3 does REST authentication, exactly that.) I'd like to do this in the most performant way possible, which probably means as light a touch as possible, with IIS doing as much of the actual work as possible. The service will need to handle GET requests, as well as write new files coming in via POST/PUT requests. The requests are for arbitrary files, so it could be:

        GET http://storage.foo.com/bla/egg/foo18/something.bin
        POST http://storage.foo.com/else.txt

    Right now I've half-implemented it using an IHttpHandler which handles all routes (with routes.RouteExistingFiles = true), but I'm not sure if that's the best approach, or if I should be hooking into the lifecycle somewhere else. Many thanks for any pointers. (IIS7)
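
    For the light-touch goal, one alternative worth weighing is an IHttpModule, which sees every request early in the pipeline before any handler runs. A rough sketch of the shape only (ValidateSignature is a hypothetical stand-in for the S3-style HMAC comparison, not a real API):

        using System.Web;

        public class SignatureAuthModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                // Runs for every request, before the handler executes.
                app.AuthenticateRequest += (sender, e) =>
                {
                    var ctx = ((HttpApplication)sender).Context;
                    string auth = ctx.Request.Headers["Authorization"];

                    if (!ValidateSignature(auth, ctx.Request)) // hypothetical HMAC check
                    {
                        ctx.Response.StatusCode = 403;
                        ctx.Response.End(); // short-circuit; the file is never served
                    }
                };
            }

            public void Dispose() { }

            private static bool ValidateSignature(string header, HttpRequest request)
            {
                // Recompute the S3-style signature from the headers and query
                // string and compare it to the one presented; omitted here.
                return header != null;
            }
        }

    Compared with a catch-all IHttpHandler, a module leaves the actual file serving (GET) to IIS's static-file handler, which is usually the fastest path; only the auth check runs in managed code.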

  • What is the current standard for authenticating Http requests (REST, Xml over Http)?

    - by CodeToGlory
    The standard should solve the following authentication challenges: replay attacks, man-in-the-middle attacks, plaintext attacks, dictionary attacks, brute-force attacks, and spoofing by counterfeit servers. I have already looked at Amazon Web Services, and that is one possibility. More importantly, there seem to be two common approaches: use an apiKey, encoded in a similar fashion to AWS but sent as a POST parameter of the request; or use the HTTP Authorization header with an AWS-like signature, typically obtained by signing a date stamp with a shared secret. The signature is therefore passed either as an apiKey or in the HTTP Authorization header. I would like the community to weigh both options - many of you may have used one or both - and I would also like to explore other options that I am not considering. I would also use HTTPS to secure my services.
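
    A minimal sketch of the AWS-style signing scheme described above, in Python; the header layout and the canonical string are illustrative choices, not a specific standard:

        import base64
        import hashlib
        import hmac
        from datetime import datetime, timezone

        SHARED_SECRET = b"not-a-real-secret"  # provisioned out of band, per client

        def sign_request(method: str, path: str) -> dict:
            # Canonical string: method, path, and a date stamp.
            date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
            to_sign = f"{method}\n{path}\n{date}".encode()
            digest = hmac.new(SHARED_SECRET, to_sign, hashlib.sha1).digest()
            signature = base64.b64encode(digest).decode()
            # Because the date is inside the signed string, the server can
            # reject stale requests, which is the usual replay-attack defence.
            return {"Date": date, "Authorization": f"HMAC client-id:{signature}"}

        print(sign_request("GET", "/orders/42"))

    The secret never travels on the wire, which is the main advantage over sending an apiKey as a POST parameter; combined with HTTPS, this also covers the man-in-the-middle and counterfeit-server cases.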

  • PHP: open a file download dialog

    - by Hugh Valin
    I have an mpg file hosted on Amazon S3 that I want to link to from a page I have, so the user can download it from the page. The page contains a link to the file (anchor text "bla bla"). The link works when I right-click it and choose "Save Target As", but I would like it to also work on a left click, opening a file download dialog. Right now, a left click leads to a page that plays the video directly (in Firefox) or just won't load (in Internet Explorer). I am working in PHP; does anyone have a clue why this happens?
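
    What usually decides play-versus-download is the Content-Disposition response header. A sketch of a small PHP pass-through that forces the dialog; the URL and filename are placeholders, and readfile() on a URL requires allow_url_fopen (setting the header directly on the S3 object's metadata would avoid proxying entirely):

        <?php
        // download.php - force a download dialog instead of inline playback.
        $url  = 'http://s3.amazonaws.com/my_bucket/video.mpg'; // placeholder
        $name = 'video.mpg';

        header('Content-Type: application/octet-stream');
        header('Content-Disposition: attachment; filename="' . $name . '"');

        // Stream the S3 object through this script to the browser.
        readfile($url);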

  • Where can I find boost::fusion articles, examples, guides, tutorials?

    - by Kyle
    I am going to go ahead and shamelessly duplicate this question, because the accepted answer is essentially "nope, no guides" and it's been nearly a year now since it was asked. Does anyone know of any useful articles, guides, tutorials, etc. for boost::fusion besides the bare-bones documentation on boost.org? (Which I'm sure is great as a reference once one has learned the library.) I'm completely open to, say, a link to a book on Amazon. I searched for it myself just now, but all I came up with was green tea. The top links on Google aren't much better.

  • GWT HTML widget security risks

    - by h2g2java
    In the GWT javadoc we are advised: "If you only need a simple label (text, but not HTML), then the Label widget is more appropriate, as it disallows the use of HTML, which can lead to potential security issues if not used properly." I would like to be educated/reminded about these security susceptibilities, ideally with a description of the mechanisms behind the risks. Are the susceptibilities equally potent on GAE vs Amazon vs my home Linux server? Are they equally potent across browser brands? Thank you.
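
    The risk in question is script injection (XSS): the HTML widget renders its string as live markup, while Label escapes it and shows it as text. A sketch, assuming userInput arrives from an untrusted source:

        import com.google.gwt.core.client.EntryPoint;
        import com.google.gwt.user.client.ui.HTML;
        import com.google.gwt.user.client.ui.Label;
        import com.google.gwt.user.client.ui.RootPanel;

        public class XssDemo implements EntryPoint {
            public void onModuleLoad() {
                // Untrusted text, e.g. echoed from a form field or a server response.
                String userInput = "<img src='x' onerror='alert(document.cookie)'>";

                // Label sets the string as text: the markup is escaped and
                // displayed literally, so nothing executes.
                RootPanel.get().add(new Label(userInput));

                // HTML injects the string into the DOM as markup: the onerror
                // handler fires, running attacker script in the page's origin.
                RootPanel.get().add(new HTML(userInput));
            }
        }

    The mechanism is purely client-side, so it is the same whatever server hosts the app; differences between browsers are marginal, since all of them execute injected handlers once the markup reaches the DOM.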

  • Windows 2008 Server can't connect to FTP

    - by stivlo
    I have Windows Server 2008 R2, and I am trying to install FTP services. My problem is that I can't connect from outside; FileZilla complains with "Error: Connection timed out / Error: Could not connect to server". Here is what I did. With the Server Manager, I installed the roles FTP Server, FTP Service and FTP Extensibility. In Internet Information Services 7.5, I chose Add FTP Site, enabled Basic Authentication, and allowed a user to connect with Read and Write. In FTP Firewall Support on the main server, just after the start page, I set the Data Channel Port Range to 49100-49250 and set the external IP address to the one I see from outside. If I click on FTP IPv4 Address and Domain Restrictions and then Edit Feature Settings, I see that access for unspecified clients is set to Allow, so I click OK without changing those defaults. In FTP SSL Policy, I set Require SSL connections; the certificate is self-signed. I tried to connect with FileZilla from the same host and it works; however, it doesn't work remotely, as I said above. I've enabled pfirewall.log, but apparently nothing gets logged. The server is on Amazon EC2, and in the security group's inbound firewall rules I've allowed connections to port 21 and ports 49100-49250 from everywhere. What else should I be checking to solve the problem?

  • Nagios DNX plugins

    - by danneh3826
    I'm toying with the idea of multiple Nagios instances set up to monitor our infrastructure. I've looked at the various methods of distributed Nagios checks, and I think DNX comes out the closest. DNX handles failure of worker nodes; that's fine. What happens if the main DNX server fails, though? Is there a way to replicate the server too? I'm using AWS EC2 primarily, so I can utilise Elastic Load Balancing for the web UI, but I need to be able to handle failover of the AZ where the monitoring server lives, and essentially have a second server pick up the checking load (active/passive or active/active, so long as it doesn't fail completely). The other thing I'm trying to solve is an issue with routing. What I'd like is to have multiple nodes report a fault before Nagios confirms it as critical. Not for the NRPE checks, as they're pretty self-explanatory, but for things more like check_ping. I often have routing issues out of AWS to certain datacenters, so Nagios can report bad/no ping/timeout as a critical issue even though the machine in question is working fine. Would it be possible to have a setup where one worker complains that a service check is critical, and have a second worker node (positioned in another datacenter/AZ) also report the service as critical before the Nagios central server issues a critical alert? I realise I might be asking a bit much (how far down the line do you go setting up failover systems before it starts to get ridiculous), but surely someone must have thought of this scenario when developing DNX?

  • How to reduce celeryd memory consumption?

    - by Gringo Suave
    I'm using celery 2.5.1 with Django on a micro EC2 instance with 613MB of memory, and as such I have to keep memory consumption down. Currently I'm using it only for the scheduler "celery beat", as a web interface to cron, though I hope to use it for more in the future. I've noticed it is the biggest consumer of memory on my micro machine, even though I have configured the number of workers to one. I don't have many other options set in settings.py:

        import djcelery
        djcelery.setup_loader()

        BROKER_BACKEND = 'djkombu.transport.DatabaseTransport'
        CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
        CELERY_RESULT_BACKEND = 'database'
        BROKER_POOL_LIMIT = 2
        CELERYD_CONCURRENCY = 1
        CELERY_DISABLE_RATE_LIMITS = True
        CELERYD_MAX_TASKS_PER_CHILD = 20
        CELERYD_SOFT_TASK_TIME_LIMIT = 5 * 60
        CELERYD_TASK_TIME_LIMIT = 6 * 60

    Here are the details via top:

        PID   USER   NI  CPU%  VIRT  SHR   RES  MEM%  Command
        1065  wuser  10  0.0   283M  4548  85m  14.3  python manage_prod.py celeryd --beat
        1025  wuser  10  1.0   577M  6368  67m  11.2  python manage_prod.py celeryd --beat
        1071  wuser  10  0.0   578M  2384  62m  10.6  python manage_prod.py celeryd --beat

    That's about 214MB of memory (and not much shared) to run a cron job occasionally. Have I done anything wrong, or can this be reduced about ten-fold somehow? ;) Update: here's my upstart config:

        description "Celery Daemon"
        start on (net-device-up and local-filesystems)
        stop on runlevel [016]
        nice 10
        respawn
        respawn limit 5 10
        chdir /home/wuser/wuser/
        env CELERYD_OPTS=--concurrency=1
        exec sudo -u wuser -H /usr/bin/python manage_prod.py celeryd --beat --concurrency=1 --loglevel info --logfile /var/tmp/celeryd.log

    Update 2: I notice there is one root process, one user child process, and two grandchildren from that, so I don't think it is a matter of duplicate startup:

        root   34580  1556   sudo -u wuser -H /usr/bin/python manage_prod.py celeryd
        wuser  577M   67548  +- python manage_prod.py celeryd --beat --concurrency=1
        wuser  578M   63784  +- python manage_prod.py celeryd --beat --concurrency=1
        wuser  271M   76260  +- python manage_prod.py celeryd --beat --concurrency=1

  • pitfalls with mixing storage engines in mysql with django?

    - by Dave Orr
    I'm running a Django system over MySQL in Amazon's cloud, and the database default is InnoDB. But now I want to put a fulltext index on a couple of tables for searching, which evidently requires MyISAM. The obvious solution is to just tell MySQL to ALTER TABLE to MyISAM, but are there going to be any issues with that? One that comes to mind is that I'll have to remember to do it any time I rebuild the database, which should theoretically be rare, but there doesn't seem to be a way to tell Django to set the storage engine at the table level. I guess I could write a migration (we use South). Any other things I might be missing? What could possibly go wrong?
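
    If you go that route, the switch plus the index is two statements; a sketch, assuming a hypothetical articles table with a body column. The main trade-off to keep in mind is that MyISAM gives up transactions, foreign keys, and row-level locking, so writes to the converted tables fall outside any InnoDB transaction:

        -- Convert the table that needs fulltext search (MyISAM-only in older MySQL).
        ALTER TABLE articles ENGINE = MyISAM;

        -- Add the fulltext index and query it with MATCH ... AGAINST.
        CREATE FULLTEXT INDEX ft_articles_body ON articles (body);

        SELECT id, title
        FROM articles
        WHERE MATCH(body) AGAINST ('amazon ec2' IN NATURAL LANGUAGE MODE);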

  • REST authentication: S3-like HMAC-SHA1 signature vs symmetric data encryption.

    - by coulix
    Hello stackers. I was arguing for an S3-like approach - an authorization hash with a secret key as the seed and some data from the request as the message, signed with HMAC-SHA1 (the Amazon S3 way) - while another developer supported symmetric encryption of the data with a secret key known by the emitter and the server. What are the advantages of signing data with HMAC-SHA1 over using a symmetric key, other than the fact that with the former we do not need to encrypt the username or password? Which would be harder to break: symmetric encryption, or SHA1 hashing a la S3? If all the big players are using OAuth and similar schemes without symmetric encryption, surely there are obvious advantages - what are they?

  • How do I get to the bottom of network latency and bandwidth issues

    - by three_cups_of_java
    I recently moved two blocks south. That move took me from Comcast to Broadstripe (high-speed cable internet providers). Comcast was pretty good. Broadstripe sucks. I called them on the phone, and they basically brushed me off (politely). I want to come to them with some numbers, so I can say more than just "it's really slow". I still have access to my old Comcast service, so I can run the tests using both providers. Here's what I'm seeing with my new Broadstripe service: 1) when I browse to most sites, there is a long delay (5-10 seconds) before the page starts loading in my browser; 2) the speed test tells me I have 12 megs down (bullshit); 3) I have a server at my office, and when I downloaded some files from it just now (using scp on the command line), it reported 3.5KB/s. I'm an experienced programmer and spend most of my days on the command line and in vim. Networking, however, is not a strong point. I've played around with traceroute, but I'm not sure if that's the right tool to use. I have access to servers all over the country (I would just use Amazon EC2 to set up a test server), and I prefer to use Ubuntu for my testing. How can I come up with some hard numbers to show Broadstripe how crappy their service is?
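
    A few commands that turn this into hard numbers - run each over both providers and compare (the target host names are placeholders):

        # Per-phase latency: DNS lookup, TCP connect, first byte, total.
        curl -o /dev/null -s -w 'dns %{time_namelookup}s connect %{time_connect}s ttfb %{time_starttransfer}s total %{time_total}s\n' http://example.com/

        # Sustained throughput against your own office server.
        scp user@office-server:/path/to/100MB.bin /tmp/

        # Per-hop latency and packet loss, to show where the path degrades.
        mtr --report --report-cycles 60 example.com

    The 5-10 second delay before pages start loading points at the first number (DNS lookup or time-to-first-byte) rather than raw bandwidth, which is exactly the distinction a speed test hides.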

  • apache chokes after 300 connections

    - by john titus
    We have an Apache web server in front of Tomcat, hosted on EC2; the instance type is extra large with 34GB of memory. Our application deals with a lot of external web services, and we have one very lousy external web service which takes almost 300 seconds to respond to requests during peak hours. During peak hours the server chokes at just about 300 httpd processes:

        ps -ef | grep httpd | wc -l    # returns 300

    I have googled and found numerous suggestions, but nothing seems to work. The following is some configuration I have done, taken directly from online resources: I have increased the limits on max connections and max clients in both Apache and Tomcat. Here are the configuration details:

        # apache
        <IfModule prefork.c>
            StartServers        100
            MinSpareServers     10
            MaxSpareServers     10
            ServerLimit         50000
            MaxClients          50000
            MaxRequestsPerChild 2000
        </IfModule>

        # tomcat
        <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
                   connectionTimeout="600000" redirectPort="8443" enableLookups="false"
                   maxThreads="1500"
                   compressableMimeType="text/html,text/xml,text/plain,text/css,application/x-javascript,text/vnd.wap.wml,text/vnd.wap.wmlscript,application/xhtml+xml,application/xml-dtd,application/xslt+xml"
                   compression="on"/>

        # sysctl.conf
        net.ipv4.tcp_tw_reuse=1
        net.ipv4.tcp_tw_recycle=1
        fs.file-max = 5049800
        vm.min_free_kbytes = 204800
        vm.page-cluster = 20
        vm.swappiness = 90
        net.ipv4.tcp_rfc1337=1
        net.ipv4.tcp_max_orphans = 65536
        net.ipv4.ip_local_port_range = 5000 65000
        net.core.somaxconn = 1024

    I have been trying numerous suggestions, but in vain. How do I fix this? I'm sure an m2.xlarge server should serve more than 300 requests; I'm probably going wrong somewhere in my configuration. The server chokes only during peak hours, when there are 300 concurrent requests waiting for the [300-second-delayed] web service to respond. Please help.

  • How to avoid web server traffic peak resulting from iOS Newsstand app receiving a remote notification?

    - by thomers
    I'm developing an iOS Newsstand app. If it is suspended or not running and is connected to a WLAN, a Newsstand app can be triggered by a push remote notification to download the latest issue (in our case around 100MB) in the background. I'm using Urban Airship for delivery of the push broadcast. I'm now worried about many, many iOS devices hitting the web server for one big download more or less at the same time, because I expect the majority of the devices to receive the notification within a very short timeframe. Instead of a broadcast to all devices, should I rather send individual notifications to small batches of devices, spreading them out over a longer period of time? And/or would a CDN like Amazon CloudFront solve the issue more easily anyway?

  • Django: automatically import MEDIA_URL in context

    - by pistacchio
    Hi, as described here, one can set MEDIA_URL in settings.py (for example, I'm pointing to Amazon S3) and reference the files in templates via {{ MEDIA_URL }}. Since MEDIA_URL is not automatically in the context, one has to add it manually, so, for example, the following works:

        # views.py
        from django.shortcuts import render_to_response
        from django.template import RequestContext

        def test(request):
            return render_to_response('test.html', {},
                                      context_instance=RequestContext(request))

    This means that in each views.py file I have to add "from django.template import RequestContext", and in each response I have to explicitly specify context_instance=RequestContext(request). Is there a way to automatically (DRY) add MEDIA_URL to the default context? Thanks in advance.
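
    One DRY option is a template context processor - a function that injects variables into every RequestContext. A minimal sketch (the module path myapp/context_processors.py is an arbitrary choice):

        # myapp/context_processors.py
        from django.conf import settings

        def media_url(request):
            # Merged into the context of every template rendered
            # with a RequestContext.
            return {'MEDIA_URL': settings.MEDIA_URL}

    It is registered by appending 'myapp.context_processors.media_url' to the context-processors list in settings.py. Django also ships a ready-made media context processor that does exactly this (its import path varies by version), so enabling that may be all that's needed.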
