Search Results

Search found 13788 results on 552 pages for 'instance'.

Page 283 of 552

  • Amazon EC2 vs Dedicated server at Hetzner, what's the use for EC2?

    - by C-Blu
    After searching the web I still can't find a reason to use EC2. What's the point of scaling on EC2? "If you expect a huge burst in traffic," they say. OK, but what if you already have a couple of sites with good traffic and, for example, a medium reserved EC2 instance is not enough? You are paying $36.60 (medium reserved for 1 year) in EU (Ireland), plus traffic, plus optional expenses for databases and S3 if you use them. Of course, at some point while you are under $56.60-$66.10 you can optimize your hosting costs with Amazon EC2. But if instead you purchase an EX4 server from Hetzner, it will surpass your performance needs for a long time before you get massive traffic (or am I wrong?). CPU: i7-2600 quad-core (3.4-3.8 GHz); RAM: 16 GB; HDD: 2x3 TB SATA (6 Gbit/s) - I think the disk performance of a dedicated server is better than that of Amazon EBS; traffic: 10 TiB per month included. This is what you get from Hetzner for $56 (- 19% VAT) or $66 for EU residents. Please tell me, what's the reason to use Amazon? Which load won't a server from Hetzner handle that Amazon Auto Scaling will? Is the maintenance of a dedicated server vs. EC2 still the same? Or won't a hardware failure at Amazon ruin your EBS storage? I'm not yet at the level where I need expensive hosting, but I want to know beforehand, just to be sure whether Amazon's infrastructure is better than the pure performance of Hetzner's hardware.

    Read the article

  • xm console command is not working in XEN

    - by stillStudent
    I have the Xen 4.0.x.x RPM on CentOS. I have set it up and have many VMs on it. The problem is that when I execute the 'xm console' command for a guest from dom0, the command just hangs dom0; a 'y' comes up on the next line but nothing really happens. Is this a bug in Xen 4.0 that requires an upgrade, or can I tweak some configuration file in /etc/xen/ to make it work? I found the following at some site, but it's not working: in order to be able to log in to your domU from the console using 'xm create {your hostname}.cfg -c' (to set the root password for ssh, for instance, or to see more output than just kernel output when debugging), it may be necessary to add the following line to your /etc/xen/{your hostname}.cfg: extra='xencons=tty'. Is there any other way to solve it?
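
    For reference, a minimal sketch of where that line sits in a PV guest config file (all names, paths and devices below are illustrative, not taken from the setup above):

        # /etc/xen/myguest.cfg -- illustrative PV guest configuration
        name    = "myguest"
        memory  = 512
        kernel  = "/boot/vmlinuz-2.6-xen"
        ramdisk = "/boot/initrd-2.6-xen"
        disk    = [ 'phy:/dev/vg0/myguest,xvda,w' ]
        vif     = [ 'bridge=xenbr0' ]
        root    = "/dev/xvda1 ro"
        # route the guest console to a tty so 'xm console' / 'xm create -c' gets a login prompt
        extra   = 'xencons=tty'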

    Read the article

  • DRBD with MySQL

    - by tdimmig
    Question about using DRBD to provide HA for MySQL. I need to be sure that my backup MySQL instance will always be in a functional state when failover occurs. What happens, for example, if the primary dies partway through committing a transaction? Are we going to end up with data copied to the secondary that MySQL can't handle? Or what if the network goes away while the two are syncing and not all of the data makes it across? It seems like it's possible to get into a state where incomplete data on the secondary makes it impossible for MySQL to start up and read the database. Am I missing something?

    Read the article

  • Lost the shift+< and shift+> key combinations

    - by REA_ANDREW
    On my laptop I have somehow lost the shift+< and shift+> key combinations. Setup: Ubuntu 12.10 as the host OS; Windows 7 running in VirtualBox; synergyc running on the Linux host, connected to synergys running on my desktop machine, which is also running Ubuntu 12.10. The key mappings are fine in Linux. This happened following an install of VsVim inside Visual Studio 2012. Now, globally inside this particular VirtualBox instance, I can no longer use shift+< and shift+>. Funnily enough, when I go to the VS key mapping finder it shows me that shift+< is mapped to Ctrl+Shift+Alt+Z and shift+> is mapped to Ctrl+Shift+Alt+X. I have asked on Super User as this is also affecting the Windows environment. Any help greatly appreciated. TIA

    Read the article

  • Find out what resource is triggering bad password attempt?

    - by Craig Tataryn
    Background: I have a problem at work where I am constantly being locked out of my computer. We are in an environment that has a domain controller and we use Active Directory for authentication. By going through my normal workflow while on the phone with desktop support, we were able to track the bad password attempts that were causing the lockouts to an application: Eclipse. This is the application I use for software development. I immediately thought a cached password for our SVN server was the culprit; however, the desktop support person couldn't tell me which resource the password attempt was being made against (which URL, for instance). Question: Is there a way that I can monitor bad authentication requests made by an application on my desktop and find out which resource they are being attempted against?

    Read the article

  • Let's do the Time Warp again!

    - by Mike Dietrich
    Once you start reading about Daylight Saving Time (DST) changes in My Oracle Support, you'll find a lot of notes explaining this and that, back and forth. But sometimes there seems to be a bit too much information - and a lack of clear instructions. A customer once called this the "Time Zone Spaghetti": after reading MOS notes about DST for several hours he ended up back at the note where he had begun, still not clear about what to do. I usually use the scripts from MOS Note 977512.1, as you just have to exchange the DST version you are upgrading to, and it has everything you need to check and adjust the time zone data in the database - for instance, after applying the DST V18 patch to your database homes. As a reminder to myself when traveling, I have stored a copy of the script part of that note here - and please note that this is not an official Oracle version. Always read and check the original MOS Note 977512.1, as it may have changed in the meantime, may contain corrections, and has a lot more explanatory information than I could cover here. Credit goes to Gunter Vermeir from Oracle Support, who owns that MOS Note and has compiled all that useful stuff together. DST_prepare.sql DST_adjust.sql
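
    Before and after running those scripts it is worth confirming which time zone file the database is actually on; a quick check against the standard dictionary views (not part of the note's scripts) could look like this:

        -- time zone (DST) file and version currently used by the database
        SELECT * FROM v$timezone_file;

        -- DST upgrade state and primary/secondary versions tracked by DBMS_DST
        SELECT property_name, property_value
          FROM database_properties
         WHERE property_name LIKE 'DST_%';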

    Read the article

  • Setting up Amazon CloudWatch to get an alert when your server is down

    - by Saif Bechan
    I have an instance running on Amazon EC2 that I turned into a web server. Now I have been looking at CloudWatch, but I do not know if it is the correct tool for the job. Basically, I want to be informed when the server is down, for whatever reason: maybe the server got hacked, or it shut down for some reason - I want to get a notification for that. I have enabled CloudWatch and tried to set up an alarm, but I only see things like network in/out, CPU usage, and similar metrics. Now I do not know if these will do the trick.
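
    The stock EC2 metrics (CPU, network) only tell you the instance is busy, not that it is reachable, so on their own they rarely answer "is my server down?". One common approach - sketched below with the current AWS CLI and the EC2 StatusCheckFailed metric, which may postdate the original question - is an alarm that notifies an SNS topic when the instance fails its status checks (the instance ID and topic ARN are placeholders):

        aws cloudwatch put-metric-alarm \
          --alarm-name "web-server-down" \
          --namespace AWS/EC2 \
          --metric-name StatusCheckFailed \
          --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
          --statistic Maximum --period 60 --evaluation-periods 2 \
          --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold \
          --alarm-actions arn:aws:sns:eu-west-1:123456789012:ops-alerts

    A status-check alarm still won't catch an application-level failure (web server process dead while the instance stays healthy); an external HTTP check is usually layered on top for that.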

    Read the article

  • How can I design my classes for a calendar based on database events?

    - by Gianluca78
    I'm developing a web calendar in PHP (using Symfony2), inspired by iCal, for a project of mine. At the moment I have two classes: a class Calendar and a class CalendarCell. Here are the two classes' properties and method declarations: class Calendar { private $month; private $monthName; private $year; private $calendarCellList = array(); private $translator; public function __construct($month, $year, $translator) {} public function getCalendarCells() {} public function getMonth() {} public function getMonthName() {} public function getNextMonth() {} public function getNextYear() {} public function getPreviousMonth() {} public function getPreviousYear() {} public function getYear() {} private function calculateDaysPreviousMonth() {} private function calculateNumericDayOfTheFirstDayOfTheWeek() {} private function isCurrentDay(\DateTime $dateTime) {} private function isDifferentMonth(\DateTime $dateTime) {} } class CalendarCell { private $day; private $month; private $dayNameAbbreviation; private $numericDayOfTheWeek; private $isCurrentDay; private $isDifferentMonth; private $translator; public function __construct(array $parameters) {} public function getDay() {} public function getMonth() {} public function getDayNameAbbreviation() {} public function isCurrentDay() {} public function isDifferentMonth() {} } Each calendar day can include many events stored in a database. My question is: what is the best way to manage these events in my classes? I am thinking of adding an eventList property to CalendarCell and populating it with an array of CalendarEvent objects fetched from the database. This kind of solution doesn't allow other coders to reuse the classes without a database (because I would also have to inject at least a repository service) just to create and display a calendar... so maybe it would be better to extend CalendarCell (for instance into CalendarCellEvent) and add the database features there? I feel like I'm missing some crucial design pattern! Any suggestion will be very much appreciated!
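
    One way to keep the classes reusable without a database is to hide the event source behind a small interface and inject it. A rough sketch in the spirit of the code above (the interface and class names here are hypothetical, not from the original project):

        interface EventProviderInterface
        {
            /** @return CalendarEvent[] the events occurring on the given day */
            public function getEventsForDay(\DateTime $day);
        }

        // Used when no database (or no events at all) is available.
        class NullEventProvider implements EventProviderInterface
        {
            public function getEventsForDay(\DateTime $day)
            {
                return array();
            }
        }

        class Calendar
        {
            private $eventProvider;

            public function __construct($month, $year, $translator, EventProviderInterface $eventProvider = null)
            {
                $this->eventProvider = $eventProvider ?: new NullEventProvider();
                // ... existing construction logic, passing each CalendarCell the events for its day ...
            }
        }

    A Doctrine-backed provider can implement the same interface inside the Symfony2 application, while other users of the classes can pass an in-memory implementation - or nothing at all - and still build and render a calendar.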

    Read the article

  • Is wrapping third-party code the only solution to unit testing its consumers? [closed]

    - by Songo
    I'm doing unit testing, and in one of my classes I need to send mail from one of the methods, so using constructor injection I inject an instance of the Zend_Mail class, which is part of the Zend Framework. Now, some people argue that if a library is stable enough and won't change often, then there is no need to wrap it. So assuming that Zend_Mail is stable, won't change, and fits my needs entirely, I wouldn't need a wrapper for it. Now take a look at my class Logger, which depends on Zend_Mail: class Logger{ private $mail; function __construct(Zend_Mail $mail){ $this->mail=$mail; } function toBeTestedFunction(){ //Some code $this->mail->setTo('some value'); $this->mail->setSubject('some value'); $this->mail->setBody('some value'); $this->mail->send(); //Some } } However, unit testing demands that I test one component at a time, so I need to mock the Zend_Mail class. In addition, I'm violating the Dependency Inversion Principle, as my Logger class now depends on a concretion, not an abstraction. Now, is wrapping Zend_Mail the only solution, or is there a better approach to this problem? The code is in PHP, but the answers don't have to be. This is more of a design issue than a language-specific one.
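
    For comparison, a minimal sketch of what the wrapping approach could look like (the interface and adapter names are hypothetical; the Zend_Mail calls follow the ZF1 API):

        interface MailerInterface
        {
            public function send($to, $subject, $body);
        }

        // Thin adapter: the only class that touches the concrete library.
        class ZendMailer implements MailerInterface
        {
            private $mail;

            public function __construct(Zend_Mail $mail)
            {
                $this->mail = $mail;
            }

            public function send($to, $subject, $body)
            {
                $this->mail->setTo($to);
                $this->mail->setSubject($subject);
                $this->mail->setBodyText($body);
                $this->mail->send();
            }
        }

        class Logger
        {
            private $mailer;

            public function __construct(MailerInterface $mailer)
            {
                $this->mailer = $mailer;
            }

            public function toBeTestedFunction()
            {
                // ... some code ...
                $this->mailer->send('some value', 'some value', 'some value');
                // ... some code ...
            }
        }

    Logger now depends on an abstraction and is trivial to unit test with a mock MailerInterface; the adapter is the only piece left touching Zend_Mail, and it is simple enough to cover with a single integration test.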

    Read the article

  • Prevent URLs from specific domains from being saved in Firefox history

    - by noam
    I want to prevent or block URLs from specific domains from being saved to or shown in my history. I want to be able to visit these websites normally, just without having them saved and without having to use private or incognito mode. For instance, I don't want any of Google's search result pages to be saved in my history, since otherwise, when I use the awesome bar, I get a lot of Google search result URLs, which are of no use to me. Of course I can keep deleting them, but I would like a way to specify that any URL starting with www.google.com shouldn't be saved.

    Read the article

  • Soft links between samba profile and profile.V2

    - by Alex Rose
    I am currently using a Samba server to host Windows sessions. At the moment every machine runs Windows XP, but in an upcoming hardware upgrade I will have new Windows 7 machines installed. The problem is that the user directory layout changed from XP to 7 (for instance, 'My Documents' became 'Documents'). To handle that, Samba created a folder called profile.V2, which contains the files and folders for Windows 7. Now I would like to link the two profile folders ('profile' from Windows XP, 'profile.V2' from Windows 7) so that a user can log on from an XP or a 7 machine and still have access to the same files. I tried creating soft links between folder pairs ('Documents' - 'My Documents') and it seems to work. My question is: is this likely to create issues in the future?

    Read the article

  • Remote I/O costs with a Content Delivery Network

    - by x711Li
    As far as I know, the time needed to scan a directory and the number of files in that directory are correlated due to I/O costs. Would the administrative cost of placing the files in a hashed directory tree for uploading/downloading files through a CDN API be worth it for the added efficiency? For instance, given the filename foo.mp3, its MD5 hash is 10ebb1120767e9de166e0f5905077cb1. Thus, storing foo.mp3 as ./10/eb/foo.mp3 would allow for fewer files per directory (MD5 output is hexadecimal, so two characters per level give 16^2 = 256 root directories with 256 subdirectories each, and little chance of hash collision). Considering the directories themselves are not loaded, would the I/O costs of directory scanning still exist with direct uploading/downloading?
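
    As a rough sketch, the path derivation described above is only a couple of lines (PHP, matching the MD5 example given):

        function shardedPath($filename)
        {
            $hash = md5($filename);          // "10ebb1120767e9de166e0f5905077cb1" for foo.mp3
            return substr($hash, 0, 2) . '/' // first level:  "10"
                 . substr($hash, 2, 2) . '/' // second level: "eb"
                 . $filename;                // => "10/eb/foo.mp3"
        }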

    Read the article

  • How do I fix Nginx config to work with multiple hosts of Unicorn?

    - by fred deAlmeida
    I have no problem instantiating multiple instances of Unicorn on different Unix sockets and ports; it works fine if I hit url:port directly. My problem is correctly formatting nginx.conf to allow multiple upstream definitions. Whatever I do does not seem to work: one instance works fine, but multiple instances give me an '"upstream" directive is not allowed here' error. I am using the base nginx sample from the Unicorn site and doubling up the upstream section with different names; each is part of the http block. Any help would be amazing!
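
    The usual cause of that particular error is an upstream block that ends up inside a server block rather than directly under http. A stripped-down sketch of the intended layout (socket paths and hostnames below are illustrative):

        http {
            upstream app_one {
                server unix:/tmp/unicorn.app_one.sock fail_timeout=0;
            }
            upstream app_two {
                server unix:/tmp/unicorn.app_two.sock fail_timeout=0;
            }

            server {
                listen 80;
                server_name one.example.com;
                location / {
                    proxy_set_header Host $http_host;
                    proxy_pass http://app_one;
                }
            }

            server {
                listen 80;
                server_name two.example.com;
                location / {
                    proxy_set_header Host $http_host;
                    proxy_pass http://app_two;
                }
            }
        }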

    Read the article

  • Do you have to recreate workspaces after upgrading a TFS 2008 server to TFS 2010?

    - by Clara Oscura
    I am just reposting this thread from an MSDN forum, since it seems to be unavailable now. It was very useful when I was having trouble with my folder mappings after migrating to TFS 2010. Question: I opened VS 2008 and connected it to the upgraded 2010 TFS server. Upon clicking any of our team projects in Source Control Explorer I get "Team Foundation Error - The workspace MYWORKSPACE;DOMAIN\MYUsername already exists on computer MYPCNAME." Answer: The same local paths on your machine are mapped to two different workspaces, one on the pre-upgrade server and one on the post-upgrade server. It's not safe to have multiple workspaces on different servers mapped to the same local paths, because you could pend some changes while connected to one server, and the other server would have no idea what you did. You should either delete your conflicting workspaces from one of the servers (if you don't need them on both), or test the new TFS instance from a new workspace (on a different machine). If you want to test an existing production workspace on both servers, then yes, you will have to mess around with the workspace cache. You don't have to delete the entire cache; you just need to run "tf workspaces /remove:* /server:<serverurl>" to clear the cached workspaces from a server (the command won't delete the workspaces), and possibly "tf workspaces /server:<server>" to refresh the workspace cache for a given server. You will also have to back up and restore the workspace before switching servers, or your local files could be inconsistent. From the "Microsoft Visual Studio Team Foundation Server 2010 Beta 1" forum (not available anymore?) Technorati Tags: TFS 2010, TFS Workspaces, Team System, Team Foundation Server 2010

    Read the article

  • Algorithms for Data Redundancy and Failover for distributed storage system?

    - by kennetham
    I'm building a distributed storage system that works with different storage sizes. For instance, my storage devices have sizes of 50 GB, 70 GB, 150 GB, 250 GB and 1000 GB - five storage devices in one system. My application will store arbitrary files on the storage system. Question: How can I build distributed storage with data redundancy and failover for documents, videos and any other type of file, while ensuring that should any one storage device fail, there is another copy of those files on another device? The concern is that a 50 GB device can only hold a certain maximum number of files compared to the 70 GB, 150 GB, etc. devices. Treating the five storage devices as one cloud-like store, is there a logical way for my application to distribute the files? How do I ensure data redundancy across different storage sizes? Is there an algorithm to collate multiple blob files into a single file archive? What is the best solution for one cloud storage system with multiple different storage sizes? I'm opening this topic with the objective of discussing the best way to implement this idea, keeping it simple: what are the issues with this implementation, how would performance be measured, and what are the limitations?

    Read the article

  • Run Win7 Guest (raw disk) in Ubuntu (which was installed as Dual Boot on existing Win7)

    - by kingdango
    I installed Ubuntu 12.10 alongside Win 7 as a dual boot (awesome!). I'm hoping to use VirtualBox to run my original Win 7 instance as a guest OS under Ubuntu. I found this existing question and followed the directions to no avail. I can get the VMDK file created, but when I run it I just get a blank black screen with no additional information, and Windows never loads. I see no HD activity or anything else that would indicate it's loading. I used this command to create the VMDK file: VBoxManage internalcommands createrawvmdk -filename ~/.VirtualBox/Win7Native.vmdk -rawdisk /dev/sda3 It looks like everything was created correctly, but I just get a blank screen when I run the VM. I do get this warning when I boot the VM: VirtualBox - Warning: The virtual machine execution may run into an error condition as described below... The medium '/home/XXX/.VirtualBox/Win7Native.vmdk' has a logical size of 583GB but the file system the medium is located on can only handle up to 16GB in theory. We strongly recommend to put all your virtual disk images and the snapshot folder on a proper file system (e.g. ext3) with a sufficient size. ErrorId: Fat Partition Detected Severity: Warning How can I get this working?

    Read the article

  • AWS VPC - why have a private subnet at all?

    - by jkim
    In Amazon VPC, the VPC creation wizard allows one to either create a single "public subnet" or have the wizard create a "public subnet" and a "private subnet". Initially, the public-and-private-subnet option seemed good for security reasons, allowing web servers to be put in the public subnet and database servers to go in the private subnet. But I've since learned that EC2 instances in the public subnet are not reachable from the Internet unless you associate an Amazon Elastic IP with the EC2 instance. So it seems that with just a single public subnet configuration, one could simply opt not to associate an Elastic IP with the database servers and end up with the same sort of security. Can anyone explain the advantages of a public + private subnet configuration? Are the advantages of this config more to do with auto-scaling, or is it actually less secure to have a single public subnet?

    Read the article

  • Client-server application design issue

    - by user2547823
    I have a collection of clients on the server's side, and there are some objects that need to work with that collection - adding and removing clients, sending messages to them, updating connection settings and so on. These actions may be performed simultaneously, so a mutex or another synchronization primitive is required. I want to share one instance of the collection between these objects, but all of them require access to the collection's private fields. I hope the code sample makes it clearer [C++]: class Collection { std::vector< Client* > clients; Mutex mLock; ... }; class ClientNotifier { void sendMessage() { mLock.lock(); // loop over clients and send a message to each of them } }; class ConnectionSettingsUpdater { void changeSettings( const std::string& name ) { mLock.lock(); // if a client with this name is inside the collection, change its settings } }; As you can see, all these classes require direct access to Collection's private fields. Can you give me advice on how to implement such behaviour correctly, i.e. keeping Collection's interface simple, without it knowing about its users?

    Read the article

  • Template syntax for users - is there a right way to do it?

    - by RickM
    OK, I'm in the middle of building a SaaS system, and as part of that the hosted clients need to be able to edit certain layout templates - basically just HTML, CSS and JavaScript files. I'm obviously going to want to use a template syntax here, as it would be dumb to let people execute PHP code, so in this instance a template syntax does need to be used. I know that in the grand scale of things this is a very minor thing, but which template syntax do you use, and why? Is there one that's considered better than the others? I've seen all sorts being used with no real consistency, for example: Smarty style: {$someVar} {foreach from="foo" item="bar"} {$bar.food} {/foreach} ASP style: {% someVar %} {% foreach foo as bar %} {% bar.food %} {% endforeach %} HTML style: <someVar> <foreach from="foo" item="bar"> <bar:food> </foreach> PyroCMS/FuelPHP "LEX" style: {{ someVar }} {{ foreach from="foo" item="bar" }} {{ bar:food }} {{ endforeach }} Obviously these aren't 100% accurate (for example, LEX is used alongside PHP for loops) and are only there to give you an example of what I mean. Which, in your opinion, would be the best one (if any) to go with? I ask this bearing in mind that the people using this are likely to be novice users. I did look around at a bunch of hosted CMS and e-commerce systems, as these seem to make use of user-editable templates, and most seem to be using some form of their own syntax. I should note that whatever style I end up going with, it will be handled by a custom template handler due to the complexity of the system and how template files are stored. Plus I wouldn't want to touch the likes of Smarty with a barge pole!

    Read the article

  • How would I measure the amount of RAM needed per Glassfish domain? [closed]

    - by oligofren
    Possible Duplicate: Can you help me with my capacity planning? In our test environment we have a lot of apps spread out over a few servers and GlassFish domains. To make versioning easier I would like to have one GlassFish domain per customer per app (kind of like a heavyweight version of lots of Jetty instances). But I have heard that GlassFish is kind of heavy on resources, so I would need to measure approximately how many instances would fit in the available RAM. These are low-traffic, low-load testing servers, so CPU is not really an issue, though RAM might be. How would I get an approximate measure of how much RAM is needed? This is one GlassFish 3 instance with one heavy EAR application deployed. top? jvmstat?
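
    For a rough per-domain figure, the standard JDK tools pointed at the domain's java process are usually enough. A sketch (replace <pid> with the process ID of the domain in question):

        ps -o rss=,vsz= -p <pid>   # resident / virtual size of the whole JVM process
        jstat -gccapacity <pid>    # committed and maximum sizes of the heap generations
        jmap -heap <pid>           # heap configuration plus a current usage summary

    Comparing the resident size of an idle domain with one that has the EAR deployed and warmed up gives an approximate per-instance cost to multiply by the number of planned domains.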

    Read the article

  • Nodejs removing event listeners [migrated]

    - by JeffH
    Looking to get some help. I'm new to Node.js and wondering if it is possible to remove the listener on this custom event emitter. Most of this code comes from Hands-on Node.js by Pedro Teixeira; my function at the bottom is attempting to remove the custom listener set up in the book. var util = require('util'); var EventEmitter = require('events').EventEmitter; // Pseudo-class named Ticker that will emit a 'tick' event every second. var Ticker = function() { var self = this; setInterval(function() { self.emit('tick'); }, 1000); }; // Make the pseudo-class inherit from EventEmitter. util.inherits(Ticker, EventEmitter); // Create an instance of the Ticker class to get the first event started, then let the event emitter run the infinite loop. var ticker = new Ticker(); ticker.on('tick', function() { console.log('Tick'); }); (function tock() { setInterval(function() { console.log('Tock'); EventEmitter.removeListener('Ticker',function() { console.log("Clocks Dead!"); }); }, 5000); })();

    Read the article

  • Detaching EBS volumes (in LVM) takes a long time

    - by Cheezo
    I have an EC2 instance (EBS-backed root partition) with EBS volumes configured via LVM. I have formatted them as ext4 and can mount them to store data, etc. Now I want to take a snapshot of the root partition, so I go and detach the other, non-root EBS volumes (the ones configured in LVM). Here a regular detach does not work, and I almost always have to "force" detach. However, in another similar setup with RAID instead of LVM, I can detach easily after stopping the RAID array. The whole setup is running Ubuntu Maverick 10.10. Please assist me with this.
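
    One thing worth checking is whether LVM is still holding the block devices at the moment of the detach; a sketch of releasing them first (the mount point and volume group name are illustrative):

        umount /mnt/data          # unmount any filesystems on the logical volumes
        vgchange -a n data_vg     # deactivate the volume group so the kernel releases the devices
        # now detach the EBS volumes from the instance (console or API);
        # after re-attaching, 'vgchange -a y data_vg' brings the volume group back

    This mirrors what stopping the RAID array does in the other setup, which would explain why that configuration detaches cleanly.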

    Read the article

  • Connect RemoteApp to Existing Session

    - by Fowl
    Because of the quirks of an app I've got to run, I've got a server auto-logging in on boot with the app set to auto-start. I remote desktop in at my leisure to check on its status, etc. Gross, but it works. Now I've tried out RemoteApp, which seems to work OK (i.e. it remotes the notification icon, balloons, ...) except for the fact that it creates a new session (and therefore a new instance of my app) - this confuses the heck out of the app, and it means that if I was previously using 'full' remote desktop I lose all my state. "Restrict each user to a single session" doesn't work. IIRC "Windows XP Mode" uses RemoteApp, and it doesn't seem to have any trouble switching between modes. So how can I connect to the already running app?

    Read the article

  • So what is Active GridLink for RAC?

    - by Ruma Sanyal
    I referred to Active GridLink for RAC in my blog yesterday and have since received several questions on the topic, so I decided to revisit Active GridLink. With the release of version 11g, Oracle WebLogic Server started to provide strong support for the Real Application Clusters (RAC) features in Oracle Database 11g, minimizing database access time while allowing transparent access to rich pooling management functions that maximize both connection performance and availability. WebLogic is the only application server in the marketplace that has been fully integrated and certified with Oracle Database RAC 11g without losing any rich functionality. Active GridLink provides Fast Connection Failover (FCF), Runtime Connection Load-Balancing (RCLB), and graceful shutdown of RAC instances. With this key foundation for deeper integration with Oracle RAC, the single data source implementation in Oracle WebLogic Server supports the full and unrestricted use of database services as the connection target for a data source. For more details, and to understand how our customer NEC leverages this capability, read the whitepapers on this topic. Get in-depth how-to details from this YouTube video by our resident expert, Frances Zhao.

    Read the article

  • Creating New WebSphere cluster

    - by user154561
    I need to improve WAS performance and want to set up a cluster. I have two separate machines with WebSphere 7 on them. As I see it, to do this I need to add the node from my second (remote) WAS installation to the first one. I tried to use "Add node" from the console, but without success: it can't find the host when I try to execute it. The WAS help says the following about the host: "Specifies the host name or IP address of the node to add to the cell. A WebSphere Application Server instance must be running on this machine." Does that mean that I cannot add a node from a remote machine with "Add node"?

    Read the article
