Search Results

Search found 512 results on 21 pages for 'slave'.

Page 16/21 | < Previous Page | 12 13 14 15 16 17 18 19 20 21  | Next Page >

  • Equalizing Agent and Master Nagios on state change alone

    - by punith
    We have a setup where distributed Nagios instances run on multiple sites and equalize their data to the main Nagios server. The problem is that they send the data back to the main Nagios server whether or not there is a state change in a host or service. Is it possible to configure the slave Nagios to check the services/hosts every 5 seconds but send the data back only if there is a state change? Currently this is implemented with Obsess Over Hosts/Services, which always runs the command that equalizes. The Nagios version is 3. I am not an administrator but a developer, so I don't know the exact jargon; please bear with me.

    Read the article

  • Is there any automatic Windows software to check the status of a website?

    - by user59280
    Is there any automatic Windows software application to check the status of a website and alert me by mail or message, or trigger an alarm? Example: suppose I am waiting to buy the latest movie ticket online (through) and the booking time has not been announced properly (online booking opens at a random time). In this situation I would be forced to slave away at my PC to get the tickets. To avoid such a situation, can you suggest a piece of software? I need software which will alert me when the online booking is open. Can anyone please help me?
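
    For what it's worth, a tiny poller is easy to write if no off-the-shelf tool fits. Below is a minimal C++ sketch using libcurl that fetches a page once a minute and rings the terminal bell when it sees a keyword; the URL, the keyword, and the 60-second interval are placeholders, and a real alert (mail, popup) would replace the bell.

        #include <curl/curl.h>
        #include <chrono>
        #include <iostream>
        #include <string>
        #include <thread>

        // Append the response body into a std::string so we can search it for a keyword.
        static size_t collect(char* data, size_t size, size_t nmemb, void* userp) {
            static_cast<std::string*>(userp)->append(data, size * nmemb);
            return size * nmemb;
        }

        int main() {
            const std::string url = "http://example.com/booking";   // placeholder
            const std::string keyword = "Book now";                 // placeholder

            curl_global_init(CURL_GLOBAL_DEFAULT);
            while (true) {
                std::string body;
                CURL* h = curl_easy_init();
                curl_easy_setopt(h, CURLOPT_URL, url.c_str());
                curl_easy_setopt(h, CURLOPT_FOLLOWLOCATION, 1L);
                curl_easy_setopt(h, CURLOPT_WRITEFUNCTION, collect);
                curl_easy_setopt(h, CURLOPT_WRITEDATA, &body);
                CURLcode rc = curl_easy_perform(h);
                long status = 0;
                curl_easy_getinfo(h, CURLINFO_RESPONSE_CODE, &status);
                curl_easy_cleanup(h);

                if (rc == CURLE_OK && status == 200 && body.find(keyword) != std::string::npos) {
                    std::cout << "\a" << "Booking appears to be open: " << url << std::endl;
                    break;   // stop polling once the alert fires
                }
                std::this_thread::sleep_for(std::chrono::seconds(60));
            }
            curl_global_cleanup();
            return 0;
        }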

    Read the article

  • Move MySQL master

    - by Noodles
    I currently have a master db server (let's call it db1) and 6 slaves (slave1-6). I've set up a new server (db2) as a slave of db1 and it's in sync. I want to change all the slaves to use db2 instead of db1, but with minimal downtime/data loss. At the moment the only way I can think of doing it is: shut down our website (so data stops being written to db1), wait until all the slaves are up to date, flush logs on db1 and shut it down, reset the master on db2, and change all slaves to point to db2 with log position = 0. Is this the right way to do it, or is there a way to do it without taking the site offline?

    Read the article

  • MySQL December Webinars

    - by Bertrand Matthelié
    We'll be running 3 webinars next week and hope many of you will be able to join us:

    MySQL Replication: Simplifying Scaling and HA with GTIDs - Wednesday, December 12, at 15.00 Central European Time. Join the MySQL replication developers for a deep dive into the design and implementation of Global Transaction Identifiers (GTIDs) and how they enable users to simplify MySQL scaling and HA. GTIDs are one of the most significant new replication capabilities in MySQL 5.6, making it simple to track and compare replication progress between the master and slave servers. Register Now

    MySQL 5.6: Building the Next Generation of Web/Cloud/SaaS/Embedded Applications and Services - Thursday, December 13, at 9.00 am Pacific Time. As the world's most popular web database, MySQL has quickly become the leading cloud database, with most providers offering MySQL-based services. Indeed, built to deliver web-based applications and to scale out, MySQL's architecture and features make the database a great fit to deliver cloud-based applications. In this webinar we will focus on the improvements in MySQL 5.6 performance, scalability, and availability designed to enable DBA and developer agility in building the next generation of web-based applications. Register Now

    Getting the Best MySQL Performance in Your Products: Part IV, Partitioning - Friday, December 14, at 9.00 am Pacific Time. We're adding Partitioning to our extremely popular "Getting the Best MySQL Performance in Your Products" webinar series. Partitioning can greatly increase the performance of your queries, especially when doing full table scans over large tables. Partitioning is also an excellent way to manage very large tables. It's one of the best ways to build higher performance into your product's embedded or bundled MySQL, and particularly for hardware-constrained appliances and devices. Register Now

    We have live Q&A during all webinars so you'll get the opportunity to ask your questions!

    Read the article

  • Laptop HDD not mounting.

    - by D3X
    I have a laptop with a broken (shorted out) motherboard and a 640 GB HDD. I want to recover my data from the hard disk. Every time I connect the hard disk using an external casing, it is detected in Disk Management but is not visible in My Computer, nor accessible through the command prompt. The hard disk is functioning; I can feel the rotor working properly when it is connected through the casing, and the LED on the hard disk is blinking, so the data is there but I am unable to access it. Can someone please suggest ways of using the hard disk as a slave disk through the casing?

    Read the article

  • How to pass user-defined structs using boost mpi

    - by lava
    I am trying to send a user-defined structure named ABC using the boost::mpi::send() call. The struct contains a vector "data" whose size is determined at runtime. Objects of struct ABC are sent by the master to the slaves, but the slaves need to know the size of the vector "data" so that a sufficient buffer is available on the slave to receive this data. I can work around it by sending the size first and initializing a sufficient buffer on the slave before receiving the ABC objects, but that defeats the whole purpose of using STL containers. Does anyone know of a better way to handle this? Any suggestions are greatly appreciated. Here is sample code that describes the intent of my program; it fails at runtime for the reason mentioned above.

        #include <boost/mpi.hpp>
        #include <boost/serialization/vector.hpp>
        #include <iostream>
        #include <vector>

        namespace mpi = boost::mpi;

        struct ABC {
            double cur_stock_price;
            double strike_price;
            double risk_free_rate;
            double option_price;
            std::vector<char> data;
        };

        namespace boost { namespace serialization {
            template<class Archive>
            void serialize (Archive &ar, ABC &abc, const unsigned int version)
            {
                ar & abc.cur_stock_price;
                ar & abc.strike_price;
                ar & abc.risk_free_rate;
                ar & abc.option_price;
                ar & abc.data;
            }
        }}

        BOOST_IS_MPI_DATATYPE (ABC);

        int main (int argc, char* argv[])
        {
            mpi::environment env (argc, argv);
            mpi::communicator world;
            const int tag = 0;

            if (world.rank () == 0) {
                ABC abc_obj;
                abc_obj.cur_stock_price = 1.0;
                abc_obj.strike_price = 5.0;
                abc_obj.risk_free_rate = 2.5;
                abc_obj.option_price = 3.0;
                abc_obj.data.push_back ('a');
                abc_obj.data.push_back ('b');
                world.send (1, tag, abc_obj);
                std::cout << "Rank 0 OK!" << std::endl;
            } else if (world.rank () == 1) {
                ABC abc_obj;
                // Fails here because abc_obj is not big enough
                world.recv (0, tag, abc_obj);
                std::cout << "Rank 1 OK!" << std::endl;
                for (std::size_t i = 0; i < abc_obj.data.size (); i++)
                    std::cout << i << "=" << abc_obj.data[i] << std::endl;
            }
            return 0;   // mpi::environment finalizes MPI on destruction
        }
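
    If the failure comes from declaring ABC as a fixed MPI datatype while it holds a runtime-sized vector, one option is simply to drop BOOST_IS_MPI_DATATYPE and let Boost.MPI serialize the struct: the vector's length then travels inside the archive and the receiver does not need to pre-size anything. A minimal sketch of that approach (assuming an intrusive serialize() member, run with two processes):

        #include <boost/mpi.hpp>
        #include <boost/serialization/vector.hpp>
        #include <iostream>
        #include <vector>

        namespace mpi = boost::mpi;

        struct ABC {
            double cur_stock_price;
            double strike_price;
            double risk_free_rate;
            double option_price;
            std::vector<char> data;

            // The vector's size is written into the archive, so no separate
            // size message or pre-allocated receive buffer is needed.
            template <class Archive>
            void serialize (Archive &ar, const unsigned int /*version*/)
            {
                ar & cur_stock_price & strike_price & risk_free_rate
                   & option_price & data;
            }
        };

        int main (int argc, char* argv[])
        {
            mpi::environment env (argc, argv);
            mpi::communicator world;

            if (world.rank () == 0) {
                ABC obj;
                obj.cur_stock_price = 1.0;
                obj.strike_price = 5.0;
                obj.risk_free_rate = 2.5;
                obj.option_price = 3.0;
                obj.data.push_back ('a');
                obj.data.push_back ('b');
                world.send (1, 0, obj);           // serialized (packed) send
            } else if (world.rank () == 1) {
                ABC obj;
                world.recv (0, 0, obj);           // vector is resized while deserializing
                for (std::size_t i = 0; i < obj.data.size (); ++i)
                    std::cout << i << "=" << obj.data[i] << std::endl;
            }
            return 0;
        }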

    Read the article

  • Is there any replication standard or concept for application server data replication

    - by Naga
    Hi friends, say I have a server that handles file-based mass data and can process thousands of read requests and hundreds of provisioning requests (add, modify, delete) per second. This is not an SQL-based database. Now I plan to implement replication. There should be master-master replication, master-slave replication, partial replication (entries matching a criterion) and fractional replication (part of an entry). Is there any standard for implementing such replication mechanisms? Can you please suggest a solution overview? Thanks, Naga

    Read the article

  • Get svn revision without an appropriate svn binary installed

    - by Sridhar Ratnakumar
    For some reason, we can't update SVN on some build machines. The installed svn version is 1.3.x, but the Hudson slave used 1.6 to create the checkout. This means we can't run "svn info" on those checkouts:

        $ svnversion
        subversion/libsvn_wc/questions.c:110: (apr_err=155021)
        svn: This client is too old to work with working copy '.'; please get a newer Subversion client
        $ svn info
        subversion/libsvn_wc/questions.c:110: (apr_err=155021)
        svn: This client is too old to work with working copy '.'; please get a newer Subversion client
        $

    My question: is there a way to access the revision number without having to invoke the svn binary? You know, like looking into the .svn/ directory? Assume that the checkout uses the latest svn version (1.6).
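
    A rough sketch of the .svn-peeking idea, assuming the pre-1.7 working-copy layout (SVN 1.4-1.6) where .svn/entries is a plain-text file and the fourth line of the top entry is the directory's revision; the path handling is deliberately minimal:

        #include <fstream>
        #include <iostream>
        #include <string>

        int main (int argc, char* argv[])
        {
            // Read <working-copy>/.svn/entries; line 1 is the format number,
            // line 2 the entry name (empty for the directory itself), line 3
            // the node kind, line 4 the revision.
            const std::string wc = (argc > 1) ? argv[1] : ".";
            std::ifstream entries ((wc + "/.svn/entries").c_str());
            if (!entries) {
                std::cerr << "cannot open " << wc << "/.svn/entries" << std::endl;
                return 1;
            }
            std::string line;
            for (int i = 0; i < 4 && std::getline (entries, line); ++i)
                ;   // keep only the fourth line
            std::cout << line << std::endl;   // the working-copy revision
            return 0;
        }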

    Read the article

  • How do I configure the binary log file for auditing in MySQL?

    - by Parth
    How do I use the binary log file for auditing in MySQL? I want to track changes in a DB using the binary log so that I can replicate those changes to another DB. Please do not give me hyperlinks to the MySQL website; please direct me to the solution. EDIT: I have looked at auditing options and created a script using triggers for that, but due to the Joomla DB structure it didn't work for me, hence I have to move on to the binary log concept. Now I am stuck initiating it, as I don't understand how to make the server a master/slave, so can anybody guide me on how to actually initiate it via PHP?

    Read the article

  • Make MySQL database replication always use the most free node?

    - by Chad Johnson
    We started using Multi-Master Replication Manager for MySQL, and I am wondering whether it is possible to treat this setup like symmetric multiprocessing: a process pops off the process queue, and the node (in this case a server) that is most free is selected for the job. It seems that what happens is that the service switches to a slave ONLY when mysqld crashes or goes away. Is there a way to make database replication for MySQL act in a more distributed manner? Maybe there is other software besides MMM that can do this? Is there a way to switch the reader role to another server when mysqld slows down (rather than just when it fails)?

    Read the article

  • Is there a C# open-source search app which scales cheaply?

    - by domspurling
    I need to quickly replace a listings website which has the following characteristics:
      - smallish database (10,000 items, < 1GB)
      - < 10% of the items updated/created/removed daily
      - most common activity is searching the whole dataset, returning 1-1000 items
      - traffic peaks at 1m page impressions per day
    The scaling strategy for the existing app has been to separate read-only and read/write activity. Multiple slave databases are used for searching, and writes are done to a master, which updates the slaves using MS SQL replication. Since read activity is more common than writes, this has proved to be a cheap way to do database load balancing without true clustering. I now need to replace the app - are there any C# open-source apps which scale as neatly as this?

    Read the article

  • How to avoid Maven builds stalling on ssh host authenticity problems?

    - by Peter Kahn
    What's the right way to keep ssh host authenticity from being a problem for Maven and Hudson builds? I have Hudson building my Maven project on a VM. When the ESX server with my VMs on it is taxed, some of my jobs stall out, stuck in a loop of ssh host authenticity prompts. The hosts were in the known hosts file, but during these times the clocks on the slave VMs have drifted far from those of my Maven repo.

        [INFO] Retrieving previous build number from snapshots
        The authenticity of host 'maven.mycorp.com' can't be established.
        DSA key fingerprint is 6d:....83.
        Are you sure you want to continue connecting? (yes/no):
        The authenticity of host 'maven.mycorp.com' can't be established.

    Is there something other than disabling host checking (CheckHostIP no)?

    Read the article

  • Trouble using genericra to integrate activemq and glassfish when using failover protocol

    - by Kyle
    Hi, I'm attempting to use ActiveMQ in GlassFish using the genericra resource adapter provided with GlassFish 2.1. I have found a few pages with helpful information, including http://activemq.apache.org/sjsas-with-genericjmsra.html. I have actually had success and been able to get MDBs to use ActiveMQ as their JMS provider, but I'm running into an issue as I try to do some more complicated configuration. I want to set up a master-slave configuration, which would require my clients to use a brokerURL of failover:(tcp://broker1:61616,tcp://broker2:61616). In order to do this, I set the following property when calling asadmin create-resource-adapter-config (I have to escape '=' and ':'):

        ConnectionFactoryProperties=brokerURL\=failover\:(tcp\://127.0.0.1\:61616,tcp://127.0.0.1\:61617)

    However, I am now getting a StringIndexOutOfBoundsException when my application starts up. I suspect the comma in between the two URLs is the culprit, since this works fine:

        brokerURL\=failover\:(tcp\://127.0.0.1\:61616)

    Just wondering if anyone has dealt with this issue before. Also wondering if there is a better way to integrate with GlassFish than using the generic resource adapter.

    Read the article

  • Should I trust Redis for data integrity?

    - by Jiaji
    In my current project I have PostgreSQL as my master DB and Redis as kind of a slave; e.g., when a user adds another as a friend, the relationship is first stored in PostgreSQL and then a friend list in Redis is updated. When a user's friend list is requested, it is pulled out of Redis instead of PostgreSQL. The question is: when I update the friend list in Redis, should I get a fresh copy out of PostgreSQL and replace the old list in Redis with the new one, or should I keep the old list and simply SADD the user id into it? The latter is of course better for performance, but intuitively the former does a better job of keeping the data consistent. And if something like Celery is used, is the second method worth the risk?
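
    For illustration, here is a minimal C++ sketch of the two Redis-side strategies using hiredis (the key naming, host, and IDs are placeholders; the PostgreSQL read is assumed to have happened already and is represented by the friend_ids vector):

        #include <hiredis/hiredis.h>
        #include <cstdio>
        #include <string>
        #include <vector>

        // Strategy 1: rebuild the whole set from a fresh PostgreSQL result.
        void replace_friend_list (redisContext* c, const std::string& user_id,
                                  const std::vector<std::string>& friend_ids)
        {
            const std::string key = "friends:" + user_id;
            redisReply* r = (redisReply*) redisCommand (c, "DEL %s", key.c_str());
            if (r) freeReplyObject (r);
            for (const std::string& fid : friend_ids) {
                r = (redisReply*) redisCommand (c, "SADD %s %s", key.c_str(), fid.c_str());
                if (r) freeReplyObject (r);
            }
        }

        // Strategy 2: incrementally add just the new friend (cheaper, but it
        // relies on the cached set already matching PostgreSQL).
        void add_friend (redisContext* c, const std::string& user_id, const std::string& friend_id)
        {
            const std::string key = "friends:" + user_id;
            redisReply* r = (redisReply*) redisCommand (c, "SADD %s %s", key.c_str(), friend_id.c_str());
            if (r) freeReplyObject (r);
        }

        int main ()
        {
            redisContext* c = redisConnect ("127.0.0.1", 6379);   // placeholder host/port
            if (!c || c->err) { std::fprintf (stderr, "redis connection failed\n"); return 1; }
            add_friend (c, "42", "1001");
            redisFree (c);
            return 0;
        }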

    Read the article

  • Dynamic resize with MPlayer and PyGTK

    - by alex
    Hi everyone; I've written a piece of code in Python and PyGTK for an embedded MPlayer in a GUI. I use GtkSocket and the slave mode of MPlayer with the -wid option. But I've got an issue: when my GTK window is smaller than my stream, the stream appears cropped, and when my window is bigger than my stream, the stream appears centred inside the widget which embeds MPlayer (a gtk.Frame, but I've also tried with a gtk.DrawingArea). I would like to know how I can get my stream to resize dynamically depending on the window's size. I don't want to use Glade or any GUI builder. Thanks in advance for any help, and please excuse my poor English.

    Read the article

  • When compiling programs to run inside a VM, what should march and mtune be set to?

    - by Russ
    With VMs being slave to whatever the host machine is providing, what compiler flags should be provided to gcc? I would normally think that -march=native would be what you would use when compiling for a dedicated box, but the level of fine detail that -march=native goes to, as indicated in this article, makes me extremely wary of using it. So... what should -march and -mtune be set to inside a VM? For a specific example: my case right now is compiling python (and more) in a Linux guest inside a KVM-based "cloud" host where I have no real control over the host hardware (aside from 'simple' stuff like CPU GHz, CPU count, and available RAM). Currently, cpuinfo tells me I've got an "AMD Opteron(tm) Processor 6176", but I honestly don't know (yet) whether that is reliable and whether the guest can get moved around to different architectures on me to meet the host's infrastructure shuffling needs (sounds hairy/unlikely). All I can really guarantee is my OS, which is a 64-bit Linux kernel where uname -m yields x86_64.

    Read the article

  • Manage groups of build configurations in Hudson

    - by Lóránt Pintér
    I'm using Hudson to build my application. I have several branches that come and go. Whenever there's a new branch, I have to set up the following builds for it:
      - a continuous build that runs after every change in SVN
      - a nightly build
      - a nightly site generation (I'm using Maven under the hood)
      - and a weekly integration build for some branches
    Currently this means I need to copy four template configurations and set them up with the branch URL. I don't like this for two reasons:
      - It's redundant, so modifying something is error-prone and takes a lot of time.
      - I need four full checkouts of the product per branch on every build slave, plus four separate private Maven repositories, not to mention the built artifacts. This is a lot of space wasted.
    What I'd like instead is to have one workspace and one configuration for all these builds. Is this possible with Hudson?

    Read the article

  • Is it possible to build a Mac binary on a non-Mac unix machine?

    - by nbolton
    I would like to set up a Mac buildbot slave, but unfortunately it's not possible to install Mac OS X 10.5 on my XenServer hypervisor. So I've had an idea, but I'm not quite sure whether or not it'll work. The application is C++, and on Mac it's compiled using GNU Make. I have a Mac desktop PC, and I was hoping I could copy the .h and .lib files onto a Linux box and try to build against the Mac headers:

        #include <mach-o/dyld.h>
        #include <AvailabilityMacros.h>

    Read the article

  • Does MVCScriptManager from CodePlex work with ViewUserControls?

    - by RonnBlack
    I tried the MVCScriptManager from CodePlex and it seems to work well until you try to use it in conjunction with a ViewUserControl. When it is used in this type of scenario it gives the following error:

        A ScriptManager with RenderMode set to Master is not present. Such ScriptManager must precede one with RenderMode set to Slave.

    There is a ScriptManager with render mode set to "Master" in the header of the Site.Master page but it appears that the partial views are rendered first. Is there any way to work around this problem?

    Read the article

  • Export DB Tables via phpMyAdmin In Non-Alphabetical Order

    - by dosboy
    I have a MySQL database from a Joomla MultiSite installation, which has a set of tables with a different prefix for each Joomla site. When I export the db via phpMyAdmin it creates a SQL file in which the tables are created and populated in alphabetical order. The problem is that the tables for the slave sites have dependencies on the tables for the master site, but alphabetically their prefixes come before the master site's. So the export works fine, but when I try importing I get error after error and have to manually move sections around in the SQL file to make sure that the dependent tables are created/populated first. So, is it possible to export a db via phpMyAdmin with the tables in a specific order?

    Read the article

  • How to write T-SQL to compare and copy data?

    - by George2
    Hello everyone, I have two SQL Server 2008 Enterprise databases (on two machines); one of the databases is the master database and the other is the slave database. I want to transfer updates from a table in the source database to a table in the destination database (the two tables have the same schema, and both use a single column as the unique primary key). The transfer rule (in short, keep the destination database the same as the source database as the source database is updated) is: if there is a new row in the source database but not in the destination database, insert the row into the destination database; if a row does not exist in the source database but exists in the destination database, delete the row from the destination database; if a row's content (i.e. columns other than the primary key columns) changes in the source database, update the new content into the destination database. Thanks in advance, George

    Read the article

  • Low-latency data link from PC to Android

    - by steveh
    Can anyone recommend a method for a low-latency bi-directional comm link between my PC app and an Android slave app? The app I have works now via WiFi, but the latency is too high (about 300 ms); I'm looking to get it down to 10 ms or so. The Android device is acting like a glorified remote control for the game on the PC: the APK displays a low-res image and sends button presses back to the game, and the round trip needs to be quick. I'm thinking the only option besides the network is to connect a USB cable, but I don't see a lot of support for that path, and I'm not even sure it would be lower latency than WiFi. Any ideas, please?

    Read the article

  • Making a bash script to check connectivity and change connection if necessary. Help me improve it?

    - by cypherpunks
    My connection is flaky; however, I have a backup one. I made some bash scripts to check for connectivity and change connection if the present one is dead. Please help me improve them. The scripts almost work, except for not waiting long enough to receive an IP (the until loop cycles to the next step too quickly). Here goes:

        #!/bin/bash
        # Invoke this script with paths to your connection specific scripts, for example
        # ./gotnet.sh ./connection.sh ./connection2.sh
        until [ -z "$1" ]   # Try different connections until we are online...
        do
            if ping -c 1 google.com
            then
                echo "we are online!" && break
            else
                $1   # Runs (next) connection-script.
                echo
            fi
            shift
        done
        echo   # Extra line feed.
        exit 0

    And here is an example of the slave scripts:

        #!/bin/bash
        ifconfig wlan0 down
        ifconfig wlan0 up
        iwconfig wlan0 key 1234567890
        iwconfig wlan0 essid example
        sleep 1
        # -nw made dhclient return before a lease was obtained, which is why the
        # parent script moved on too quickly; without it, dhclient -1 blocks
        # until an IP is assigned (or its timeout expires).
        dhclient -1 wlan0
        exit 0

    Read the article

  • How can I kill MySQL queries every 60 seconds in Windows?

    - by Ethan Allen
    I want to check my MySQL server every minute and kill queries that have run longer than 150 seconds. The main reason I want to do this is that I don't want queries from certain people to lock up the DB for everyone else. I know this is not the ultimate solution to the problem, but at least it's a fallback in case something goes wrong with a query. I don't have a slave DB (this is just an at-home project). I'd like to schedule a script that does this for me. I'm unfamiliar with Perl and Ruby, and I need it done on my Windows 2008 Server box. I've looked into creating a simple cmd line script, but that doesn't seem to be possible. I know I can currently do something like this, but I have to do it manually:

        mysqladmin processlist
        mysqladmin kill

    Anyone have any ideas or examples of how I could do this?
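
    One way to get this onto a Windows box without Perl or Ruby is a small native program scheduled with Task Scheduler. A rough C++ sketch against the MySQL C API (libmysqlclient) is below; the host, credentials, and the 150-second threshold are placeholders, and it only kills long-running statements, not idle connections:

        #include <mysql.h>
        #include <cstdlib>
        #include <iostream>
        #include <string>

        int main ()
        {
            MYSQL* conn = mysql_init (NULL);
            if (!mysql_real_connect (conn, "localhost", "root", "password", NULL, 3306, NULL, 0)) {
                std::cerr << "connect failed: " << mysql_error (conn) << std::endl;
                return 1;
            }

            // Columns of SHOW FULL PROCESSLIST: Id, User, Host, db, Command, Time, State, Info
            if (mysql_query (conn, "SHOW FULL PROCESSLIST") == 0) {
                MYSQL_RES* res = mysql_store_result (conn);
                MYSQL_ROW row;
                while ((row = mysql_fetch_row (res))) {
                    const long id      = row[0] ? std::atol (row[0]) : 0;
                    const char* cmd    = row[4] ? row[4] : "";
                    const long seconds = row[5] ? std::atol (row[5]) : 0;
                    if (std::string (cmd) == "Query" && seconds > 150) {
                        // KILL QUERY stops the statement but keeps the connection open.
                        std::string kill = "KILL QUERY " + std::to_string (id);
                        mysql_query (conn, kill.c_str());
                        std::cout << "killed query " << id << " after " << seconds << "s" << std::endl;
                    }
                }
                mysql_free_result (res);
            }
            mysql_close (conn);
            return 0;
        }

    Scheduling the resulting executable to run every minute via Windows Task Scheduler covers the "check every 60 seconds" part.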

    Read the article

  • MPI_Bsend and MPI_Isend. How do they work ?

    - by GBBL
    Hi, regarding buffered send and non-blocking send, I was wondering how and whether they introduce a new level of parallelism in my application, possibly by spawning a thread. Imagine that a slave process generates a large amount of data and wants to send it to the master. My idea was to start a buffered or non-blocking send and then immediately begin to compute the next result. Only when I have to send the new data would I check whether I can reuse the buffer. This would introduce a new level of parallelism in my application, between CPU and communication. Does anybody know how this is done in MPI? Does MPI spawn a new thread to handle the Bsend or Isend? Thanks.
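
    As a sketch of the overlap idea itself (independent of whether a given MPI library uses a helper thread or progresses the send inside later MPI calls), here is the usual double-buffering pattern with MPI_Isend: compute into one buffer while the other is in flight, and only wait on a buffer right before reusing it. The chunk size, loop count, and the fake compute_chunk() work are placeholders:

        #include <mpi.h>
        #include <stdio.h>

        #define CHUNK 100000
        #define STEPS 10

        /* Stand-in for the real work that produces the next block of results. */
        static void compute_chunk (double *buf, int n, int step)
        {
            for (int i = 0; i < n; ++i)
                buf[i] = step + i * 1e-6;
        }

        int main (int argc, char **argv)
        {
            MPI_Init (&argc, &argv);
            int rank, size;
            MPI_Comm_rank (MPI_COMM_WORLD, &rank);
            MPI_Comm_size (MPI_COMM_WORLD, &size);

            if (rank != 0) {                /* slave: produce and ship chunks */
                static double buf[2][CHUNK];
                MPI_Request req[2] = { MPI_REQUEST_NULL, MPI_REQUEST_NULL };
                for (int step = 0; step < STEPS; ++step) {
                    int cur = step % 2;
                    /* The buffer must not be touched between MPI_Isend and the
                       matching wait, so wait only when we are about to reuse it. */
                    MPI_Wait (&req[cur], MPI_STATUS_IGNORE);
                    compute_chunk (buf[cur], CHUNK, step);
                    MPI_Isend (buf[cur], CHUNK, MPI_DOUBLE, 0, step, MPI_COMM_WORLD, &req[cur]);
                }
                MPI_Waitall (2, req, MPI_STATUSES_IGNORE);
            } else {                        /* master: collect the chunks */
                static double in[CHUNK];
                for (int step = 0; step < STEPS; ++step)
                    for (int src = 1; src < size; ++src)
                        MPI_Recv (in, CHUNK, MPI_DOUBLE, src, step, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            }
            MPI_Finalize ();
            return 0;
        }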

    Read the article
