Search Results

Search found 413 results on 17 pages for 'robin i knight'.

Page 2/17

  • Weighted round robins via TTL - possible?

    - by Joe Hopfgartner
    I currently use DNS round robin for load balancing, which works great. The records look like this (I have a TTL of 120 seconds): ;; ANSWER SECTION: orion.2x.to. 116 IN A 80.237.201.41 orion.2x.to. 116 IN A 87.230.54.12 orion.2x.to. 116 IN A 87.230.100.10 orion.2x.to. 116 IN A 87.230.51.65 I learned that not every ISP / device treats such a response the same way. For example, some DNS servers rotate the addresses randomly or always cycle through them. Some just propagate the first entry, others try to determine which is best (regionally near) by looking at the IP address. However, if the user base is big enough (spread over multiple ISPs etc.) it balances pretty well; the discrepancy between the highest and lowest loaded server hardly ever exceeds 15%. Now I have the problem that I am introducing more servers into the system, and they do not all have the same capacity. I currently only have 1 Gbps servers, but I want to work with 100 Mbit and also 10 Gbps servers. So I want to introduce a 10 Gbps server with a weight of 100, a 1 Gbps server with a weight of 10 and a 100 Mbit server with a weight of 1. I used to add servers twice to bring more traffic to them (which worked nicely - the bandwidth almost doubled), but adding a 10 Gbit server 100 times to DNS is a bit ridiculous. So I thought about using the TTL. If I give server A a 240 second TTL and server B only 120 seconds (which is about the minimum usable for round robin, as a lot of DNS servers reportedly clamp to 120 if a lower TTL is specified), I think something like this should occur in an ideal scenario:
    First 120 seconds: 50% of requests get server A and keep it for 240 seconds; 50% get server B and keep it for 120 seconds.
    Second 120 seconds: 50% still have server A cached for another 120 seconds; 25% get server A (cache 240 s); 25% get server B (cache 120 s).
    Third 120 seconds: 25% get server A and 25% get server B (from the 50% whose server A entry just expired, caching 240 s and 120 s respectively); 25% still have server A cached for another 120 seconds; 12.5% get server B and 12.5% get server A (from the 25% whose server B entry just expired).
    Fourth 120 seconds: 25% still have server A cached for another 120 seconds; 12.5% get server A and 12.5% get server B (from the 25% on B that just expired); 12.5% get server A and 12.5% get server B (from the 25% on A that just expired); 6.25% get server A and 6.25% get server B (from the 12.5% on B that just expired); 12.5% still have server A cached for another 120 seconds... I think I lost track of something at this point, but you get the idea.
    As you can see this gets pretty complicated to predict, and it will certainly not work out exactly like this in practice, but it should definitely have an effect on the distribution. I know that weighted round robin exists and is controlled by the authoritative server: it cycles through DNS records when responding and returns records with a probability that corresponds to the weighting. My DNS server does not support this, and my requirements are not that precise. If it doesn't weight perfectly that's okay, but it should go in the right direction. I think using the TTL field could be a more elegant and easier solution - it doesn't require a DNS server that controls the weighting dynamically, which saves resources, and that is in my opinion the whole point of DNS load balancing versus hardware load balancers. My question: are there any best practices / methods / rules of thumb for weighting round robin distribution using the TTL attribute of DNS records?
    Edit: The system is a forward proxy server system. The amount of bandwidth (not requests) exceeds what a single server with Ethernet can handle, so I need a balancing solution that distributes the bandwidth across several servers. Are there any alternative methods to using DNS? Of course I could use a load balancer with fibre channel etc., but the cost is ridiculous and it only widens the bottleneck rather than eliminating it. The only thing I can think of is anycast (is it anycast or multicast?) IP addresses, but I don't have the means to set up such a system.
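    A quick way to sanity-check the idea is to simulate resolver caches. The sketch below is a rough model, not a claim about how any particular resolver behaves: it assumes every client re-resolves the moment its cached record expires and that the nameserver hands out A and B with equal probability. Under those assumptions the long-run share of client time each server receives ends up roughly proportional to its TTL (about 2:1 for 240 s vs 120 s), so the achievable skew is linear in the TTL ratio rather than anything close to 100:10:1.

        // Hypothetical Monte Carlo sketch of the TTL-weighting idea above.
        // Assumptions (not from the post): clients re-resolve immediately on expiry,
        // and the nameserver returns A or B with equal probability.
        #include <algorithm>
        #include <cstdio>
        #include <random>

        int main() {
            const int ttl[2] = {240, 120};      // server A = 240 s, server B = 120 s
            const int clients = 10000;
            const int seconds = 24 * 3600;      // simulate one day per client
            std::mt19937 rng(42);
            std::uniform_int_distribution<int> coin(0, 1);

            long long time_on[2] = {0, 0};
            for (int c = 0; c < clients; ++c) {
                for (int t = 0; t < seconds; ) {
                    int server = coin(rng);                       // resolver picks A or B 50/50
                    int hold = std::min(ttl[server], seconds - t);
                    time_on[server] += hold;                      // client sticks to it until expiry
                    t += hold;
                }
            }
            double total = double(time_on[0] + time_on[1]);
            std::printf("share A (TTL 240): %.3f\n", time_on[0] / total);
            std::printf("share B (TTL 120): %.3f\n", time_on[1] / total);   // ~0.667 vs ~0.333
        }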

    Read the article

  • How to make a Round Robin? or Is there an easier way other than Round Robin?

    - by candies
    The problem I face is how to handle a case like the example below. Codes: 1000, 2000, 3000, 4000, 5000. IDs: 1, 2, 3.
    Currently: ID 1 has codes 1000, 2000, 3000, 4000; ID 2 has codes 2000, 4000, 3000; ID 3 has codes 3000, 4000, 5000. Because the records overlap, several IDs end up holding the same codes.
    What I want is a fair result that also respects which codes each ID already had, like this: ID 1 gets codes 1000, 2000 (1000 must go to ID 1 because it is the only ID that has it); ID 2 gets codes 3000, 4000; ID 3 gets code 5000 (5000 must go to ID 3 because it is the only ID that has it).
    Some people say to use round robin, but I have never heard of round robin before and have no idea how to use it. Is there an easier way, maybe using PHP? I'm lost. Thanks.
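    One simple way to get a balanced split, without anything as formal as a scheduling algorithm, is a two-pass greedy assignment: codes held by only one ID must stay with that ID, and every shared code then goes to whichever eligible ID currently holds the fewest codes. A rough sketch under those assumptions (the concrete split can differ from the hand-worked example above, since several balanced assignments are possible):

        // Two-pass greedy assignment: unique codes stay put, shared codes go to the
        // eligible ID with the fewest codes so far. Data is the example from the post.
        #include <cstdio>
        #include <map>
        #include <vector>

        int main() {
            // code -> IDs that currently have it
            std::map<int, std::vector<int>> holders = {
                {1000, {1}}, {2000, {1, 2}}, {3000, {1, 2, 3}},
                {4000, {1, 2, 3}}, {5000, {3}},
            };
            std::map<int, std::vector<int>> assigned;   // ID -> codes it ends up with

            // Pass 1: a code only one ID has must stay with that ID.
            for (auto& [code, ids] : holders)
                if (ids.size() == 1) assigned[ids[0]].push_back(code);

            // Pass 2: give each shared code to the eligible ID with the fewest codes.
            for (auto& [code, ids] : holders) {
                if (ids.size() == 1) continue;
                int best = ids[0];
                for (int id : ids)
                    if (assigned[id].size() < assigned[best].size()) best = id;
                assigned[best].push_back(code);
            }

            for (auto& [id, codes] : assigned) {
                std::printf("ID %d:", id);
                for (int c : codes) std::printf(" %d", c);
                std::printf("\n");
            }
        }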

    Read the article

  • Knight movement.... " how to output all possible moves. "

    - by josh kant
    The following is the code i wrote.. I have to write it for nXn but for easyness i tried to test it for 5X5. It does not display my output... could anybody tell me whats wrong with the following code: { #include <iostream> #include <iomanip> using namespace std; void knight ( int startx, int starty, int n, int p[][5], int used [][5], int &count); int main ( ) { const int n = 5; // no. of cloumns and rows int startx = 0; int starty = 0; int p[5][5]; int used[5][5]; int count = 1; int i= 0; int j = 0; //initializing the array for ( i = 0; i < 5; i++) { for ( j = 0; j < 5; j++) { p[i][j] = 0; used [i][j] = 0; } } //outputting the initialized array.. i=0; while ( i< 5) { for ( j = 0; j < 5; j++) { cout << setw(3) << p[i][j]; } i++; cout << endl; } knight (startx,starty,n,p,used,count); return 0; } void knight ( int x, int y, int n, int p[][5], int used [][5], int &count) { int i = 0; //knight (x,y,n,p,used,count) for ( i = 0; i < n*n; i++) { if ( used [x][y] == 0 ) { used[x][y] = 1; // mark it used; p[x][y] += count; //inserting step no. into the solution //go for the next possible steps; //move 1 //2 squares up and 1 to the left if (x-1 < 0 && y+2 < n && p[x-1][y+2] == 0) { used[x-1][y+2] = 1; p[x-1][y+2] += count; knight (x-1,y+2,n,p,used,count); used[x-1][y+2] = 0; } //move 2 //2 squares up and 1 to the right if ( x+1 < n && y+2 < n && p[x+1][y+2] == 0 ) { used[x+1][y+2] = 1; p[x+1][y+2] += count; knight (x+1,y+2,n,p,used,count); used[x+1][y+2] = 0; } //move 3 //1 square up and 2 to the right if ( x+2 < n && y+1 < n && p[x+2][y+1] == 0 ) { used[x+2][y+1] = 1; p[x+2][y+1] += count; knight (x+2,y+1,n,p,used,count); used[x+2][y+1] = 0; } //move 4 //1 square down and 2 to the right if ( x+2 < n && y-1 < n && p[x+2][y-1] == 0 ) { used[x+2][y-1] = 1; p[x+2][y-1] += count; knight (x+2,y-1,n,p,used,count); used[x+2][y-1] = 0; } //move 5 //2 squares down and 1 to the right if ( x+1 < n && y-2 < n && p[x+1][y-2] == 0 ) { used[x+1][y-2] = 1; p[x+1][y-2] += count; knight (x+1,y-2,n,p,used,count); used[x+1][y-2] = 0; } //move 6 //2 squares down and 1 to the left if ( x-1 < n && y-2 < n && p[x-1][y-2] == 0 ) { used[x-1][y-2] = 1; p[x-1][y-2] += count; knight (x-1,y-2,n,p,used,count); used[x-1][y-2] = 0; } //move 7 //1 square down and 2 to the left if ( x-2 < n && y-1 < n && p[x-2][y-1] == 0 ) { used[x-2][y-1] = 1; p[x-2][y-1] += count; knight (x-2,y-1,n,p,used,count); used[x-2][y-1] = 0; } //move 8 //one square up and 2 to the left if ( x-2 < n && y+1< n && p[x-2][y+1] == 0 ) { used[x-2][y+1] = 1; p[x-2][y+1] += count; knight (x-2,y+1,n,p,used,count); used[x-2][y+1] = 0; } } } if ( x == n-1 && y == n-1) { while ( i != n) { for ( int j = 0; j < n; j++) cout << setw(3) << p[i][j]; i++; } } } Thank you!!
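    The bounds tests look like the most likely culprit: conditions such as "x-1 < 0 && y+2 < n" only accept a move when it falls off the board, and presumably were meant to read "x-1 >= 0 && y+2 < n". For comparison, here is a minimal, independently written backtracking sketch (not a patch of the code above) that uses an offset table and a single bounds check; board size and start square are illustrative.

        // Minimal knight's-tour backtracking sketch. Board size N and the start square
        // are illustrative; it prints the first complete tour it finds.
        #include <iomanip>
        #include <iostream>

        const int N = 5;
        int board[N][N];                         // 0 = unvisited, otherwise move number

        // The eight knight offsets, in the same order the post enumerates them.
        const int dx[8] = {-1, +1, +2, +2, +1, -1, -2, -2};
        const int dy[8] = {+2, +2, +1, -1, -2, -2, -1, +1};

        bool tour(int x, int y, int step) {
            board[x][y] = step;
            if (step == N * N) return true;      // every square visited
            for (int m = 0; m < 8; ++m) {
                int nx = x + dx[m], ny = y + dy[m];
                if (nx >= 0 && nx < N && ny >= 0 && ny < N && board[nx][ny] == 0)
                    if (tour(nx, ny, step + 1)) return true;
            }
            board[x][y] = 0;                     // backtrack: un-mark the square
            return false;
        }

        int main() {
            if (tour(0, 0, 1)) {
                for (int i = 0; i < N; ++i) {
                    for (int j = 0; j < N; ++j) std::cout << std::setw(3) << board[i][j];
                    std::cout << '\n';
                }
            } else {
                std::cout << "no tour found from this start square\n";
            }
        }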

    Read the article

  • Does RabbitMq do round-robin from the exchange to the queues

    - by Lancelot
    Hi, I am currently evaluating message queue systems and RabbitMQ seems like a good candidate, so I'm digging a little more into it. To give a little context, I'm looking to have something like one exchange load balancing the message publishing to multiple queues. I don't want to replicate the messages, so a fanout exchange is not an option. The reason I'm thinking of having multiple queues rather than one queue handling the round-robin with the consumers is that I don't want our single point of failure to be at the queue level. It sounds like I could add some logic on the publisher side to simulate that behavior by editing the routing key and having the appropriate bindings in place. But that's kind of a passive approach that wouldn't take the pace of message consumption on each queue into account, potentially filling up one queue if the consumer applications for that queue are dead. I was looking for a more proactive approach on the exchange side, something that would decide where to send the next message based on each queue's size or something of that nature. I read about Alice and the available RESTful APIs, but that seems like a heavy-duty solution for making fast routing decisions. Does anyone know whether round-robin from the exchange to the queues is feasible with RabbitMQ? Thanks.
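    If the publisher-side approach mentioned above is acceptable, the logic can stay very small: rotate a routing key per message and bind each key to its own queue on a direct exchange. The sketch below only shows that rotation; the exchange/queue declarations and the publish call are left to whichever client library is in use, and all names are illustrative. It is also worth remembering that RabbitMQ already round-robins deliveries among consumers sharing a single queue, so this only buys something if the single-queue failure domain is the real concern.

        // Sketch of publisher-side round-robin key selection (illustrative names only).
        // Assumes a direct exchange "work" with bindings work.0 -> queue0, work.1 -> queue1,
        // ... declared elsewhere; the publish itself is left to your AMQP client library.
        #include <atomic>
        #include <string>

        class RoundRobinRouter {
        public:
            explicit RoundRobinRouter(int queue_count) : queues_(queue_count) {}

            // Routing key for the next message: work.0, work.1, ..., cycling forever.
            std::string next_key() {
                int slot = counter_.fetch_add(1, std::memory_order_relaxed) % queues_;
                return "work." + std::to_string(slot);
            }

        private:
            const int queues_;
            std::atomic<int> counter_{0};
        };

        // Usage (pseudocode around the real publish call):
        //   RoundRobinRouter router(4);
        //   channel->BasicPublish("work", router.next_key(), message);  // hypothetical API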

    Read the article

  • PowerDNS CNAME with multiple A records produces unexpected results

    - by bwight
    This problem from what i can tell is isolated to PowerDNS. The servers are running two packages pdns-static-3.0.1-1.i386.rpm and pdns-recursor-3.3-1.i386.rpm on the most recent version of Amazon Linux. The amazon ec2 loadbalancers are assigned a CNAME with multiple hosts. Below is an example of the actual behavior. Notice how the hosts are always in the same order. [root@localhost ~]# host cache.domain.com cache.domain.com is an alias for xxxxx.us-east-1.elb.amazonaws.com. xxxxx.us-east-1.elb.amazonaws.com has address aaa.aaa.aaa.aaa xxxxx.us-east-1.elb.amazonaws.com has address bbb.bbb.bbb.bbb [root@localhost ~]# host cache.domain.com cache.domain.com is an alias for xxxxx.us-east-1.elb.amazonaws.com. xxxxx.us-east-1.elb.amazonaws.com has address aaa.aaa.aaa.aaa xxxxx.us-east-1.elb.amazonaws.com has address bbb.bbb.bbb.bbb [root@localhost ~]# host cache.domain.com cache.domain.com is an alias for xxxxx.us-east-1.elb.amazonaws.com. xxxxx.us-east-1.elb.amazonaws.com has address aaa.aaa.aaa.aaa xxxxx.us-east-1.elb.amazonaws.com has address bbb.bbb.bbb.bbb Expected behavior is round robin for the hosts [root@localhost ~]# host cache.domain.com cache.domain.com is an alias for xxxxx.us-east-1.elb.amazonaws.com. xxxxx.us-east-1.elb.amazonaws.com has address aaa.aaa.aaa.aaa xxxxx.us-east-1.elb.amazonaws.com has address bbb.bbb.bbb.bbb [root@localhost ~]# host cache.domain.com cache.domain.com is an alias for xxxxx.us-east-1.elb.amazonaws.com. xxxxx.us-east-1.elb.amazonaws.com has address bbb.bbb.bbb.bbb xxxxx.us-east-1.elb.amazonaws.com has address aaa.aaa.aaa.aaa [root@localhost ~]# host cache.domain.com cache.domain.com is an alias for xxxxx.us-east-1.elb.amazonaws.com. xxxxx.us-east-1.elb.amazonaws.com has address aaa.aaa.aaa.aaa xxxxx.us-east-1.elb.amazonaws.com has address bbb.bbb.bbb.bbb The addresses eventually do swap but it seems to be on a 30 minute cache timer changing the TTL of the record doesn't appear to affect anything. It appears as though the resolver has a cache of the response. This adversely affects my application because all of the load is only being sent to one of the loadbalancers (Availability Zones) so if I have servers in two zones then only one zone is under load at a time. Do you know how I can fix this so that each time the host is resolved the order of the addresses is alternating.
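    The ordering itself has to be addressed on the resolver side, but one application-level hedge is to ignore the order the resolver returns and shuffle the addresses yourself before connecting. A minimal sketch (hostname illustrative, standard getaddrinfo API), offered as a workaround rather than as a fix for PowerDNS:

        // Resolve a name, then shuffle the returned addresses client-side so the load
        // spreads even if the resolver always hands them back in the same order.
        #include <arpa/inet.h>
        #include <netdb.h>
        #include <netinet/in.h>
        #include <algorithm>
        #include <iostream>
        #include <random>
        #include <string>
        #include <vector>

        int main() {
            addrinfo hints{}, *res = nullptr;
            hints.ai_family = AF_INET;
            hints.ai_socktype = SOCK_STREAM;
            if (getaddrinfo("cache.domain.com", "80", &hints, &res) != 0) return 1;

            std::vector<std::string> addrs;
            for (addrinfo* p = res; p != nullptr; p = p->ai_next) {
                char buf[INET_ADDRSTRLEN];
                auto* sin = reinterpret_cast<sockaddr_in*>(p->ai_addr);
                inet_ntop(AF_INET, &sin->sin_addr, buf, sizeof buf);
                addrs.emplace_back(buf);
            }
            freeaddrinfo(res);

            std::mt19937 rng(std::random_device{}());
            std::shuffle(addrs.begin(), addrs.end(), rng);   // connect in random order
            for (const auto& a : addrs) std::cout << a << '\n';
        }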

    Read the article

  • Plex won't enter my home directory or other partitions

    - by RobinJ
    I just installed the Plex media server from the Ubuntu Software Center, and opened the web interface. I wanted to start by adding a collection. When it gave me a file browser, I wanted to go to /home/robin/Videos. /home is as far as I got. It showed robin, with an arrow in front of it, but when I tried to expand the directory tree it was empty. The same happened when trying to access /media/Data. For me it's quite useless like this, as all of my media files are inside those 2 directories. Help would be much appreciated. My first guess seemed to be a correct one; It is, as always, a permissions problem. How do I give plex access to my home folder without also giving other users access to it? My home folder is encrypted by the way, so that'll probably complicate things a little. robin@RobinJ:~$ sudo -u plex bash [sudo] password for robin: bash: /home/robin/.bashrc: Permission denied plex@RobinJ:~$ ls -al ls: cannot open directory .: Permission denied plex@RobinJ:~$ cd /home plex@RobinJ:/home$ cd robin bash: cd: robin: Permission denied plex@RobinJ:/home$ ls -al robin ls: cannot open directory robin: Permission denied

    Read the article

  • Thread scheduling Round Robin / scheduling dispatch

    - by MRP
    #include <pthread.h> #include <stdio.h> #include <stdlib.h> #include <semaphore.h> #define NUM_THREADS 4 #define COUNT_LIMIT 13 int done = 0; int count = 0; int quantum = 2; int thread_ids[4] = {0,1,2,3}; int thread_runtime[4] = {0,5,4,7}; pthread_mutex_t count_mutex; pthread_cond_t count_threshold_cv; void * inc_count(void * arg); static sem_t count_sem; int quit = 0; ///////// Inc_Count//////////////// void *inc_count(void *t) { long my_id = (long)t; int i; sem_wait(&count_sem); /////////////CRIT SECTION////////////////////////////////// printf("run_thread = %d\n",my_id); printf("%d \n",thread_runtime[my_id]); for( i=0; i < thread_runtime[my_id];i++) { printf("runtime= %d\n",thread_runtime[my_id]); pthread_mutex_lock(&count_mutex); count++; if (count == COUNT_LIMIT) { pthread_cond_signal(&count_threshold_cv); printf("inc_count(): thread %ld, count = %d Threshold reached.\n", my_id, count); } printf("inc_count(): thread %ld, count = %d, unlocking mutex\n",my_id, count); pthread_mutex_unlock(&count_mutex); sleep(1) ; }//End For sem_post(&count_sem); // Next Thread Enters Crit Section pthread_exit(NULL); } /////////// Count_Watch //////////////// void *watch_count(void *t) { long my_id = (long)t; printf("Starting watch_count(): thread %ld\n", my_id); pthread_mutex_lock(&count_mutex); if (count<COUNT_LIMIT) { pthread_cond_wait(&count_threshold_cv, &count_mutex); printf("watch_count(): thread %ld Condition signal received.\n", my_id); printf("watch_count(): thread %ld count now = %d.\n", my_id, count); } pthread_mutex_unlock(&count_mutex); pthread_exit(NULL); } ////////////////// Main //////////////// int main (int argc, char *argv[]) { int i; long t1=0, t2=1, t3=2, t4=3; pthread_t threads[4]; pthread_attr_t attr; sem_init(&count_sem, 0, 1); /* Initialize mutex and condition variable objects */ pthread_mutex_init(&count_mutex, NULL); pthread_cond_init (&count_threshold_cv, NULL); /* For portability, explicitly create threads in a joinable state */ pthread_attr_init(&attr); pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE); pthread_create(&threads[0], &attr, watch_count, (void *)t1); pthread_create(&threads[1], &attr, inc_count, (void *)t2); pthread_create(&threads[2], &attr, inc_count, (void *)t3); pthread_create(&threads[3], &attr, inc_count, (void *)t4); /* Wait for all threads to complete */ for (i=0; i<NUM_THREADS; i++) { pthread_join(threads[i], NULL); } printf ("Main(): Waited on %d threads. Done.\n", NUM_THREADS); /* Clean up and exit */ pthread_attr_destroy(&attr); pthread_mutex_destroy(&count_mutex); pthread_cond_destroy(&count_threshold_cv); pthread_exit(NULL); } I am trying to learn thread scheduling, there is a lot of technical coding that I don't know. I do know in theory how it should work, but having trouble getting started in code... I know, at least I think, this program is not real time and its not meant to be. Some how I need to create a scheduler dispatch to control the threads in the order they should run... RR FCFS SJF ect. Right now I don't have a dispatcher. What I do have is semaphores/ mutex to control the threads. This code does run FCFS... and I have been trying to use semaphores to create a RR.. but having a lot of trouble. I believe it would be easier to create a dispatcher but I dont know how. I need help, I am not looking for answers just direction.. some sample code will help to understand a bit more. Thank you.
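    As a direction rather than an answer: one way to build the missing dispatcher is to give every worker its own semaphore, let it run exactly one unit of work per release, and have a dispatcher loop that walks the workers in a fixed order, handing each up to quantum units per turn. A rough sketch under those assumptions (runtimes borrowed from the post, everything else illustrative; it is not a drop-in replacement for the code above):

        // Round-robin dispatcher sketch. Assumptions (not from the post's code): each
        // worker owns a semaphore, runs one unit of its remaining runtime per release,
        // and reports back on a shared "done" semaphore.
        #include <pthread.h>
        #include <semaphore.h>
        #include <stdio.h>
        #include <unistd.h>

        #define NUM_WORKERS 3
        #define QUANTUM     2

        static sem_t go[NUM_WORKERS];              // dispatcher -> worker i: run one unit
        static sem_t done;                         // worker -> dispatcher: unit finished
        static int remaining[NUM_WORKERS] = {5, 4, 7};

        static void* worker(void* arg) {
            long id = (long)arg;
            for (;;) {
                sem_wait(&go[id]);                 // block until the dispatcher picks us
                if (remaining[id] == 0) break;     // final wake-up: nothing left to do
                printf("thread %ld runs one unit (left: %d)\n", id, remaining[id]);
                remaining[id]--;
                sleep(1);                          // pretend the unit takes one second
                sem_post(&done);                   // yield back to the dispatcher
            }
            return NULL;
        }

        int main(void) {
            pthread_t t[NUM_WORKERS];
            sem_init(&done, 0, 0);
            for (long i = 0; i < NUM_WORKERS; i++) {
                sem_init(&go[i], 0, 0);
                pthread_create(&t[i], NULL, worker, (void*)i);
            }
            int left = NUM_WORKERS;
            while (left > 0) {
                for (int i = 0; i < NUM_WORKERS; i++)          // fixed round-robin order
                    for (int q = 0; q < QUANTUM && remaining[i] > 0; q++) {
                        sem_post(&go[i]);          // let thread i run exactly one unit
                        sem_wait(&done);           // wait until it yields back
                    }
                left = 0;
                for (int i = 0; i < NUM_WORKERS; i++)
                    if (remaining[i] > 0) left++;
            }
            for (int i = 0; i < NUM_WORKERS; i++) sem_post(&go[i]);   // release for exit
            for (int i = 0; i < NUM_WORKERS; i++) pthread_join(t[i], NULL);
            return 0;
        }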

    Read the article

  • Setting up multiple servers for one domain

    - by Joseph Torraca
    So I am starting up a new website and I was wondering how to set up 5 servers to host the site. I have already purchased 5 Apple XServes, one will be used as a test server and the other 4 will be for the live site. So I have read some website on the internet and they all reference using one server and installing software onto it and have that server do the load balancing. I have also read that you could use a hardware, rack-mounted system and plug the servers into that. The load balancer would then distribute the load. So I have a few questions about each: 1) How do you set up the software version and have the other servers as "slaves" and have one "master" to direct traffic? 2) Which of the two options above are more reliable, and better suited for a startup that doesn't have many users per month, yet(hopefully)? 3) Is there a theoretical max limit of servers that can be connected to a software load balancing system? Note: Obviously this will change from software to software, but in terms of the server being able to handle it? 4) In your own opinion, what are you using for your sites? Have you had any problems setting up that system or operating it once its running? Are there any things you would stay away from if you had to start over? 5) I also purchased a Apple RAID system, so if you are familiar with it, is there any way to connect it to multiple Xserves so they all serve the same data? I'm a little confused on this, so thanks for all your help and being patient with me. Note: Take it easy on me, I am learning this as I go along, so I may have used terms incorrectly or explained things that don't really make sense. Sorry. P.S. If you need me to supply the specs on the servers to determine which system makes the most sense, I can post them for you.

    Read the article

  • How to move and print squares visited by a knight in the following order?

    - by josh kant
    Let's say you have a 5x5 board and the knight starts moving from the middle square. Every time the knight moves it has to follow the pattern below (it's easy to see if you draw it). I am trying to code the possible moves and print out the move numbers on the nXn (in this case 5x5) board. The printout should show the order in which the knight moved. { //move 1 //2 squares up and 1 to the left //move 2 //2 squares up and 1 to the right //move 3 //1 square up and 2 to the right //move 4 //1 square down and 2 to the right //move 5 //2 squares down and 1 to the right //move 6 //2 squares down and 1 to the left //move 7 //1 square down and 2 to the left //move 8 //one square up and 2 to the left } Any help would be appreciated. Thank you in advance.
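    The move list above translates directly into a pair of offset tables, which keeps the order explicit and avoids writing eight separate if blocks. A small illustrative sketch (board size and start square assumed, not taken from the post; x decreases to the left, y increases upward):

        // Offset tables in the same order as the eight moves listed above.
        #include <cstdio>

        const int dx[8] = {-1, +1, +2, +2, +1, -1, -2, -2};
        const int dy[8] = {+2, +2, +1, -1, -2, -2, -1, +1};

        int main() {
            const int n = 5;
            int x = n / 2, y = n / 2;            // start from the middle square
            for (int m = 0; m < 8; ++m) {
                int nx = x + dx[m], ny = y + dy[m];
                if (nx >= 0 && nx < n && ny >= 0 && ny < n)
                    std::printf("move %d -> (%d, %d)\n", m + 1, nx, ny);
                else
                    std::printf("move %d -> off the board\n", m + 1);
            }
        }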

    Read the article

  • How to keep groups when pulling with git

    - by mimrock
    I have a staging site that is a working directory of a git repository. How do I set up git so that a developer can pull a branch or release without changing the group of the modified files? An example: let's say I have two developers, robin and david. They are both in the git-users group, so initially they both have write permissions on site.php. -rw-rw-r-- 1 robin git-users 46068 Nov 16 12:12 site.php drwxrwxr-x 8 robin git-users 4096 Nov 16 14:11 .git After robin-server1$ git pull origin master: -rw-rw-r-- 1 robin robin 46068 Nov 16 12:35 site.php drwxrwxr-x 8 robin git-users 4096 Nov 16 14:11 .git Now david no longer has write permissions on site.php, because the group changed from 'git-users' to 'robin'. From now on, david will get a permission denied error when he tries to pull into this repository.

    Read the article

  • Multiple data centers and HTTP traffic: DNS Round Robin is the ONLY way to assure instant fail-over?

    - by vmiazzo
    Hi, Multiple A records pointing to the same domain seem to be used almost exclusively to implement DNS Round Robin as a cheap load balancing technique. The usual warning against DNS RR is that it is not good for high availability: when 1 IP goes down, clients will continue to use it for minutes. A load balancer is often suggested as a better choice. Both claims are not completely true: when the traffic is HTTP, most HTML browsers are able to automatically try the next A record if the previous one is down, without a new DNS look-up (read here chapter 3.1 and here); and when multiple data centers are involved, DNS RR is the only option to distribute traffic across them. So, is it true that, with multiple data centers and HTTP traffic, the use of DNS RR is the ONLY way to assure instant fail-over when one data center goes down? Thanks, Valentino
    Edit: Of course each data center has a local load balancer with hot spare. It's OK to sacrifice session affinity for instant fail-over. AFAIK the only way for a DNS to suggest one data center instead of another is to reply with just the IP (or IPs) associated with that data center. If the data center becomes unreachable then all those IPs are also unreachable. This means that, even if smart HTML browsers are able to instantly try another A record, all the attempts will fail until the local cache entry expires and a new DNS lookup is done, fetching the new working IPs (I assume the DNS automatically suggests a new data center when one fails). So, "smart DNS" cannot assure instant fail-over. Conversely, DNS round-robin permits it: when one data center fails, the smart HTML browsers (most of them) instantly try the other cached A records, jumping to another (working) data center. So, DNS round-robin doesn't assure session affinity or the lowest RTT, but it seems to be the only way to assure instant fail-over when the clients are "smart" HTML browsers.
    Edit 2: Some people suggest TCP Anycast as a definitive solution. This paper (chapter 6) explains that Anycast fail-over is related to BGP convergence, so Anycast fail-over can take anywhere from 20 seconds to 15 minutes to complete; 20 seconds is possible on networks whose topology was optimized for this. Probably only CDN operators can grant such fast fail-overs.
    Edit 3: I did some DNS look-ups and traceroutes (maybe some expert can double check) and found: the only CDN using TCP Anycast seems to be CacheFly; other operators like CDN Networks and BitGravity use CacheFly, and it seems their edges cannot be used as reverse proxies, so they cannot be used to grant instant failover. Akamai and LimeLight seem to use geo-aware DNS. But! They return multiple A records, and from traceroutes it seems the returned IPs are in the same data center. So I'm puzzled how they can offer a 100% SLA when one data center goes down.
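    For completeness, the "smart client" behaviour the argument leans on can also be implemented explicitly in clients you control: resolve once, then try each address with a short connect timeout, moving on to the next on failure. A rough sketch (illustrative host, port and timeout, POSIX sockets), not a claim about what any particular browser does internally:

        // Try each resolved address in turn with a short connect timeout, falling back
        // to the next one on failure.
        #include <fcntl.h>
        #include <netdb.h>
        #include <sys/select.h>
        #include <sys/socket.h>
        #include <sys/time.h>
        #include <unistd.h>
        #include <cstdio>

        static int connect_with_timeout(const addrinfo* a, int timeout_sec) {
            int fd = socket(a->ai_family, a->ai_socktype, a->ai_protocol);
            if (fd < 0) return -1;
            fcntl(fd, F_SETFL, O_NONBLOCK);              // connect() returns immediately
            connect(fd, a->ai_addr, a->ai_addrlen);      // now in progress (or already done)
            fd_set wfds;
            FD_ZERO(&wfds);
            FD_SET(fd, &wfds);
            timeval tv = {timeout_sec, 0};
            if (select(fd + 1, nullptr, &wfds, nullptr, &tv) == 1) {
                int err = 0;
                socklen_t len = sizeof err;
                getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
                if (err == 0) return fd;                 // connected within the timeout
            }
            close(fd);
            return -1;
        }

        int main() {
            addrinfo hints{}, *res = nullptr;
            hints.ai_family = AF_UNSPEC;
            hints.ai_socktype = SOCK_STREAM;
            if (getaddrinfo("www.example.com", "80", &hints, &res) != 0) return 1;
            int fd = -1;
            for (addrinfo* a = res; a != nullptr && fd < 0; a = a->ai_next)
                fd = connect_with_timeout(a, 3);         // 3 s per address, then move on
            freeaddrinfo(res);
            std::printf("%s\n", fd >= 0 ? "connected" : "all addresses failed");
            if (fd >= 0) close(fd);
            return fd >= 0 ? 0 : 1;
        }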

    Read the article

  • Load Balancing is unusual in Apache in round-robin mode when one of the Tomcats is brought down

    - by srayker
    We are facing an unusual behavior with round-robin load balancing on Apache when one of the Tomcat servers is brought down. Our setup: we have 2 Apache web servers on the front end using the mod_jk module with round-robin load distribution, and session stickiness is enabled. These are followed by 4 Tomcat servers on which the applications run. Sometimes under heavy load, if there is slowness in our DB tier, we find that one of the Tomcats eventually goes into a hung state and needs a restart. The moment we bounce that Tomcat we see a spike in requests on one of the other servers, which then also goes into a hung state and needs a restart. Eventually all the servers end up in the same condition. What beats me is why Apache transfers the whole load to one server instead of distributing it. We are now trying worker.balancer.method=B to see if this helps resolve the issue. Any thoughts on resolving it are greatly appreciated. regards, srayker

    Read the article

  • Knight movement.... " how to output all possible moves. "

    - by josh kant
    hi tried the following code and is still not working. it is having problem on backtracking. it just fills the squares of a board with numbers but not in expected order. The code is as follows : include include using namespace std; int i=0; int permuteno = 0; bool move(int *p[], int *used[] ,int x, int y,int n, int count); bool knights (int *p[], int *used[],int x,int y,int n, int count); void output(int *p[],int n); int main(char argc, char *argv[]) { int count = 1; int n; //for size of board int x,y; // starting pos int **p; // to hold no. of combinations int **used; // to keep track of used squares on the board if ( argc != 5) { cout << "Very few arguments. Please try again."; cout << endl; return 0; } n = atoi(argv[2]); if( argv[1] <= 0 ) { cout << " Invalid board size. "; return 0; } x = atoi(argv[4]); y = atoi(argv[4]); cout << "board size: " << n << ", "<< n << endl; cout << "starting pos: " << x << ", " << y << endl; //dynamic allocation of arrays to hold permutation p = new int *[n]; for (int i = 0; i < n; i++) p[i] = new int [n]; //dynamic allocation of used arrays used = new int*[n]; for (int i = 0; i < n; i++) used[i] = new int [n]; //initializing board int i, j; for (i=0; i output(p,n); if (knights(p,used,x, y, n, count)) { cout << "solution found: " << endl < int i, j; for (i=0; i else { cout << "Solution not found" << endl; output (p, n); } knights (p,used, x, y, n, 1); //knights (p,used,x, y, n, count); cout << "no. perm " << permuteno << endl; return 0; } void output(int *p[],int n) { int i = 0,j; while ( i !=n) { for ( j=0; j bool move(int *p[], int *used[] ,int x, int y,int n,int count) { if (x < 0 || x = n) { return false; } if ( y < 0 || y = n) { return false; } if( used[x][y] != 0) { return false; } if( p[x][y] != 0) { return false; } count++; return true; } bool knights (int *p[], int *used[], int x,int y,int n ,int count) { //used[x][y] = 1; if (!move(p,used,x,y,n, count)) { return false; } if (move(p,used,x,y,n, count)) { i++; } p[x][y] = count; used[x][y] = 1; cout << "knight moved " << x << ", " << y << " " << count << endl; if(n*n == count) { return true; } //move 1 if (!knights (p,used, x-1, y-2, n, count+1)) { used[x][y] = 0; //p[x][y] = 0; } //move 2 if (!knights (p,used, x+1, y-2, n, count+1)) { used[x][y] = 0; //p[x][y] = 0; } //move 3 if (!knights (p,used, x+2, y-1, n, count+1)) { used[x][y] = 0; //p[x][y] = 0; } //move 4 if (!knights (p,used, x+2, y+1, n, count+1)) { used[x][y] = 0; //p[x][y] = 0; } //move 5 if (!knights (p,used, x+1, y+2, n, count+1)) { used[x][y] = 0; //p[x][y] = 0; } //move 6 if (!knights (p,used, x-1, y+2, n, count+1)) { used[x][y] = 0; //p[x][y] = 0; } //move 7 if (!knights (p,used, x-2, y+1, n, count+1)) { used[x][y] = 0; //p[x][y] = 0; } //move 8 if (!knights (p,used, x-2, y-1, n, count+1)) { used[x][y] = 0; //p[x][y] = 0; } permuteno++; //return true; //}while ( x*y != n*n ); return false; } I has to output all the possible combinations of the knight in a nXn board.. any help would be appreciated...

    Read the article

  • Ubuntu 10.04 & IBM DS3524 with FC multipath, inactive path is [failed][faulty] instead of [active][ghost]

    - by Graeme Donaldson
    OK, this is my setup: FC Switches IBM/Brocade, Switch1 and Switch2, independent fabrics. Server IBM x3650 M2, 2x QLogic QLE2460, 1 connected to each FC Switch. Storage IBM DS3524, 2x controllers with 4x FC ports each, but only 2x connected on each. +-----------------------------------------------------------------------+ | HBA1 Server HBA2 | +-----------------------------------------------------------------------+ | | | | | | +-----------------------------+ +------------------------------+ | Switch1 | | Switch2 | +-----------------------------+ +------------------------------+ | | | | | | | | | | | | | | | | | | | | +-----------------------------------+-----------------------------------+ | Contr A, port 3 | Contr A, port 4 | Contr B, port 3 | Contr B, port 4 | +-----------------------------------+-----------------------------------+ | Storage | +-----------------------------------------------------------------------+ My /etc/multipath.conf is from the IBM redbook for the DS3500, except I use a different setting for prio_callout, IBM uses /sbin/mpath_prio_tpc, but according to http://changelogs.ubuntu.com/changelogs/pool/main/m/multipath-tools/multipath-tools_0.4.8-7ubuntu2/changelog, this was renamed to /sbin/mpath_prio_rdac, which I'm using. devices { device { #ds3500 vendor "IBM" product "1746 FAStT" hardware_handler "1 rdac" path_checker rdac failback 0 path_grouping_policy multibus prio_callout "/sbin/mpath_prio_rdac /dev/%n" } } multipaths { multipath { wwid xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx alias array07 path_grouping_policy multibus path_checker readsector0 path_selector "round-robin 0" failback "5" rr_weight priorities no_path_retry "5" } } The output of multipath -ll with controller A as the preferred path: root@db06:~# multipath -ll sdg: checker msg is "directio checker reports path is down" sdh: checker msg is "directio checker reports path is down" array07 (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) dm-2 IBM ,1746 FASt [size=4.9T][features=1 queue_if_no_path][hwhandler=0] \_ round-robin 0 [prio=2][active] \_ 5:0:1:0 sdd 8:48 [active][ready] \_ 5:0:2:0 sde 8:64 [active][ready] \_ 6:0:1:0 sdg 8:96 [failed][faulty] \_ 6:0:2:0 sdh 8:112 [failed][faulty] If I change the preferred path using IBM DS Storage Manager to Controller B, the output swaps accordingly: root@db06:~# multipath -ll sdd: checker msg is "directio checker reports path is down" sde: checker msg is "directio checker reports path is down" array07 (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) dm-2 IBM ,1746 FASt [size=4.9T][features=1 queue_if_no_path][hwhandler=0] \_ round-robin 0 [prio=2][active] \_ 5:0:1:0 sdd 8:48 [failed][faulty] \_ 5:0:2:0 sde 8:64 [failed][faulty] \_ 6:0:1:0 sdg 8:96 [active][ready] \_ 6:0:2:0 sdh 8:112 [active][ready] According to IBM, the inactive path should be "[active][ghost]", not "[failed][faulty]". 
Despite this, I don't seem to have any I/O issues, but my syslog is being spammed with this every 5 seconds: Jun 1 15:30:09 db06 multipathd: sdg: directio checker reports path is down Jun 1 15:30:09 db06 kernel: [ 2350.282065] sd 6:0:2:0: [sdh] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Jun 1 15:30:09 db06 kernel: [ 2350.282071] sd 6:0:2:0: [sdh] Sense Key : Illegal Request [current] Jun 1 15:30:09 db06 kernel: [ 2350.282076] sd 6:0:2:0: [sdh] <<vendor>> ASC=0x94 ASCQ=0x1ASC=0x94 ASCQ=0x1 Jun 1 15:30:09 db06 kernel: [ 2350.282083] sd 6:0:2:0: [sdh] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00 Jun 1 15:30:09 db06 kernel: [ 2350.282092] end_request: I/O error, dev sdh, sector 0 Jun 1 15:30:10 db06 multipathd: sdh: directio checker reports path is down Jun 1 15:30:14 db06 kernel: [ 2355.312270] sd 6:0:1:0: [sdg] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Jun 1 15:30:14 db06 kernel: [ 2355.312277] sd 6:0:1:0: [sdg] Sense Key : Illegal Request [current] Jun 1 15:30:14 db06 kernel: [ 2355.312282] sd 6:0:1:0: [sdg] <<vendor>> ASC=0x94 ASCQ=0x1ASC=0x94 ASCQ=0x1 Jun 1 15:30:14 db06 kernel: [ 2355.312290] sd 6:0:1:0: [sdg] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00 Jun 1 15:30:14 db06 kernel: [ 2355.312299] end_request: I/O error, dev sdg, sector 0 Does anyone know how I can get the inactive path to show "[active][ghost]" instead of "[failed][faulty]"? I assume that once I can get that right then the spam in my syslog will end as well. One final thing worth mentioning is that the IBM redbook doc targets SLES 11 so I'm assuming there's something a little different under Ubuntu that I just haven't figured out yet. Update: As suggested by Mitch, I've tried removing /etc/multipath.conf, and now the output of multipath -ll looks like this: root@db06:~# multipath -ll sdg: checker msg is "directio checker reports path is down" sdh: checker msg is "directio checker reports path is down" xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxdm-1 IBM ,1746 FASt [size=4.9T][features=0][hwhandler=0] \_ round-robin 0 [prio=1][active] \_ 5:0:2:0 sde 8:64 [active][ready] \_ round-robin 0 [prio=1][enabled] \_ 5:0:1:0 sdd 8:48 [active][ready] \_ round-robin 0 [prio=0][enabled] \_ 6:0:1:0 sdg 8:96 [failed][faulty] \_ round-robin 0 [prio=0][enabled] \_ 6:0:2:0 sdh 8:112 [failed][faulty] So its more or less the same, with the same message in the syslog every 5 minutes as before, but the grouping has changed.

    Read the article

  • Can't upload project to PPA using Quickly

    - by RobinJ
    I can't get Quickly to upload my project into my PPA. I've set up my PGP key and used it so sign the code of conduct, and the PPA exists. I don't know what other usefull information I can supply. robin@RobinJ:~/Ubuntu One/Python/gtkreddit$ quickly share --ppa robinj/gtkredditGet Launchpad Settings Launchpad connection is ok gpg: WARNING: unsafe permissions on configuration file `/home/robin/.gnupg/gpg.conf' gpg: WARNING: unsafe enclosing directory permissions on configuration file `/home/robin/.gnupg/gpg.conf' gpg: WARNING: unsafe permissions on configuration file `/home/robin/.gnupg/gpg.conf' gpg: WARNING: unsafe enclosing directory permissions on configuration file `/home/robin/.gnupg/gpg.conf' Traceback (most recent call last): File "/usr/share/quickly/templates/ubuntu-application/share.py", line 138, in <module> license.licensing() File "/usr/share/quickly/templates/ubuntu-application/license.py", line 284, in licensing {'translatable': 'yes'}) File "/usr/share/quickly/templates/ubuntu-application/internal/quicklyutils.py", line 166, in change_xml_elem xml_tree.find(parent_node).insert(0, new_node) AttributeError: 'NoneType' object has no attribute 'insert' ERROR: share command failed Aborting I reported this as a bug on Launchpad, because I assume that it is a bug. If you know a quick workaround, please let me know. https://bugs.launchpad.net/ubuntu/+source/quickly/+bug/1018138

    Read the article

  • HR According to Batman

    - by D'Arcy Lussier
    Any idea who that guy is running alongside the Caped Crusader? That’s Nightwing, but you may know him as Robin…well, the first Robin anyway. There were actually like 5 Robins according to Wikipedia: Dick Grayson, the original, whose parents were circus performers killed by a gangster; Jason Todd, who was caught trying to steal tires off the Batmobile; Tim Drake, who saw Dick’s parents die and figured out who Batman and Robin were; and a few others that get into recent time travel/altered reality storylines. What does this have to do with HR? Well, it somewhat ties in with an article by Alex Papadimoulis from 2008. In the article he talks about the “Cravath System”. The Cravath system was developed by a law firm called Cravath, Swaine & Moore back in the 19th century. In a nutshell, they believed in hiring the best and brightest straight out of school. These aspiring lawyers would then begin a fight for survival in the firm, with the strong surviving. In what’s termed the “Up and Out” rule, employees needed to be promoted within 3 years or leave the company. They should achieve partner within 7 – 8 years and no later than 10 after initially coming on board (read all about the system on Wikipedia here). Back to Alex’s article, he quotes from a book published in 1947 about the law firm: Under the “Cravath system” of taking a substantial number of men annually and keeping a current constantly moving up in the office, and its philosophy of tenure, men are constantly leaving… it is often difficult to keep the best men long enough to determine whether they shall be made partners, for Cravath-trained men are always in demand, usually at premium salaries. And so we see a pattern forming here: 1. Hire a whole whack of smart college graduates 2. Put them to work 3. The ones that stick around should move up the ladder. The ones that don’t stick around served the company well and left to expound the quality of the Cravath firm. Those that didn’t fall into either of those categories were just let go. There are some interesting undercurrents to these ideas.
    If you stick around, you better keep your feet moving! I was at a Microsoft shindig a few months back, and was talking to a Microsoft employee. He shared that at MS you have 5 years to achieve a “senior” position within the company. Once you hit that mark, you can stay there for the rest of your career (he told me about a guy who’s a “senior” developer and has been for the last 20+ years working on audio drivers for Windows), but you *must* hit that mark within the timeframe. What we see with Microsoft is Cravath’s system in action, whether intentional or not: bring in smart young people and see which ones stick. You need to give people something to work towards. Saying “You must reach this level or else!” is one way to look at it. The other way is to see achieving a higher rank in the organization as something for ambitious employees to reach towards. It’s important for an organization to always have the next generation of executives waiting in the wings, and unless you’re encouraging that early on you may find yourself in a position of needing to fill positions that nobody has been working towards. Now, you might suggest that this isn’t that big of a deal because you could just hire someone from outside the organization, but the Cravath system holds to the tenet of promoting internally; develop your own talent, since your business is the best place for the future leadership to learn the business from.
    It’s OK for people to quit. Alex’s article really drives this point home, but it’s worth noting here also: it’s OK for your people to quit. In fact it’s inevitable…and more inevitable that it’ll be good people that leave. Some will stay and work towards the internal rewards of promotion, but a number will get experience, serve the organization well, and then move on to something else. This should be expected and treated as a natural business occurrence. The idea of an organization’s alumni begins to come into play here: “That guy used to work for <insert company here>”. There’s a benefit in that: those best and brightest will be drawn to your organization and your reputation will permeate your market through former staff who are sought after because of how well you nurtured them.
    The Batman Hook: All of this brings us back to Batman and his HR practice: when Dick decided he’d had enough of the Robin schtick, he quit and became his own…but he was always associated with Batman and people understood where his training had come from. To the Dark Knight’s credit, he continued training partners under the Robin brand. Luckily he didn’t have to worry about firing any of them (the ship sort of sails when you reveal a secret identity), although there was that unfortunate “quitting” of the second Robin when the Joker blew him up…but regardless, we see the Cravath system at work: bring in talent, expect great things, and be OK with whatever they decide for their careers. It’s an interesting way to approach HR, and luckily for us our business isn’t as dangerous or over-the-top as the caped crusader’s.

    Read the article

  • Separation of game and rendering logic

    - by Qua
    What is the best way to separate rendering code from the actual game engine/logic code? And is it even a good idea to separate them? Let's assume we have a game object called Knight. The Knight has to be rendered on the screen for the user to see. We're left with two choices: either we give the Knight a Render/Draw method that we can call, or we create a renderer class that takes care of rendering all knights. In the scenario where the two are separated, should the Knight still contain all the information needed to render it, or should that be separated as well? In our last project we decided to store all the information required to render an object inside the object itself, but we had a separate component to actually read that information and render the objects. The object would contain information such as size, rotation, scale, and which animation was currently playing, and based on this the renderer object would compose the screen. Frameworks such as XNA seem to think joining the object and its rendering is a good idea, but we're afraid of getting tied to a specific rendering framework, whereas building a separate rendering component gives us more freedom to change frameworks at any given time.
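    As an illustration of the split described in the last sentences, the knight can stay a plain data object exposing only what rendering needs (position, scale, rotation, current animation), while a separate renderer consumes that data and owns every call into the graphics framework. A rough sketch of that shape (names, fields and the DrawSprite hook are illustrative, and the actual draw call is deliberately left abstract so the framework stays swappable):

        // Game object carries only renderable state; the renderer owns all framework calls.
        #include <string>
        #include <vector>

        struct RenderState {                    // everything the renderer needs, nothing more
            float x = 0, y = 0;
            float scale = 1, rotation = 0;
            std::string animation = "idle";
        };

        class Knight {
        public:
            void Update(float dt) { state_.x += 10.0f * dt; }    // game logic only, no drawing
            const RenderState& GetRenderState() const { return state_; }
        private:
            RenderState state_;
        };

        class Renderer {                        // the only class that knows the graphics API
        public:
            void Draw(const std::vector<Knight>& knights) {
                for (const auto& k : knights)
                    DrawSprite(k.GetRenderState());   // translate game state into draw calls
            }
        private:
            void DrawSprite(const RenderState& s) {
                (void)s;  // placeholder: the real framework call would go here, e.g.
                          // framework->Draw(textureFor(s.animation), s.x, s.y, s.rotation, s.scale);
            }
        };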

    Read the article

  • Squid on Windows load balancing only to one server

    - by Martin L.
    After thousands of googles and trying days i cant get the load balancer/failover in squid on windows to work. Iam using squid 2.7. My webservers are 2 single NIC lighttpd and one dual nic lighttpd. server1 in this example is running squid on port 80 and lighttpd on port 8080 (just to test) Requirements: All 3 webservers running lighttpd should be balanced two option for load balancing: Best would be if server1 is busy server2 takes over, if server2 is busy server3 takes over, etc.. Round robin style evenly distributed load. Eg server1 takes first call, server2 second etc.. All requests should be treated the same way (no url rewriting or so on) Sent host headers have to be redirected to every server as http host header, speaking of "server1", "server1.company.internal" and "10.211.1.1". My approach: acl all src all acl manager proto cache_object http_port 80 accel defaultsite=server1.company.internal vhost #reverse proxy entries cache_peer 10.211.2.1 parent 8080 0 no-query originserver round-robin login=PASS name=server1_nic1 cache_peer 10.211.1.2 parent 80 0 no-query originserver round-robin login=PASS name=server2_nic1 cache_peer 10.211.2.3 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic1 cache_peer 10.211.2.4 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic2 #decl of names of squid host acl registered_name_hostdomain dstdomain server1.company.internal acl registered_name_host dstdomain server1 #ip of squid host acl registered_name_ip dstdomain 10.211.2.1 # access: redirects the correct squid hostname http_access allow registered_name_hostdomain http_access allow registered_name_host http_access allow registered_name_ip http_access deny all cache_peer_access server1_nic1 allow registered_name_hostdomain cache_peer_access server1_nic1 allow registered_name_host cache_peer_access server1_nic1 allow registered_name_ip cache_peer_access server2_nic1 allow registered_name_hostdomain cache_peer_access server2_nic1 allow registered_name_host cache_peer_access server2_nic1 allow registered_name_ip cache_peer_access server3_nic1 allow registered_name_hostdomain cache_peer_access server3_nic1 allow registered_name_host cache_peer_access server3_nic1 allow registered_name_ip cache_peer_access server3_nic2 allow registered_name_hostdomain cache_peer_access server3_nic2 allow registered_name_host cache_peer_access server3_nic2 allow registered_name_ip cache_peer_access server1_nic1 deny all cache_peer_access server2_nic1 deny all cache_peer_access server3_nic1 deny all cache_peer_access server3_nic2 deny all never_direct allow all Problems: Load balancer does not load balance other than to first server. Only if the first server is killed in any way the second will take over. I have seen the others working at some point, but definitely not as the intended load balancing described above. If the cache_peer_access is not defined sometimes the wrong hostname is sent to the backend webserver and this always depends on the defaultsite= parameter. Probably because the host header on the request to squid is not set and its replaced by defaultsite. Leaving out defaultsite didnt solve the problem. The only workaround i found for this is the current approach with cache_peer_access. Questions: Does the cache_peer_access influence the round-robin? Is there a better workaround to pass the host header to the backed webservers? Which parameters do increase the speed of load balancing or does anyone have a better approach? -Martin

    Read the article

  • How much does a website like bytes.com earn?

    - by robin das
    I am running a website similar to bytes.com (an IT Q&A site); my site attracts 600 unique visitors daily. We are positioning ourselves as a bytes.com-style site. Can anyone at least tell me whether we can earn some serious money with this kind of website? Websiteoutlook estimates bytes.com at 700,636 daily pageviews, and it uses Google ads. Can anyone let me know what kind of earnings we can expect for a dotcom like bytes.com? Please shed some light on this topic, as a lot of energy, time and money goes into building this kind of website. Thanks, robin Das

    Read the article

  • How much does a website like bytes.com earn?

    - by robin das
    I am running a website similar to bytes.com (an IT Q&A site); my site attracts 600 unique visitors daily. We are positioning ourselves as a bytes.com-style site. Can anyone at least tell me whether we can earn some serious money with this kind of website? Websiteoutlook estimates bytes.com at 700,636 daily pageviews. Can anyone let me know what kind of earnings we can expect for a dotcom like bytes.com? Please shed some light on this topic, as a lot of energy, time and money goes into building this kind of website. Thanks, robin Das

    Read the article

  • Round Table - Minimum Cost Algorithm

    - by 7Aces
    Problem Link - http://www.iarcs.org.in/zco2013/index.php/problems/ROUNDTABLE It's dinner time in Castle Camelot, and the fearsome Knights of the Round Table are clamouring for dessert. You, the chef, are in a soup. There are N knights, including King Arthur, each with a different preference for dessert, but you cannot afford to make desserts for all of them. You are given the cost of manufacturing each Knight's preferred dessert-since it is a round table, the list starts with the cost of King Arthur's dessert, and goes counter-clockwise. You decide to pick the cheapest desserts to make, such that for every pair of adjacent Knights, at least one gets his dessert. This will ensure that the Knights do not protest. What is the minimum cost of tonight's dinner, given this condition? I used the Dynamic Programming approach, considering the smallest of i-1 & i-2, & came up with the following code - #include<cstdio> #include<algorithm> using namespace std; int main() { int n,i,j,c,f; scanf("%d",&n); int k[n],m[n][2]; for(i=0;i<n;++i) scanf("%d",&k[i]); m[0][0]=k[0]; m[0][1]=0; m[1][0]=k[1]; m[1][1]=1; for(i=2;i<n;++i) { c=1000; for(j=i-2;j<i;++j) { if(m[j][0]<c) { c=m[j][0]; f=m[j][1];} } m[i][0]=c+k[i]; m[i][1]=f; } if(m[n-2][0]<m[n-1][0] && m[n-2][1]==0) printf("%d\n",m[n-2][0]); else printf("%d\n",m[n-1][0]); } I used the second dimension of the m array to store from which knight the given sequence started (1st or 2nd). I had to do this because of the case when m[n-2]<m[n-1] but the sequence started from knight 2, since that would create two adjacent knights without dessert. The problem arises because of the table's round shape. Now an anomaly arises when I consider the case - 2 1 1 2 1 2. The program gives an answer 5 when the answer should be 4, by picking the 1st, 3rd & 5th knight. At this point, I started to doubt my initial algorithm (approach) itself! Where did I go wrong?
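    The circular adjacency is what breaks a single left-to-right DP: whether the last knight may be skipped depends on what was decided for the first one. A common way around it (this is minimum-weight vertex cover on a cycle) is to run the linear DP twice: once assuming King Arthur pays, so the wrap-around pair is already covered, and once assuming he does not, which forces both of his neighbours to pay; then take the cheaper of the two. A rough sketch of that idea, not necessarily the intended ZCO solution, using the costs from the failing case above:

        // Minimum total cost so that, around the circular table, every adjacent pair of
        // knights has at least one dessert. Two linear DP passes: knight 0 forced in,
        // then forced out.
        #include <algorithm>
        #include <cstdio>
        #include <vector>

        const long long INF = 1000000000000000000LL;

        // DP along the line 0..n-1; pick0 fixes whether knight 0 pays or not.
        long long solve_line(const std::vector<int>& c, bool pick0) {
            int n = (int)c.size();
            std::vector<long long> take(n), skip(n);
            take[0] = pick0 ? c[0] : INF;
            skip[0] = pick0 ? INF : 0;
            for (int i = 1; i < n; ++i) {
                take[i] = c[i] + std::min(take[i - 1], skip[i - 1]);
                skip[i] = take[i - 1];           // skipping i is only legal if i-1 paid
            }
            // If knight 0 was skipped, the wrap-around pair forces knight n-1 to pay.
            return pick0 ? std::min(take[n - 1], skip[n - 1]) : take[n - 1];
        }

        int main() {
            std::vector<int> cost = {2, 1, 1, 2, 1, 2};
            long long best = std::min(solve_line(cost, true), solve_line(cost, false));
            std::printf("%lld\n", best);         // prints 4 for this case
        }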

    Read the article
