Search Results

Search found 26158 results on 1047 pages for 'google panda algorithm'.

Page 568 of 1047

  • Fair 2-combinations

    - by Tometzky
    I need to fairly assign 2 experts out of x experts (x is rather small - less than 50) to each of n applications, so that: each expert has the same number of applications (+-1); each pair of experts (2-combination of x) has the same number of applications (+-1). It is simple to generate all 2-combinations:

        for (i = 0; i < x; i++) {
            for (j = i + 1; j < x; j++) {
                combinations.append(tuple(i, j));
            }
        }

    But to assign experts fairly I need to hand out the combinations to the applications in the right order, for example:

        experts:            0 1 2 3 4
        fair combinations:  counts
          01                1 1 0 0 0
          23                1 1 1 1 0
          04                2 1 1 1 1
          12                2 2 2 1 1
          34                2 2 2 2 2
          02                3 2 3 2 2
          13                3 3 3 3 2
          14                3 4 3 3 3
          03                4 4 3 4 3
          24                4 4 4 4 4

    I'm unable to come up with a good algorithm for this (the best I came up with is rather complicated and has O(x^4) complexity). Could you help me?
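    One direction worth trying (my own sketch, not from the original answers): a greedy ordering that always picks the not-yet-used pair whose two experts currently have the fewest assignments. For x = 5 it reproduces the ordering in the example above; it is still roughly O(x^4) in this naive form, but that is harmless for x < 50. It assumes the resulting pair order is then cycled over the n applications.

        from itertools import combinations

        def fair_pair_order(x):
            """Order all 2-combinations of x experts so per-expert counts stay balanced.

            Greedy sketch, not a proven-optimal construction.
            """
            counts = [0] * x
            remaining = list(combinations(range(x), 2))
            ordered = []
            while remaining:
                # pick the pair whose two experts are currently least loaded
                best = min(remaining, key=lambda p: (counts[p[0]] + counts[p[1]],
                                                     max(counts[p[0]], counts[p[1]])))
                remaining.remove(best)
                ordered.append(best)
                counts[best[0]] += 1
                counts[best[1]] += 1
            return ordered

        pairs = fair_pair_order(5)
        print(pairs)  # application k gets pairs[k % len(pairs)]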

    Read the article

  • What happens when several Java servlet apps are running on the same port?

    - by Frank
    Something strange happened to my servlets and I think I've figured out why, yet I'm more confused. I used NetBeans 6.7 to develop a PayPal IPN (Instant Payment Notification) message servlet; it listens on port 8080 by default for PayPal IPN messages. I used some sample Java code from the PayPal web site, but when it ran, only about 1 in 10 messages came through, and they looked correct. But why 1 in 10? Not 100%, or none?

    So I asked some questions here and got some advice; one answer in particular pointed me to Google's App Engine, so I downloaded it and ran the demo guestbook while my IPN servlet was still running in NetBeans. Then the strange thing happened: after I entered "appengine-java-sdk-1.3.2\bin\dev_appserver.cmd appengine-java-sdk-1.3.2\demos\guestbook\war" at the command prompt, I went to "http://localhost:8080/" in my browser. I thought I would see the Google demo guestbook page. No: what I saw was another servlet I developed 2 years ago, "Web Academy", an online course registration app. How can that happen? I never started it, and I haven't touched that project for years.

    I guess it's because that app is also listening on port 8080, and now I understand why the IPN messages only came through 1 in 10 times: another servlet was also listening on that port and could have got the messages intended for the IPN servlet, or somehow the two servlets' processes got mixed up, couldn't respond to PayPal properly, and failed. To verify some of my guesses, I shut down NetBeans and ran the Google guestbook again at the prompt; this time http://localhost:8080/ in my browser showed the demo guestbook page. My URLs look like this: [A] PayPal IPN: http://localhost:8080/PayPal_App/PayPal_Servlet [B] Web Academy: http://localhost:8080/

    So now, my questions are: <1> Why was the "Web Academy" servlet auto-started when I ran the PayPal servlet? <2> If I change the IPN listening port to 8083, does that mean I can run both of them on my PC at the same time without them affecting each other? <3> But I still don't understand: [A] and [B] look different; if a page for [A] is refreshed it should show the PayPal content, and another page looking at [B] should show the Web Academy content, and that's exactly what happens when I start NetBeans to run the PayPal servlet - both pages show their respective content correctly, side by side, without interfering with each other. How come the IPN messages couldn't get through 100% of the time? <4> In NetBeans, how do I assign port 8080 to servlet [A] and port 8083 to servlet [B]? <5> How do I turn off the auto-start of Web Academy by NetBeans?

    Read the article

  • Are there well-known examples of web products that were killed by slow service?

    - by Jeremy Wadhams
    It's a basic tenet of UX design that users prefer fast pages. http://www.useit.com/alertbox/response-times.html http://www.nytimes.com/2012/03/01/technology/impatient-web-users-flee-slow-loading-sites.html?pagewanted=all It's supposedly even baked into Google's ranking algorithm now: fast sites rank higher, all else being equal. But are there well known examples of web services where the popular narrative is "it was great, but it was so slow people took their money elsewhere"? I can pretty easily think of example problems with scale (Twitter's fail whale) or reliability (Netflix and Pinterest outages caused by a single datacenter in a storm). But can (lack of) speed really kill?

    Read the article

  • Proper way to measure the scalability of a web application

    - by Jorge
    Let's say I have a web application that is going to have 300 users, and each one has to see data in real time. Imagine that each client makes an AJAX call to the server to see, in real time, what's happening as the data changes; these calls are made every 300 ms per user. I know I can run a simulation to see whether my server's hardware supports this scenario. But what happens when the number of users starts to grow? Is there a way to measure the hardware needed to handle this growth - a tool, a formula, an algorithm? Or would you recommend implementing a distributed application with multiple servers and load balancing?
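    For a rough first estimate (my own back-of-the-envelope sketch, not an answer from the thread): 300 clients polling every 300 ms generate about 1,000 requests per second, and the load scales roughly linearly with the number of users. Measure what one server actually sustains with a load-testing tool (JMeter, Gatling or similar), then divide; the per-server figure below is a made-up placeholder.

        import math

        def required_rps(users, poll_interval_ms):
            """Requests per second generated by `users` clients polling every poll_interval_ms."""
            return users * (1000.0 / poll_interval_ms)

        def servers_needed(users, poll_interval_ms, measured_rps_per_server):
            """Very rough server count; ignores headroom, traffic peaks and shared state."""
            return math.ceil(required_rps(users, poll_interval_ms) / measured_rps_per_server)

        print(required_rps(300, 300))           # 1000.0 requests/second today
        print(servers_needed(3000, 300, 4000))  # 3 servers if one box sustains ~4000 req/s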

    Read the article

  • Shortest Common Superstring: find shortest string that contains all given string fragments

    - by occulus
    Given some string fragments, I would like to find the shortest possible single string ("output string") that contains all the fragments. Fragments can overlap each other in the output string. Example: for the string fragments

        BCDA
        AGF
        ABC

    the following output string contains all fragments, and was made by naive appending:

        BCDAAGFABC

    However this output string is better (shorter), as it employs overlaps:

        ABCDAGF
        ABC
         BCDA
            AGF

    I'm looking for algorithms for this problem. It's not absolutely important to find the strictly shortest output string, but the shorter the better. I'm looking for an algorithm better than the obvious naive one that would try appending all permutations of the input fragments and removing overlaps (which would appear to be NP-Complete). I've started work on a solution and it's proving quite interesting; I'd like to see what other people might come up with. I'll add my work-in-progress to this question in a while.
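    A common starting point (my sketch, not taken from the question): the classic greedy approximation - repeatedly merge the two fragments with the largest suffix/prefix overlap until one string remains. It is not guaranteed to be shortest (the exact problem is NP-hard), but it is simple and usually close; on the example above it produces ABCDAGF.

        def overlap(a, b):
            """Length of the longest suffix of a that is also a prefix of b."""
            for k in range(min(len(a), len(b)), 0, -1):
                if a.endswith(b[:k]):
                    return k
            return 0

        def greedy_superstring(fragments):
            # drop fragments that are fully contained in another fragment
            frags = [f for f in fragments
                     if not any(f != g and f in g for g in fragments)]
            while len(frags) > 1:
                best = (-1, None, None)  # (overlap length, i, j)
                for i, a in enumerate(frags):
                    for j, b in enumerate(frags):
                        if i != j:
                            k = overlap(a, b)
                            if k > best[0]:
                                best = (k, i, j)
                k, i, j = best
                merged = frags[i] + frags[j][k:]
                frags = [f for idx, f in enumerate(frags) if idx not in (i, j)] + [merged]
            return frags[0]

        print(greedy_superstring(["BCDA", "AGF", "ABC"]))  # ABCDAGF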

    Read the article

  • Game Institute Math Courses

    - by W3Geek
    I'm 21 years old and I suck at math, I mean really bad. I don't have the necessary logic to apply it to programming, and I would like to learn the math and the logic of applying it. I found Game Institute (http://www.gameinstitute.com) a while back and heard a lot of praise about them. Are their math courses any good? Thank you.

    Edit: My high school was terrible and did not prepare me for any math. I am fairly decent at programming, I just don't have the logic to apply any mathematics to it; as an example, I don't understand the algorithm for finding the size of a user's screen. Yes, I have heard of Khan Academy (http://www.khanacademy.org/) and I have completed a lot of the math on that site, but I still don't have the logic to apply any of it to programming.

    Read the article

  • How to handle multiple openIDs for the same user

    - by Sinan
    For my site I am using a login system much like the one on SO: a user can log in with his Facebook, Google (Gmail OpenID) or Twitter account. This question is not about specific OAuth or OpenID implementations. The question is how to know that it is the same user when he logs in with different providers. Let me give an example: Bobo comes to the site and logs in by clicking on "Login with Facebook". Because this is his first visit, we create an account for him. Later Bobo comes back, and this time he clicks on "Login with Google". So how do I know this is the same person, so I can add this provider to his existing account instead of creating a new (and duplicate) account? Can I rely solely on the email address? What is the best way to handle this? How does SO do it? Any ideas?
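    One pattern commonly used for this (my own sketch, not an answer from the thread): keep one users record and a separate identities record keyed by (provider, provider_user_id), and only auto-link by email when the provider asserts the address is verified; otherwise offer an explicit "link another login" action while the user is signed in. The table and field names below are hypothetical.

        # Hypothetical in-memory model of the "one user, many identities" layout.
        users = {}        # user_id -> {"email": ...}
        identities = {}   # (provider, provider_user_id) -> user_id

        def login(provider, provider_user_id, email, email_verified):
            key = (provider, provider_user_id)
            if key in identities:                      # this exact identity is already known
                return identities[key]
            if email_verified:                         # link only on a trusted email claim
                for uid, u in users.items():
                    if u["email"] == email:
                        identities[key] = uid          # attach new provider to existing user
                        return uid
            uid = len(users) + 1                       # otherwise create a fresh account
            users[uid] = {"email": email}
            identities[key] = uid
            return uid

        a = login("facebook", "fb-123", "bobo@example.com", True)
        b = login("google", "openid-456", "bobo@example.com", True)
        assert a == b    # same person, two providers, one account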

    Read the article

  • Open source / commercial alternative for jiffy.js?

    - by Marcel
    Hi, we'd like to measure "client-side web site performance", i.e. we would like to have answers to the following questions: how long did it take to load and render a web page (including all graphics, scripts, etc.); this, bundled with additional information like user agent, operating system, etc.; a graphical tool to analyze the data would be perfect. I know Jiffy (http://code.google.com/p/jiffy-web/wiki/Jiffy_js) but it isn't maintained anymore. I would prefer either a hosted solution (like Google Analytics) or a Java-based solution that we deploy ourselves. Do you know something like that? Thanks, Marc

    Read the article

  • What's my best bet for replacing plain text links with anchor tags in a string? .NET

    - by Craig Bovis
    What is my best option for converting plain-text links within a string into anchor tags? Say for example I have "I went and searched on http://www.google.com/ today". I would want to change that to "I went and searched on <a href="http://www.google.com/">http://www.google.com/</a> today". The method also needs to be safe from any kind of XSS attack, since the strings are user generated. They will be safe before parsing, so I just need to make sure that no vulnerabilities are introduced through parsing the URLs.
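    The usual order of operations (my illustration, shown in Python rather than .NET since the idea is language-agnostic): HTML-encode the whole user string first so nothing the user typed is treated as markup, and only then wrap recognised URLs in anchor tags. In .NET the same shape maps to HtmlEncode plus Regex.Replace; the regex below is deliberately simplistic (it will grab trailing punctuation, for instance).

        import html
        import re

        URL_RE = re.compile(r'(https?://[^\s<]+)')

        def linkify(text):
            escaped = html.escape(text)   # neutralise any user-supplied HTML first
            return URL_RE.sub(r'<a href="\1" rel="nofollow">\1</a>', escaped)

        print(linkify('I went and searched on http://www.google.com/ today'))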

    Read the article

  • add target="_blank" to all links within a certain div

    - by kakkalo
    So let's say I've got the following code:

        <div id="link_other">
          <ul>
            <li><a href="http://www.google.com/">google</a></li>
            <li>
              <div class="some_class">dsalkfnm sladkfm
                <a href="http://www.yahoo.com/">yahoo</a>
              </div>
            </li>
          </ul>
        </div>

    In this case the script should add target="_blank" to all links within the "link_other" div. How can I do that? Thank you.

    Read the article

  • How does refer(r)er work technically?

    - by NoozNooz42
    I don't understand: how are web servers and trackers like Google Analytics able to track referrals? Is it part of HTTP? Is it some (un)specified behavior of the browsers? Apparently every time you click on a link on a web page, the URL of the original web page is passed along with the request. What is the exact mechanism behind that? Is it specified by some spec? I've read a few docs and I've played with my own Tomcat server and my own Google Analytics account, but I don't understand how the "magic" happens. Bonus (totally related) question: if, on my own website (served by Tomcat), I put a link to another site, does the other site see my website as the "referrer" without me doing anything special in Tomcat?
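    For what it's worth (my own illustration, not an answer from the thread): the mechanism is the "Referer" request header, which is part of the HTTP spec (the misspelling is historical). The browser fills it in automatically with the URL of the page you clicked from, and script-based trackers read the same information via document.referrer in JavaScript, so the linking site does not have to do anything special. A minimal sketch of a server that simply reads the header:

        # Minimal sketch: run it, then follow a link to http://localhost:8000/ from any page.
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class Handler(BaseHTTPRequestHandler):
            def do_GET(self):
                referrer = self.headers.get("Referer", "(none)")  # set by the browser, not by any server
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(("You came from: " + referrer + "\n").encode())

        if __name__ == "__main__":
            HTTPServer(("localhost", 8000), Handler).serve_forever()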

    Read the article

  • Speed up lighting in deferred shading

    - by kochol
    I implemented a simple deferred shading renderer. I use 3 G-buffers for storing position (R32F), normal (G16R16F) and albedo (ARGB8). I use the sphere map algorithm to store normals in world space. Currently I use the inverse of the view * projection matrix to calculate the position of each pixel from the stored depth value.

    First, I want to avoid the per-pixel matrix multiplication for calculating the position. Is there another way to store and calculate position in the G-buffer without the need for a matrix multiplication?

    Second, storing the normal in view space: all lighting in my engine is currently in world space, but I want to do the lighting in view space to speed up my lighting pass.

    I want an optimized lighting pass for my deferred engine.

    Read the article

  • Detect duplicates in subsets of a set of elements

    - by Abhinav Shrivastava
    I have a set of numbers, say: 1 1 2 8 5 6 6 7 8 8 4 2 ... I want to detect duplicate elements in subsets (of a given size, say k) of these numbers. For example, consider the consecutive subsets for k = 3:

        Subset 1: {1,1,2}
        Subset 2: {1,2,8}
        Subset 3: {2,8,5}
        Subset 4: {8,5,6}
        Subset 5: {5,6,6}
        Subset 6: {6,6,7}
        ...

    So my algorithm should detect that subsets 1, 5 and 6 contain duplicates. My approach: 1) copy the first k elements to a temporary array (vector); 2) using the <algorithm> #include file from the C++ STL and unique(), determine whether the size of the vector changes. Any other clue how to approach this problem?
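    An alternative that avoids re-examining each window from scratch (my sketch, shown in Python though the idea ports directly to C++ with std::unordered_map): keep a count of each value in the current window of size k and update it as the window slides by one element, tracking how many values currently occur more than once. That makes the whole scan O(n) instead of O(n * k log k).

        from collections import Counter

        def windows_with_duplicates(nums, k):
            """Return 1-based indices of the size-k windows that contain a duplicate."""
            counts = Counter(nums[:k])
            repeated = sum(1 for c in counts.values() if c > 1)  # values occurring twice or more
            result = [1] if repeated > 0 else []
            for i in range(k, len(nums)):
                out, new = nums[i - k], nums[i]
                counts[out] -= 1
                if counts[out] == 1:      # `out` no longer repeats
                    repeated -= 1
                counts[new] += 1
                if counts[new] == 2:      # `new` just became a repeat
                    repeated += 1
                if repeated > 0:
                    result.append(i - k + 2)
            return result

        print(windows_with_duplicates([1, 1, 2, 8, 5, 6, 6, 7, 8, 8, 4, 2], 3))  # [1, 5, 6, 8, 9]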

    Read the article

  • Seeking some advice on pursuing MS in CS from Stanford or Carnegie Mellon or Caltech

    - by avi
    What kinds of projects are given preference by top-notch colleges like Stanford, Caltech, etc. for admission into an MS programme in Computer Science? I have an average academic portfolio: I'm pursuing a B.Tech at a not-so-popular university in India with an aggregate of 67%. I'm good at designing algorithms and have good knowledge of the core subjects, but I'm helpless with my percentage. So I think the only way I can impress them is with my project(s). Can anyone please suggest the kinds of projects that are given preference by such top-level institutes? Could you also suggest some good projects? My area of interest is Artificial Intelligence, or any application/software/algorithm design that could be of some help to ordinary people. Or if you have any other random idea for my project, please share it with me. Note: web-based projects and management projects like library management wouldn't be my priority.

    Read the article

  • why the hell does x,y = zip(*zip(a,b)) work in Python?

    - by Mike Dewar
    OK, I love Python's zip() function. I use it all the time, it's brilliant. Every now and again I want to do the opposite of zip(), think "I used to know how to do that", then google "python unzip", then remember that one uses this magical * to unzip a zipped list of tuples. Like this:

        x = [1,2,3]
        y = [4,5,6]
        zipped = zip(x,y)
        unzipped_x, unzipped_y = zip(*zipped)

        unzipped_x
        Out[30]: (1, 2, 3)
        unzipped_y
        Out[31]: (4, 5, 6)

    What on earth is going on? What is that magical asterisk doing? Where else can it be applied, and what other amazing awesome things in Python are so mysterious and hard to google?
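    For what it's worth, a short illustration of my own (not from the thread): the asterisk is Python's argument-unpacking operator, so zip(*zipped) is just zip(pair1, pair2, ...), and zipping the rows of a list of pairs transposes it - which is exactly "unzip".

        zipped = [(1, 4), (2, 5), (3, 6)]

        # zip(*zipped) unpacks the list into separate arguments, i.e. it is equivalent to:
        same = zip((1, 4), (2, 5), (3, 6))

        # zip then groups first elements together, second elements together, ...: a transpose.
        print(list(zip(*zipped)))   # [(1, 2, 3), (4, 5, 6)]
        print(list(same))           # [(1, 2, 3), (4, 5, 6)]

        # The same * shows up in function definitions, where it collects arguments:
        def f(*args):
            return args

        print(f(1, 2, 3))           # (1, 2, 3)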

    Read the article

  • parsing Firefox bookmarks using regular expressions

    - by SIFE
    I tried to parse a Firefox bookmarks file (the JSON exported version) using these attempts:

        cat boo.json | grep '\"uri\"\:\"^http\://[a-zA-Z0-9\-\.]+\.[a-zA-Z]{2,3}\"'
        cat boo.json | grep '"uri"\:"^http\://[a-zA-Z0-9\-\.]+\.[a-zA-Z]{2,3}'
        cat boo.json | grep '"uri"\:"^http\://[a-zA-Z0-9\-\.]+\.[a-zA-Z]{2,3}"'

    And a few others, but all fail. The JSON bookmarks file looks like this:

        .........."uri":"http://www.google.com/?"......"uri":"http://stackoverflow.com/"

    So the output should look like this:

        "uri":"http://www.google.com/?"
        "uri":"http://stackoverflow.com/"

    What is the missing part in my regular expression?
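    Two observations of my own (not from the thread): first, the ^ inside the pattern and the + / {2,3} quantifiers are what break these commands - plain grep uses basic regular expressions, and ^ only anchors at the start of a line anyway - so something like grep -oE '"uri":"https?://[^"]*"' boo.json should already print just the matching parts. Second, since the export is JSON, parsing it is usually more robust than grepping it; a sketch (file name assumed to be boo.json, as above):

        import json

        def iter_uris(node):
            """Walk the Firefox bookmark tree and yield every 'uri' field."""
            if isinstance(node, dict):
                if "uri" in node:
                    yield node["uri"]
                for child in node.get("children", []):
                    yield from iter_uris(child)

        with open("boo.json", encoding="utf-8") as f:
            root = json.load(f)

        for uri in iter_uris(root):
            if uri.startswith(("http://", "https://")):
                print('"uri":"%s"' % uri)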

    Read the article

  • getaddrinfo appears to return different results between Windows and Ubuntu?

    - by MrDuk
    I have the following two sets of code.

    Windows:

        #undef UNICODE

        #include <winsock2.h>
        #include <ws2tcpip.h>
        #include <stdio.h>

        // link with Ws2_32.lib
        #pragma comment (lib, "Ws2_32.lib")

        int __cdecl main(int argc, char **argv)
        {
            //-----------------------------------------
            // Declare and initialize variables
            WSADATA wsaData;
            int iResult;
            INT iRetval;
            DWORD dwRetval;

            argv[1] = "www.google.com";
            argv[2] = "80";
            int i = 1;

            struct addrinfo *result = NULL;
            struct addrinfo *ptr = NULL;
            struct addrinfo hints;

            struct sockaddr_in *sockaddr_ipv4;
            // struct sockaddr_in6 *sockaddr_ipv6;
            LPSOCKADDR sockaddr_ip;

            char ipstringbuffer[46];
            DWORD ipbufferlength = 46;

            /*
            // Validate the parameters
            if (argc != 3) {
                printf("usage: %s <hostname> <servicename>\n", argv[0]);
                printf("getaddrinfo provides protocol-independent translation\n");
                printf(" from an ANSI host name to an IP address\n");
                printf("%s example usage\n", argv[0]);
                printf(" %s www.contoso.com 0\n", argv[0]);
                return 1;
            }
            */

            // Initialize Winsock
            iResult = WSAStartup(MAKEWORD(2, 2), &wsaData);
            if (iResult != 0) {
                printf("WSAStartup failed: %d\n", iResult);
                return 1;
            }

            //--------------------------------
            // Setup the hints address info structure
            // which is passed to the getaddrinfo() function
            ZeroMemory( &hints, sizeof(hints) );
            hints.ai_family = AF_UNSPEC;
            hints.ai_socktype = SOCK_STREAM;
            // hints.ai_protocol = IPPROTO_TCP;

            printf("Calling getaddrinfo with following parameters:\n");
            printf("\tnodename = %s\n", argv[1]);
            printf("\tservname (or port) = %s\n\n", argv[2]);

            //--------------------------------
            // Call getaddrinfo(). If the call succeeds,
            // the result variable will hold a linked list
            // of addrinfo structures containing response
            // information
            dwRetval = getaddrinfo(argv[1], argv[2], &hints, &result);
            if ( dwRetval != 0 ) {
                printf("getaddrinfo failed with error: %d\n", dwRetval);
                WSACleanup();
                return 1;
            }

            printf("getaddrinfo returned success\n");

            // Retrieve each address and print out the hex bytes
            for(ptr=result; ptr != NULL ;ptr=ptr->ai_next) {

                printf("getaddrinfo response %d\n", i++);
                printf("\tFlags: 0x%x\n", ptr->ai_flags);
                printf("\tFamily: ");
                switch (ptr->ai_family) {
                    case AF_UNSPEC:
                        printf("Unspecified\n");
                        break;
                    case AF_INET:
                        printf("AF_INET (IPv4)\n");
                        sockaddr_ipv4 = (struct sockaddr_in *) ptr->ai_addr;
                        printf("\tIPv4 address %s\n", inet_ntoa(sockaddr_ipv4->sin_addr) );
                        break;
                    case AF_INET6:
                        printf("AF_INET6 (IPv6)\n");
                        // the InetNtop function is available on Windows Vista and later
                        // sockaddr_ipv6 = (struct sockaddr_in6 *) ptr->ai_addr;
                        // printf("\tIPv6 address %s\n",
                        //     InetNtop(AF_INET6, &sockaddr_ipv6->sin6_addr, ipstringbuffer, 46) );

                        // We use WSAAddressToString since it is supported on Windows XP and later
                        sockaddr_ip = (LPSOCKADDR) ptr->ai_addr;
                        // The buffer length is changed by each call to WSAAddresstoString
                        // So we need to set it for each iteration through the loop for safety
                        ipbufferlength = 46;
                        iRetval = WSAAddressToString(sockaddr_ip, (DWORD) ptr->ai_addrlen, NULL,
                            ipstringbuffer, &ipbufferlength );
                        if (iRetval)
                            printf("WSAAddressToString failed with %u\n", WSAGetLastError() );
                        else
                            printf("\tIPv6 address %s\n", ipstringbuffer);
                        break;
                    case AF_NETBIOS:
                        printf("AF_NETBIOS (NetBIOS)\n");
                        break;
                    default:
                        printf("Other %ld\n", ptr->ai_family);
                        break;
                }
                printf("\tSocket type: ");
                switch (ptr->ai_socktype) {
                    case 0:
                        printf("Unspecified\n");
                        break;
                    case SOCK_STREAM:
                        printf("SOCK_STREAM (stream)\n");
                        break;
                    case SOCK_DGRAM:
                        printf("SOCK_DGRAM (datagram) \n");
                        break;
                    case SOCK_RAW:
                        printf("SOCK_RAW (raw) \n");
                        break;
                    case SOCK_RDM:
                        printf("SOCK_RDM (reliable message datagram)\n");
                        break;
                    case SOCK_SEQPACKET:
                        printf("SOCK_SEQPACKET (pseudo-stream packet)\n");
                        break;
                    default:
                        printf("Other %ld\n", ptr->ai_socktype);
                        break;
                }
                printf("\tProtocol: ");
                switch (ptr->ai_protocol) {
                    case 0:
                        printf("Unspecified\n");
                        break;
                    case IPPROTO_TCP:
                        printf("IPPROTO_TCP (TCP)\n");
                        break;
                    case IPPROTO_UDP:
                        printf("IPPROTO_UDP (UDP) \n");
                        break;
                    default:
                        printf("Other %ld\n", ptr->ai_protocol);
                        break;
                }
                printf("\tLength of this sockaddr: %d\n", ptr->ai_addrlen);
                printf("\tCanonical name: %s\n", ptr->ai_canonname);
            }

            freeaddrinfo(result);
            WSACleanup();

            return 0;
        }

    Ubuntu:

        /*
        ** listener.c -- a datagram sockets "server" demo
        */

        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        #include <errno.h>
        #include <string.h>
        #include <sys/types.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>
        #include <netdb.h>

        #define MYPORT "4950"    // the port users will be connecting to
        #define MAXBUFLEN 100

        // get sockaddr, IPv4 or IPv6:
        void *get_in_addr(struct sockaddr *sa)
        {
            if (sa->sa_family == AF_INET) {
                return &(((struct sockaddr_in*)sa)->sin_addr);
            }
            return &(((struct sockaddr_in6*)sa)->sin6_addr);
        }

        int main(void)
        {
            int sockfd;
            struct addrinfo hints, *servinfo, *p;
            int rv;
            int numbytes;
            struct sockaddr_storage their_addr;
            char buf[MAXBUFLEN];
            socklen_t addr_len;
            char s[INET6_ADDRSTRLEN];

            memset(&hints, 0, sizeof hints);
            hints.ai_family = AF_UNSPEC; // set to AF_INET to force IPv4
            hints.ai_socktype = SOCK_DGRAM;
            hints.ai_flags = AI_PASSIVE; // use my IP

            if ((rv = getaddrinfo(NULL, MYPORT, &hints, &servinfo)) != 0) {
                fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rv));
                return 1;
            }

            // loop through all the results and bind to the first we can
            for(p = servinfo; p != NULL; p = p->ai_next) {
                if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) {
                    perror("listener: socket");
                    continue;
                }

                if (bind(sockfd, p->ai_addr, p->ai_addrlen) == -1) {
                    close(sockfd);
                    perror("listener: bind");
                    continue;
                }

                break;
            }

            if (p == NULL) {
                fprintf(stderr, "listener: failed to bind socket\n");
                return 2;
            }

            freeaddrinfo(servinfo);

            printf("listener: waiting to recvfrom...\n");

            addr_len = sizeof their_addr;
            if ((numbytes = recvfrom(sockfd, buf, MAXBUFLEN-1 , 0,
                (struct sockaddr *)&their_addr, &addr_len)) == -1) {
                perror("recvfrom");
                exit(1);
            }

            printf("listener: got packet from %s\n",
                inet_ntop(their_addr.ss_family,
                    get_in_addr((struct sockaddr *)&their_addr),
                    s, sizeof s));
            printf("listener: packet is %d bytes long\n", numbytes);
            buf[numbytes] = '\0';
            printf("listener: packet contains \"%s\"\n", buf);

            close(sockfd);

            return 0;
        }

    When I attempt www.google.com, I don't get the IPv6 socket returned on Windows - why is this?
    Outputs (Ubuntu):

        caleb@ub1:~/Documents/dev/cs438/mp0/MP0$ ./a.out www.google.com
        IP addresses for www.google.com:
        IPv4: 74.125.228.115
        IPv4: 74.125.228.116
        IPv4: 74.125.228.112
        IPv4: 74.125.228.113
        IPv4: 74.125.228.114
        IPv6: 2607:f8b0:4004:803::1010

    Outputs (Windows):

        Calling getaddrinfo with following parameters:
            nodename = www.google.com
            servname (or port) = 80

        getaddrinfo returned success
        getaddrinfo response 1
            Flags: 0x0
            Family: AF_INET (IPv4)
            IPv4 address 74.125.228.114
            Socket type: SOCK_STREAM (stream)
            Protocol: Unspecified
            Length of this sockaddr: 16
            Canonical name: (null)
        getaddrinfo response 2
            Flags: 0x0
            Family: AF_INET (IPv4)
            IPv4 address 74.125.228.115
            Socket type: SOCK_STREAM (stream)
            Protocol: Unspecified
            Length of this sockaddr: 16
            Canonical name: (null)
        getaddrinfo response 3
            Flags: 0x0
            Family: AF_INET (IPv4)
            IPv4 address 74.125.228.116
            Socket type: SOCK_STREAM (stream)
            Protocol: Unspecified
            Length of this sockaddr: 16
            Canonical name: (null)
        getaddrinfo response 4
            Flags: 0x0
            Family: AF_INET (IPv4)
            IPv4 address 74.125.228.112
            Socket type: SOCK_STREAM (stream)
            Protocol: Unspecified
            Length of this sockaddr: 16
            Canonical name: (null)
        getaddrinfo response 5
            Flags: 0x0
            Family: AF_INET (IPv4)
            IPv4 address 74.125.228.113
            Socket type: SOCK_STREAM (stream)
            Protocol: Unspecified
            Length of this sockaddr: 16
            Canonical name: (null)

    Read the article

  • Page zoom slows down page rendering

    - by Alex
    I'm building a map viewer much like Google Maps and I've run into an interesting performance problem when a page is zoomed (i.e. Ctrl + or Ctrl -). It seems to affect all major browsers, but Firefox has the worst problems as far as I can tell. The problem is that when the page is zoomed, panning by dragging the mouse feels really sluggish. This can even be seen on Google Maps: pan the map left and right and note how smooth it is; now press Ctrl + three or four times and pan the map the same way. Notice the difference? Does anyone know how I can minimize this problem?

    Read the article

  • PHP API to trade products from eshop through REST/xml

    - by Donatas Veikutis
    I need an algorithm, a PHP API example, or an existing solution for building a system that exchanges large amounts of B2B product information as XML. Right now I'm trying to use the Slim framework for this system, but I need some guidance on what the architecture should look like. The system requirements are simple: a user authenticates with a username and password; then he can see which product groups are assigned to him; then he can see all products with their information (price, title, description, images, specifications, etc.). I thought the easiest way would be to find a free PHP API for this and edit some of its code, but I haven't found anything.
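    As a rough illustration of the resource layout such a system usually ends up with (my own sketch, shown in Python/Flask for brevity; the same route shape maps directly onto Slim in PHP, and all names and data below are hypothetical):

        from flask import Flask, jsonify, request

        app = Flask(__name__)

        USERS = {"acme": {"password": "secret", "groups": ["cables", "switches"]}}
        PRODUCTS = {"cables": [{"sku": "C-1", "title": "Cat6 cable", "price": 4.99,
                                "description": "...", "images": [], "specs": {}}]}

        def current_user():
            auth = request.authorization                  # HTTP Basic auth for brevity
            if auth and USERS.get(auth.username, {}).get("password") == auth.password:
                return auth.username
            return None

        @app.route("/groups")
        def groups():
            user = current_user()
            if not user:
                return jsonify(error="unauthorized"), 401
            return jsonify(USERS[user]["groups"])         # groups assigned to this user

        @app.route("/groups/<name>/products")
        def products(name):
            user = current_user()
            if not user or name not in USERS[user]["groups"]:
                return jsonify(error="forbidden"), 403
            return jsonify(PRODUCTS.get(name, []))

        if __name__ == "__main__":
            app.run()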

    Read the article

  • Accomplishing boost::shared_from_this() in constructor via boost::shared_from_raw(this)

    - by Kyle
    Googling and poking around the Boost code, it appears that it's now possible to construct a shared_ptr to this in a constructor, by inheriting from enable_shared_from_raw and calling shared_from_raw(this). Is there any documentation or are there examples of this? I'm finding nothing with Google. Why am I not finding any useful buzz about this on Google? I would have thought using shared_from_this in a constructor would be a hot/desirable item. Should I be inheriting from both enable_shared_from_raw and enable_shared_from_this, and restricting my usage of enable_shared_from_raw to when I have to? If so, why? Is there a performance hit with shared_from_raw?

    Read the article

  • What HTTP redirect status should I use to link offsite?

    - by Matthew Scharley
    While randomly browsing my website via Google, I noticed that it was listing remote redirects as local files. Now this can be both good and bad, but I'm wondering how I can fix it so it doesn't happen. I'm currently using PHP and header('Location: ...'), which sends a 302 redirect. Looking over the list of HTTP status codes, I'd guess that I should be using 303 redirects to redirect offsite. Is anyone able to help me here and either confirm/deny this, or tell me what I should be doing instead? Obviously, since I can't tell Google to reindex my site on command, there are issues with being able to test this myself...

    Read the article
