Search Results

Search found 11587 results on 464 pages for 'pseudo random numbers'.


  • iOS MapKit: Selected MKAnnotation coordinates.

    - by Oh Danny Boy
    Using the code at the following tutorial, http://www.zenbrains.com/blog/en/2010/05/detectar-cuando-se-selecciona-una-anotacion-mkannotation-en-mapa-mkmapview/, I was able to add an observer to each MKAnnotation and receive a notification of selected/deselected states. I am attempting to add a UIView on top of the selected annotation to display relevant information about the location. This information cannot be conveyed in the 2 lines allowed (Title/Subtitle) for the pin's callout.

        - (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
            Annotation *a = (Annotation *)object;
            // Alternatively attempted using:
            //Annotation *a = (Annotation *)[mapView.selectedAnnotations objectAtIndex:0];
            NSString *action = (NSString *)context;
            if ([action isEqualToString:ANNOTATION_SELECTED_DESELECTED]) {
                BOOL annotationSelected = [[change valueForKey:@"new"] boolValue];
                if (annotationSelected) {
                    // Actions when annotation selected
                    CGPoint origin = a.frame.origin;
                    NSLog(@"origin (%f, %f) ", origin.x, origin.y);
                    // Test
                    UIView *v = [[UIView alloc] init];
                    [v setBackgroundColor:[UIColor orangeColor]];
                    [v setFrame:CGRectMake(origin.x, origin.y, 300, 300)];
                    [self.view addSubview:v];
                    [v release];
                } else {
                    // Actions when annotation deselected
                }
            }
        }

    Results using Annotation *a = (Annotation *)object:

        origin (154373.000000, 197135.000000)
        origin (154394.000000, 197152.000000)
        origin (154445.000000, 197011.000000)

    Results using Annotation *a = (Annotation *)[mapView.selectedAnnotations objectAtIndex:0]:

        origin (0.000000, 0.000000)
        origin (0.000000, 0.000000)
        origin (0.000000, 0.000000)

    The numbers are large; they are not relative to the view (1024 x 768). I believe they are relative to the entire map. How can I detect the exact coordinates relative to the view so that I can appropriately position my view?

    Read the article

  • Why doesn't 'Q' unify in this PROLOG program

    - by inspectorG4dget
    Hello SO, I am writing a Prolog program in which the variable of interest (Q) refuses to unify. I have gotten around this with a hacky solution (including a write statement), but there has to be a way to make this unify, and for the love of me I am not able to figure it out. I'd really appreciate any help. Thanks in advance. Here is my code (I have annotated wherever I have excluded code for brevity):

        :- use_module(library(bounds)).
        :- use_module(library(lists)).

        solve([17],Q,_,_,_):- write(Q).   %this is the hacky workaround

        solve(L,Q,1,3,2) :-
            jump(L,Q,N,1,3,2,R),
            solve(N,R,S,D,M),
            member([S|[D|[M|[]]]],
                   [[1, 3, 2], [1, 9, 4], [2, 10, 5]
                    % this list contains 76 items, all of which are lists of length 3.
                    % I have omitted them here for the sake of brevity
                   ]).
        % there are about 75 other definitions for solve, all of which are structured exactly the same.
        % The only difference is that the numbers in the input parameters will be different in each definition.

        jump(L,Q,N,S,D,M,R):-
            member(S,L),
            not(member(D,L)),
            member(M,L),
            delete(L,S,X),
            delete(X,M,Y),
            append(Y,[D],N),
            append(Q,[[S,D]],R).

        cross_sol(Q) :-
            solve([5,9,10,11,17,24],[],S,D,M),
            member([S,D,M],
                   [ % I have edited out this list here for the sake of brevity.
                     % It is the same list found in the definition of solve.
                   ]).

    For some reason, Q does not unify. Please help!

    Read the article

  • Displaying plist data in a UITableView

    - by Christien
    I have a plist of dictionaries, each containing a number of strings (shown in the url below), and the list runs to thousands of items. I need to display this plist data in a UITableView. How do I do this? My code:

        - (void)viewWillAppear:(BOOL)animated {
            // get paths from root directory
            NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDirectory = [documentPaths objectAtIndex:0];
            NSString *documentPlistPath = [documentsDirectory stringByAppendingPathComponent:@"p.plist"];
            NSDictionary *dict = [NSDictionary dictionaryWithContentsOfFile:documentPlistPath];
            valueArray = [dict objectForKey:@"title"];
            self.mySections = [valueArray copy];
            NSLog(@"value array %@", self.mySections);
        }

        - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
            return [self.mySections count];
        }

        - (NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section {
            NSString *key = [[self.mySections objectAtIndex:section] objectForKey:@"pass"];
            return [NSString stringWithFormat:@"%@", key];
        }

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            return 5;
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleValue1 reuseIdentifier:CellIdentifier];
            }
            // Configure the cell...
            NSUInteger section = [indexPath section];
            NSUInteger row = [indexPath row];
            cell.textLabel.text = [[self.mySections objectAtIndex:row] objectForKey:@"title"];
            cell.detailTextLabel.text = [[self.mySections objectAtIndex:section] objectForKey:[allKeys objectAtIndex:1]];
            return cell;
        }

    The error is:

        2012-12-17 16:21:07.372 Project[2076:11603] * Terminating app due to uncaught exception 'NSRangeException',
        reason: '-[__NSCFArray objectAtIndex:]: index (4) beyond bounds (4)'
        * First throw call stack:
        (0x1703052 0x1523d0a 0x16aba78 0x16ab9e9 0x16fcc60 0x1d03a 0x391e0f 0x392589 0x37ddfd 0x38c851 0x337301 0x1704e72 0x277592d 0x277f827 0x2705fa7 0x2707ea6 0x2707580 0x16d79ce 0x166e670 0x163a4f6 0x1639db4 0x1639ccb 0x1505879 0x150593e 0x2f8a9b 0x2158 0x20b5)
        terminate called throwing an exception
        kill

    Read the article

  • How can I check if the mouse button is released, and THEN execute a procedure once in Borland Pascal

    - by Robert
    Hi! I use Borland Pascal 7.0, and I would like to make a slots game (if 3 random numbers are the same, you win). The problem is that when I click on the start (Inditas) button on the menu, the procedure executes many times until I release the mouse button. I was told that I should check if the mouse button is released before executing the procedure once. How can I do that? Here's what the menu looks like:

        procedure eger;
        begin;
          mouseinit;
          mouseon;
          menu;
          repeat
            getmouse(m);
            if (m.left) and (m.x>60) and (m.x<130) and (m.y>120) and (m.y<150) then
              teglalap(90,90,300,300,blue);
            if (m.left) and (m.x>60) and (m.x<130) and (m.y>160) and (m.y<190) then
              jatek(a,b,c,coin,coins);
          until ((m.left) and (m.x>60) and (m.x<130) and (m.y>240) and (m.y<270));
        end;

    Thanks, Robert

    Read the article

  • Project Euler problem 214: how can I make it more efficient?

    - by Once
    I am becoming more and more addicted to the Project Euler problems. However, for the past week I have been stuck on #214. Here is a short version of the problem: PHI() is Euler's totient function, i.e. for any given integer n, PHI(n) = number of k <= n for which gcd(k,n) = 1. We can iterate PHI() to create a chain. For example, starting from 18: PHI(18)=6 => PHI(6)=2 => PHI(2)=1. So starting from 18 we get a chain of length 4 (18, 6, 2, 1). The problem is to calculate the sum of all primes less than 40e6 which generate a chain of length 25.

    I built a function that calculates the chain length of any number and I tested it for small values: it works well and fast.

        sum of all primes <= 20 which generate a chain of length 4: 12
        sum of all primes <= 1000 which generate a chain of length 10: 39383

    Unfortunately my algorithm doesn't scale well. When I apply it to the problem, it takes several hours to calculate... so I stop it, because the Project Euler problems must be solved in less than one minute. I thought that my prime detection function might be slow, so I fed the program a list of primes < 40e6 to avoid the primality test... The code now runs a little bit faster, but there is still no way to get a solution in less than a few hours (and I don't want this).

    So is there any "magic trick" that I am missing here? I really don't understand how to be more efficient on this one... I am not asking for the solution, because fighting with optimization is all the fun of Project Euler. However, any small hint that could put me on the right track would be welcome. Thanks!
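
    For reference, the naive approach described above can be sketched roughly like this in Python (purely illustrative; the actual code behind the timings isn't shown here). It reproduces the two small checks:

        from math import gcd

        def phi(n):
            # Naive totient: count k <= n with gcd(k, n) == 1 (fine for small n, far too slow for 40e6).
            return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

        def chain_length(n):
            # Length of the chain n, PHI(n), PHI(PHI(n)), ..., 1.
            length = 1
            while n > 1:
                n = phi(n)
                length += 1
            return length

        def is_prime(n):
            if n < 2:
                return False
            if n % 2 == 0:
                return n == 2
            d = 3
            while d * d <= n:
                if n % d == 0:
                    return False
                d += 2
            return True

        print(sum(p for p in range(2, 21) if is_prime(p) and chain_length(p) == 4))     # 12
        print(sum(p for p in range(2, 1001) if is_prime(p) and chain_length(p) == 10))  # 39383

    This is exactly the slow baseline the question complains about: every chain step recomputes PHI from scratch.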

    Read the article

  • Looking for fast, minimal, preferably free disc cloning software [closed]

    - by Dave
    We have to test our application installation and functionality on many Windows operating system versions and languages (XP, Vista, Win7; English, Spanish, Portuguese, etc.; 32-bit and 64-bit). While we can do much of this in virtual machines, we have noticed that VMs sometimes hide problems or raise false bugs, so we need to do "bare metal" OS installation for much of our testing.

    I have been using Acronis True Image for the past year and am not impressed. It often gives random errors which require a reboot, and is really slow. For example, when trying to restore an image, it goes through a "Locking partition" cycle about three times (once after you click OK on each step of the wizard), each of which can take 5 minutes to complete. This all happens BEFORE it actually starts the image copy, which is sometimes quick (3-5 minutes), sometimes long (hours). The sizes of all of our images are roughly the same, so that is not related.

    So, anyway, I'm looking to switch to something else. I only need very basic functionality: creating images of entire discs, and then restoring those images onto the exact same hard drive at a later date. That's it. I'm not opposed to paying for a good piece of software, but if there is something free out there that does the job well, that would be a preference. The OS on which the imaging software would run is Windows Vista, but bootable media (into a Linux flavor) would be fine also, as long as it's quick to use and reliable. Recommendations?

    (Also, moderators, if this should be a CW, I'll be happy to mark it as such; unclear about the rules there.)

    Read the article

  • Deterministic and non-uniform long string generation from a seed

    - by Limonup
    I had this weird idea for an encryption scheme that I wanted to try out. It may be bad, and it may have been done before, but I'm just doing it for fun. The short version of the question is: is it possible to generate a long, deterministic and non-uniformly distributed string/sequence of numbers from a small seed?

    Long(er) version: I was thinking of encrypting a text by changing its encoding. The new encoding would be generated via the Huffman algorithm. To work well, the Huffman algorithm needs a fairly long text with a non-uniform distribution. Then characters can have different bit lengths, which would be the primary strength of this encryption. The problem is that it's impractical to enter in/remember a long text each time you want to decrypt the text. So I was wondering if it is possible to generate a text from a password seed? It doesn't matter what the text is, as long as it has a non-uniform distribution of characters and the exact same sequence can be recreated each time you give it the same seed. Preferably, are there any functions/extensions in Python that can do this?

    EDIT: To expand on the "strength" of varying bit length: if I have the string "test", ASCII values 116, 101, 115, 116, this gives the bit values

        1110100 1100101 1110011 1110100

    Then, say, my Huffman algorithm generates an encoding like

        t = 101
        e = 1100111
        s = 10001

    The final string is 101 1100111 10001 101. If we encode this back to ASCII, we get 1011100 1111000 1101000, which is 3 entirely different characters. Obviously it's impossible to perform any kind of frequency analysis or something like that on this.
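
    Here is a rough sketch of the kind of thing being asked about, using only the Python standard library. The alphabet and weights are arbitrary choices for illustration; random.Random is reproducible for a given seed but is not cryptographically secure, which matters if this ever becomes more than a for-fun scheme:

        import random
        import string

        def text_from_seed(password, length=10000):
            rng = random.Random(password)               # deterministic for a given seed string
            letters = string.ascii_lowercase + " "
            # Heavily skewed weights so the generated text has a non-uniform
            # character distribution, which is what the Huffman step needs.
            weights = [2 ** (i % 7) for i in range(len(letters))]
            return "".join(rng.choices(letters, weights=weights, k=length))

        sample = text_from_seed("my password seed")
        assert sample == text_from_seed("my password seed")   # same seed, same text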

    Read the article

  • SQL Distinct keyword in assignment statement

    - by Brandi
    I have a query that works:

        DECLARE @ProductID int
        SET @ProductID = '1234'
        SELECT DISTINCT TOP 12 a.ProductID
        FROM A a
        WHERE a.CategoryID IN (SELECT b.CategoryID FROM B b WHERE b.ProductID = @ProductID)
        AND a.ProductID != @ProductID

    It returns a list of 12 product numbers, all unique. I need to store these results in a variable, comma separated, because that's what the 3rd party stored procedure needs. So I have this:

        DECLARE @ProductID int
        DECLARE @relatedprods varchar(8000)
        SET @ProductID = '1234'
        SET @relatedprods = ''
        SELECT TOP 12 @relatedprods = @relatedprods + CONVERT(VARCHAR(20), a.ProductID) + ', '
        FROM A a
        WHERE a.CategoryID IN (SELECT b.CategoryID FROM B b WHERE B.ProductID = @ProductID)
        AND a.ProductID != @ProductID
        SELECT @relatedprods

    Now, none of these are distinct, but it is returning 12 rows. Now I add the 'distinct' back in, like in the first query:

        DECLARE @ProductID int
        DECLARE @relatedprods varchar(8000)
        SET @ProductID = '1234'
        SET @relatedprods = ''
        SELECT DISTINCT TOP 12 @relatedprods = @relatedprods + CONVERT(VARCHAR(20), a.ProductID) + ', '
        FROM A a
        WHERE a.CategoryID IN (SELECT b.CategoryID FROM B b WHERE B.ProductID = @ProductID)
        AND a.ProductID != @ProductID
        SELECT @relatedprods

    Only one product is returned in the comma separated list! Does 'distinct' not work in assignment statements? What did I do wrong? Or is there a way to get around this? Thanks in advance!

    Read the article

  • Collapsing a data frame by selecting one row per group

    - by jkebinger
    I'm trying to collapse a data frame by removing all but one row from each group of rows with identical values in a particular column. In other words, the first row from each group. For example, I'd like to convert this

        > d = data.frame(x=c(1,1,2,4),y=c(10,11,12,13),z=c(20,19,18,17))
        > d
          x  y  z
        1 1 10 20
        2 1 11 19
        3 2 12 18
        4 4 13 17

    into this:

          x  y  z
        1 1 11 19
        2 2 12 18
        3 4 13 17

    I'm using aggregate to do this currently, but the performance is unacceptable with more data:

        > d.ordered = d[order(-d$y),]
        > aggregate(d.ordered,by=list(key=d.ordered$x),FUN=function(x){x[1]})

    I've tried split/unsplit with the same function argument as here, but unsplit complains about duplicate row numbers. Is rle a possibility? Is there an R idiom to convert rle's length vector into the indices of the rows that start each run, which I can then use to pluck those rows out of the data frame?

    Read the article

  • What techniques can be used to detect so called "black holes" (a spider trap) when creating a web crawler?

    - by Tom
    When creating a web crawler, you have to design some kind of system that gathers links and adds them to a queue. Some, if not most, of these links will be dynamic: they appear to be different, but do not add any value, as they are specifically created to fool crawlers.

    An example: we tell our crawler to crawl the domain evil.com by entering an initial lookup URL. Let's assume we let it crawl the front page initially, evil.com/index. The returned HTML will contain several "unique" links:

        evil.com/somePageOne
        evil.com/somePageTwo
        evil.com/somePageThree

    The crawler will add these to the buffer of uncrawled URLs. When somePageOne is being crawled, the crawler receives more URLs:

        evil.com/someSubPageOne
        evil.com/someSubPageTwo

    These appear to be unique, and so they are. They are unique in the sense that the returned content is different from previous pages and that the URL is new to the crawler; however, it appears that this is only because the developer has made a "loop trap" or "black hole". The crawler will add this new sub page, and the sub page will have another sub page, which will also be added. This process can go on infinitely. The content of each page is unique but totally useless (it is randomly generated text, or text pulled from a random source). Our crawler will keep finding new pages which we actually are not interested in. These loop traps are very difficult to find, and if your crawler does not have anything in place to prevent them, it will get stuck on a certain domain for infinity.

    My question is: what techniques can be used to detect so-called black holes? One of the most common answers I have heard is the introduction of a limit on the number of pages to be crawled. However, I cannot see how this can be a reliable technique when you do not know what kind of site is to be crawled. A legit site, like Wikipedia, can have hundreds of thousands of pages, so such a limit could return a false positive for these kinds of sites. Any feedback is appreciated. Thanks.
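
    To make that "page limit" idea concrete, here is a rough sketch in Python of a per-domain budget plus a crude path-depth cutoff (the function name, limits and thresholds are made up for illustration). As noted above, fixed limits like these can misfire on large legitimate sites:

        from collections import defaultdict
        from urllib.parse import urlparse

        MAX_PAGES_PER_DOMAIN = 10000   # arbitrary budget; sites like Wikipedia need far more
        MAX_PATH_DEPTH = 12            # arbitrary; cuts off endless /a/b/c/... chains

        pages_crawled = defaultdict(int)

        def should_crawl(url):
            parsed = urlparse(url)
            if pages_crawled[parsed.netloc] >= MAX_PAGES_PER_DOMAIN:
                return False               # domain has used up its budget
            if parsed.path.count("/") > MAX_PATH_DEPTH:
                return False               # suspiciously deep URL
            pages_crawled[parsed.netloc] += 1
            return True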

    Read the article

  • [php] Cookies only changing value every two page refreshes?

    - by Gazillion
    Hello, I'm trying to implement some pixel tracking where I will save certain values in a cookie and then forward users to another page. If users purchase a product after being forwarded to the online store by us, the store adds an image tag to the page that includes our PHP script. With the values set in the cookie we would like to track conversions. I understand this tracking technique has some limitations (for example, if a user has cookies turned off, or does not load images), but that's the direction my client wanted to go in.

    The problem I'm having is that the cookie's behaviour is extremely... random. I've been trying to track the values (with a var_dump, so I don't have to wait for a page reload to view the cookie's value), but it seems the value for one field only gets refreshed every two page reloads.

        setcookie("tracking[cn]", $cn, time()+3600*24*7,'/','mydomain.com');
        setcookie("tracking[t]", $t, time()+3600*24*7,'/','mydomain.com');
        setcookie("tracking[kid]", $kid, time()+3600*24*7,'/','mydomain.com');
        redirectTo($redirect_url);

    The values of cn and t are fine, but for some reason kid is always wrong (it takes the value of the previous kid). Any help would be extremely appreciated; I've been at this all evening! :)

    Read the article

  • What is a good way to assign order #s to ordered rows in a table in Sybase

    - by DVK
    I have a table T (structure below) which initially contains all-NULL values in an integer order column:

        col1 varchar(30),
        col2 varchar(30),
        order int NULL

    I also have a way to order the "colN" columns, e.g.

        SELECT * FROM T ORDER BY some_expression_involving_col1_and_col2

    What's the best way to assign - IN SQL - numeric order values 1-N to the order column, so that the order values match the order of rows returned by the above ORDER BY? In other words, I would like a single query (Sybase SQL syntax, so no Oracle rowcount) which assigns order values so that

        SELECT * FROM T ORDER BY order

    returns 100% the same order of rows as the query above. The query does NOT necessarily need to update the table T in place; I'm OK with creating a copy of the table T2 if that'll make the query simpler.

    NOTE1: A solution must be a real query or a set of queries, not involving a loop or a cursor.
    NOTE2: Assume that the data is uniquely orderable according to the ORDER BY above; no need to worry about the situation where 2 rows could be assigned the same order at random.
    NOTE3: I would prefer a generic solution, but if you wish a specific example of an ordering expression, let's say:

        SELECT * FROM T
        ORDER BY CASE WHEN col1="" THEN "AAAAAA" ELSE col1 END, ISNULL(col2, "ZZZ")

    Read the article

  • innerHTML in IE?

    - by froufrou
    I'm having trouble using innerHTML with my radio type button.

        <table align="center">
        <div class='main'>
        <span id="js" class='info'>
        <label><input type="radio" name="js" value="0" size="<?php echo $row['size']; ?>" onclick="js(this.value, this.size);" /><img src="arrowup.png"/></label>
        <br />
        <label><input type="radio" name="js" value="1" size="<?php echo $row['size']; ?>" onclick="js(this.value, this.size);" /><img src="arrowdown.png"/></label>
        </span>
        </div>
        </table>

    My .js looks like this:

        var xmlhttp;

        function getVote(a,b)
        {
            xmlhttp=GetXmlHttpObject();
            if (xmlhttp==null)
            {
                alert ("Browser does not support HTTP Request");
                return;
            }
            var url="js.php";
            url=url+"?js="+a;
            url=url+"&id="+b;
            url=url+"&sid="+Math.random();
            xmlhttp.onreadystatechange=stateChanged;
            xmlhttp.open("GET",url,true);
            xmlhttp.send(null);
        }

        function stateChanged()
        {
            if (xmlhttp.readyState==4)
            {
                document.getElementById("js").innerHTML=xmlhttp.responseText;
            }
        }

        function GetXmlHttpObject()
        {
            var objXMLHttp=null;
            if (window.XMLHttpRequest)
            {
                objXMLHttp=new XMLHttpRequest();
            }
            else if (window.ActiveXObject)
            {
                objXMLHttp=new ActiveXObject("Microsoft.XMLHTTP");
            }
            return objXMLHttp;
        }

    This doesn't work in IE only! Any help?

    Read the article

  • Helping Rails Newbies identify version-specific information on web pages

    - by corprew
    I am trying to help some people who are getting started programming on Rails identify which Rails version the advice they find on web pages corresponds to, and am seeking advice and/or guides on how to do it, so they don't have to rely on me and/or waste time trying outdated advice.

    Narrative: I am helping some people get up to speed on Rails development, and their stock response to running into problems is searching Google for advice. They're using 2.3.5 and thinking of moving to 3. The problem they're running into is that there's a lot of advice out there specific to older Rails versions (2.2, for example, being popular) that isn't identified as such. I can usually figure out when the pages are old pretty easily, but they can't (yet). It seems like random web page authors don't identify which version they're using when they're using the current version, and not all pages are dated.

    This seems to be a general problem that will get worse: current unadorned advice is usually 2.3.5 and older unadorned advice is 2.2.x at this point, but people are moving / will be moving to version 3 over the next while, and newbies will be stuck looking at a bunch of deprecated/incompatible 2.3.x advice without realizing which version it is. Any advice / pointers / telltales?

    Read the article

  • mysql/algorithm: Weighting an average to accentuate differences from the mean

    - by Sai Emrys
    This is for a new feature on http://cssfingerprint.com (see /about for general info). The feature looks up the sites you've visited in a database of site demographics, and tries to guess what your demographic stats are based on that. All my demographics are in 0..1 probability format, not ratios or absolute numbers or the like.

    Essentially, you have a large number of data points that each tend you towards their own demographics. However, just taking the average is poor, because it means that by adding in a lot of generic data, the number goes down. For example, suppose you've visited sites S0..S50. All except S0 are 48% female; S0 is 100% male. If I'm guessing your gender, I want to have a value close to 100%, not just the 49% that a straight average would give.

    Also, consider that most demographics (i.e. everything other than gender) do not have an average of 50%. For example, the average probability of having kids 0-17 is ~37%. The more a given site's demographics differ from this average (e.g. maybe it's a site for parents, or for child-free people), the more it should count in my guess of your status.

    What's the best way to calculate this? For extra credit: what's the best way to calculate this that is also cheap and easy to do in MySQL?
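
    To make the gender example concrete, here is a quick Python sketch comparing a straight average with one possible alternative (pooling in log-odds space relative to the population mean, so sites far from the mean pull harder). This is only an illustration of the desired behaviour, not necessarily the right answer, and the MySQL side is not addressed:

        import math

        # 50 sites at 48% female plus one site at 100% male (0% female).
        site_probs = [0.48] * 50 + [0.0]

        straight_avg = sum(site_probs) / len(site_probs)
        print(round(straight_avg, 3))   # ~0.47: the straight average barely moves

        def logit(p, eps=1e-6):
            p = min(max(p, eps), 1 - eps)      # clamp away from 0/1 so the logit stays finite
            return math.log(p / (1 - p))

        mean = 0.50                            # population mean for this demographic
        score = logit(mean) + sum(logit(p) - logit(mean) for p in site_probs)
        pooled = 1 / (1 + math.exp(-score))
        print(round(pooled, 6))                # ~0: the extreme "100% male" site dominates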

    Read the article

  • DLL Exports: not all my functions are exported

    - by carmellose
    I'm trying to create a Windows DLL which exports a number of functions; however, all my functions are exported but one! I can't figure it out. The macro I use is this simple one:

        __declspec(dllexport) void myfunction();

    It works for all my functions except one. I've looked inside Dependency Walker and there they all are, except one. How can that be? What would be the cause of that? I'm stuck.

    Edit: to be more precise, here is the function in the .h:

        namespace my { namespace great { namespace namespaaace {

            __declspec(dllexport) void prob_dump(const char *filename,
                const double p[], int nx,
                const double Q[],
                const double xlow[], const char ixlow[],
                const double xupp[], const char ixupp[],
                const double A[], int my, const double bA[],
                const double C[], int mz,
                const double clow[], const char iclow[],
                const double cupp[], const char icupp[]
            );

        }}}

    And in the .cpp file it goes like this:

        namespace my { namespace great { namespace namespaaace {

            namespace {
                void dump_mtx(std::ostream& ostr, const double *mtx, int rows, int cols, const char *ind = 0)
                {
                    /* some random code there, nothing special, no statics whatsoever */
                }
            } // end anonymous namespace here

            // dump the problem specification into a file
            void prob_dump(const char *filename,
                const double p[], int nx,
                const double Q[],
                const double xlow[], const char ixlow[],
                const double xupp[], const char ixupp[],
                const double A[], int my, const double bA[],
                const double C[], int mz,
                const double clow[], const char iclow[],
                const double cupp[], const char icupp[]
            )
            {
                std::ofstream fout;
                fout.open(filename, std::ios::trunc);

                /* implementation there */

                dump_mtx(fout, Q, nx, nx);
            }

        }}}

    Thanks

    Read the article

  • Version Control: multiple version hell, file synchronization

    - by SigTerm
    Hello. I would like to know how you normally deal with this situation. I have a set of utility functions, say 5-10 files. Technically they are a static library, cross-platform: SConscript/SConstruct plus a Visual Studio project (not solution).

    Those utility functions are used in multiple small projects (15+, and the number increases over time). Each project has a copy of a few files or of the entire library, not a link into one central place. Sometimes a project uses one file, two files, some use everything. Normally, the utility functions are included as a copy of every file plus the SConscript/SConstruct or Visual Studio project (depending on the situation). Each project has a separate git repository. Sometimes one project is derived from another, sometimes it isn't. You work on every one of them, in random order. There are no other people (to make things simpler).

    The problem arises when, while working on one project, you modify those utility function files. Because each project has a copy of the file, this introduces a new version, which leads to a mess when you try later (a week later, for example) to guess which version has the most complete functionality (i.e. you added a function to a.cpp in one project, and added another function to a.cpp in another project, which created a version fork).

    How would you handle this situation to avoid "version hell"? One way I can think of is using symbolic links/hard links, but it isn't perfect: if you delete the one central storage, it will all go to hell. And hard links won't work on a dual-boot system (although symbolic links will). It looks like what I need is something like an advanced git repository, where the code for the project is stored in one local repository, but is synchronized with multiple external repositories. But I'm not sure how to do it or if it is possible to do this with git. So, what do you think?

    Read the article

  • What is the right pattern for an async data fetching method in .NET async/await

    - by s093294
    Given a class with a method GetData. A few other clients call GetData, and instead of fetching the data each time, I would like to create a pattern where the first call starts the task that gets the data, and the rest of the calls wait for that task to complete.

        private Task<string> _data;

        private async Task<string> _getdata()
        {
            return "my random data from the net"; //get_data_from_net()
        }

        public string GetData()
        {
            if(_data==null)
                _data=_getdata();
            _data.wait(); //is there not a problem here? can't wait a task that is already completed?
            if(_data.status != rantocompletion)
                _data.wait(); // is not any better, it might complete between the check and the _data.wait?
            return _data.Result;
        }

    How would I do the pattern correctly? (Solution:)

        private static object _servertime_lock = new object();
        private static Task<string> _servertime;

        private static async Task<string> servertime()
        {
            try
            {
                var thetvdb = new HttpClient();
                thetvdb.Timeout = TimeSpan.FromSeconds(5);
                // var st = await thetvdb.GetStreamAsync("http://www.thetvdb.com/api/Updates.php?type=none");
                var response = await thetvdb.GetAsync("http://www.thetvdb.com/api/Updates.php?type=none");
                response.EnsureSuccessStatusCode();
                Stream stream = await response.Content.ReadAsStreamAsync();
                XDocument xdoc = XDocument.Load(stream);
                return xdoc.Descendants("Time").First().Value;
            }
            catch
            {
                return null;
            }
        }

        public static async Task<string> GetServerTime()
        {
            lock (_servertime_lock)
            {
                if (_servertime == null)
                    _servertime = servertime();
            }
            var time = await _servertime;
            if (time == null)
                _servertime = null;
            return time;
        }

    Read the article

  • Globalize/Localize Excel Reports Using Spreadsheet

    - by mga911
    My company has new customers in Brazil and we realized that our Excel reports do not work when our Brazilian customers try to open them in their Brazilian versions of Excel. For Excel output we use SpreadsheetGear in our VB.NET web application. Our Excel worksheets are fairly simple: mostly outputted text/numbers/dates, a couple of formulas (SUM, IF), and formatting on the currency and dates.

    I've tried several methods to get my Excel reports to work. First I left the Excel workbook in the "en-US" culture and tried simply changing the number format for Brazil to:

        _-[$R$-416] * #.##0,00_-;-[$R$-416] * #.##0,00_-;_-[$R$-416] * "-"??_-;_-@_-

    This formatted the regular cells, but the formulas still failed to show a value; instead they showed a 0 value. Next I tried changing the workbook to the "pt-BR" culture, which also forced me to translate the formula names (Sum - Soma, If - Se), but they still wouldn't show a value and instead showed a #Name/#Nome error. Interestingly enough, the formulas would work if I edited the cell and hit enter; the formula wouldn't change, but it would somehow fix that cell.

    I need to be able to output Excel reports that can format dates/currencies and apply simple formulas (IF, SUM) for other Excel cultures. Anyone have any advice?

    Read the article

  • [Cocoa] Core Animation with an NSView and subviews

    - by ndg
    I've subclassed NSView to create a 'container' view (which I've called TRTransitionView) which is being used to house two subviews. At the click of a button, I want to transition one subview out of the parent view and transition the other in, using the Core Animation transition type kCATransitionPush. For the most part, I have this working as you'd expect (here's a basic test project I threw together).

    The issue I'm seeing relates to resizing my window and then toggling between my two views. After resizing a window, my subviews will appear at seemingly random locations within my TRTransitionView. Additionally, it appears as if the TRTransitionView hasn't stretched correctly and is clipping the contents of its subviews. Ideally, I would like subviews anchored to the top-left of their parent view at all times, and to also grow to expand the size of the parent view.

    The second issue relates to an NSTableView I've placed in my first subview. When my window is resized, and my TRTransitionView resizes to match its new dimensions, my TableView seems to resize its content quite awkwardly (the entire table seems to jolt around) and the newly expanded space that the table now occupies seems to 'flash' (as if in the process of being animated). Extremely difficult to describe, but is there any way to stop this?

    Here's my TRTransitionView class:

        -(void) awakeFromNib {
            [self setWantsLayer:YES];
            [self addSubview:[self currentView]];
            transition = [CATransition animation];
            [transition setType:kCATransitionPush];
            [transition setSubtype:kCATransitionFromLeft];
            [self setAnimations: [NSDictionary dictionaryWithObject:transition forKey:@"subviews"]];
        }

        - (void)setCurrentView:(NSView*)newView {
            if (!currentView) {
                currentView = newView;
                return;
            }
            [[self animator] replaceSubview:currentView with:newView];
            currentView = newView;
        }

        -(IBAction) switchToViewOne:(id)sender {
            [transition setSubtype:kCATransitionFromLeft];
            [self setCurrentView:viewOne];
        }

        -(IBAction) switchToViewTwo:(id)sender {
            [transition setSubtype:kCATransitionFromRight];
            [self setCurrentView:viewTwo];
        }

    Read the article

  • Python: Beginning problems

    - by Blogger
    OK, so basically I'm very new to programming and have no idea how to go about these problems. Help if you will ^^

    1. Numerologists claim to be able to determine a person's character traits based on the "numeric value" of a name. The value of a name is determined by summing up the values of the letters of the name, where 'a' is 1, 'b' is 2, 'c' is 3, etc., up to 'z' being 26. For example, the name "Zelle" would have the value 26 + 5 + 12 + 12 + 5 = 60 (which happens to be a very suspicious number, by the way). Write a program that calculates the numeric value of a single name provided as input.

    2. Word count. A common utility on Unix/Linux systems is a small program called "wc". This program counts the number of lines, words (strings of characters separated by blanks, tabs, or new lines), and characters in a file. Write your own version of this program. The program should accept a file name as input and then print three numbers showing the count of lines, words, and characters in the file.
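
    A minimal sketch of the kind of programs being asked for might look like this (Python; the file name is just an example):

        def name_value(name):
            # 'a' = 1, 'b' = 2, ..., 'z' = 26; ignore anything that isn't a letter.
            return sum(ord(ch) - ord('a') + 1 for ch in name.lower() if ch.isalpha())

        def wc(filename):
            # Count lines, words and characters, like the Unix "wc" utility.
            with open(filename) as f:
                text = f.read()
            return len(text.splitlines()), len(text.split()), len(text)

        print(name_value("Zelle"))        # 60, matching the example above
        # lines, words, chars = wc("somefile.txt")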

    Read the article

  • Magic squares, recursive

    - by user310827
    Hi, my problem is that I'm trying to permute all possibilities for a 3x3 square and check whether each combination is magic. I've added a tweak with the (n % 3 == 0) if statement so that if the sum of the numbers in a row differs from 15 it breaks off creating the other two lines... but it doesn't work. Any suggestions? I call the function with Permute(1).

        public static class Global
        {
            //int[] j = new int[6];
            public static int[] a = {0,0,0,0,0,0,0,0,0};
            public static int[] b = {0,0,0,0,0,0,0,0,0};
            public static int count = 0;
        }

        public static void Permute(int n)
        {
            int tmp = n - 1;
            for (int i = 0; i < 9; i++)
            {
                if (Global.b[i] == 0)
                {
                    Global.b[i] = 1;
                    Global.a[n-1] = i + 1;
                    if ((n % 3) == 0)
                    {
                        if (Global.a[0+tmp] + Global.a[1+tmp] + Global.a[2+tmp] == 15)
                        {
                            if (n < 9)
                            {
                                Permute(n+1);
                            }
                            else
                            {
                                isMagic(Global.a);
                            }
                        }
                        else break;
                    }
                    else
                    {
                        Permute(n+1);
                    }
                    Global.b[i] = 0;
                }
            }
        }
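
    For comparison, the overall goal ("permute all possibilities for the 3x3 square and check whether each one is magic") can be brute-forced directly. Here is a Python sketch of the same idea without the pruning; it is an illustration, not a fix for the recursion above:

        from itertools import permutations

        def is_magic(sq):
            # sq is a flat 9-tuple; rows are sq[0:3], sq[3:6], sq[6:9].
            rows = [sq[i:i + 3] for i in (0, 3, 6)]
            cols = [sq[i::3] for i in (0, 1, 2)]
            diags = [(sq[0], sq[4], sq[8]), (sq[2], sq[4], sq[6])]
            return all(sum(line) == 15 for line in rows + cols + diags)

        magic = [p for p in permutations(range(1, 10)) if is_magic(p)]
        print(len(magic))   # 8: the 3x3 magic square in all its rotations and reflections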

    Read the article

  • Refreshing WEB-INF/lib in Google App Engine (with Eclipse)

    - by Adrian Petrescu
    Hi, I've created a new Google App Engine project within Eclipse. I copied several JARs that I need for my application into the WEB-INF/lib directory and added them to the build path. I make some random calls to these JARs from within the handler, deploy, and everything works fine.

    However, if I then change one of the JARs outside the project, copy the new version to WEB-INF/lib (with the same name) and re-deploy, it doesn't seem to be sending the new JAR; everything is still linking to the old one, even though it's not even in my WEB-INF/lib anymore. I'm guessing it's being cached by the server, or Eclipse is not realizing something has changed and so doesn't upload the new version. If I just create a new project with the new JAR, everything is fine again (until I have to make another change...), but of course I don't want to have to create a new project for every change to a dependency I make.

    My question is, how can I make GAE re-upload all the JARs I have from within Eclipse? Thanks in advance, guys :) -Adrian

    Read the article

  • Select all points in a matrix within 30m of another point

    - by pinnacler
    So if you look at my other posts, it's no surprise I'm building a robot that can collect data in a forest and stick it on a map. We have algorithms that can detect tree centers and trunk diameters and can stick them on a Cartesian XY plane. We're planning to use certain 'key' trees as natural landmarks for localizing the robot, using triangulation and trilateration among other methods, but programming this and keeping the data straight and efficient is getting difficult using just MATLAB.

    Is there a technique for sub-setting an array or matrix of points? Say I have 1000 trees stored over 1 km (1000 m); is there a way to select only the points within a 30 m radius of my current location and work only with those? I would just use a GIS, but I'm doing this in MATLAB and I'm unaware of any GIS plugins for MATLAB.

    I forgot to mention, this code is going online, meaning it's going on a robot for real-time execution. I don't know if, as the map grows to several miles, using a different data structure will help, or if calculating the distance to every point is what a spatial database is going to do anyway. I'm thinking of mirroring two arrays, one sorted by X and the other by Y, then bubble sorting to determine the 30 m range in each. I would do the same for both arrays, X and Y, and then have a third cross-link table that selects the individual values. But I don't know what that's called or how to program it, and I'm sure someone already has, so I don't want to reinvent the wheel.
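
    For reference, the brute-force "compute every distance and filter" step is short. Here is a rough sketch in Python/NumPy (a vectorized logical index does the same job in MATLAB; the array names are illustrative). For a map that grows to several miles, a spatial index such as a k-d tree or a grid would avoid scanning every point on each query:

        import numpy as np

        def trees_within(trees, pos, radius=30.0):
            # trees: N x 2 array of (x, y) tree centers in meters; pos: (x, y) robot position.
            d2 = np.sum((trees - pos) ** 2, axis=1)   # squared distance to every tree
            return trees[d2 <= radius ** 2]           # keep only trees inside the radius

        trees = np.random.uniform(0, 1000, size=(1000, 2))   # 1000 trees over 1 km, as described
        nearby = trees_within(trees, np.array([500.0, 500.0]))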

    Read the article

  • Performing calculations by subsets of data in R

    - by Vivi
    I want to perform calculations for each company number in the column PERMNO of my data frame, the summary of which can be seen here:

        > summary(companydataRETS)
             PERMNO           RET
         Min.   :10000   Min.   :-0.971698
         1st Qu.:32716   1st Qu.:-0.011905
         Median :61735   Median : 0.000000
         Mean   :56788   Mean   : 0.000799
         3rd Qu.:80280   3rd Qu.: 0.010989
         Max.   :93436   Max.   :19.000000

    My solution so far was to create a variable with all possible company numbers

        compns <- companydataRETS[!duplicated(companydataRETS[,"PERMNO"]),"PERMNO"]

    and then use a foreach loop with parallel computing, which calls my function get.rho(), which in turn performs the desired calculations:

        rhos <- foreach (i=1:length(compns), .combine=rbind) %dopar%
            get.rho(subset(companydataRETS[,"RET"], companydataRETS$PERMNO == compns[i]))

    I tested it for a subset of my data and it all works. The problem is that I have 72 million observations, and even after leaving the computer working overnight, it still didn't finish. I am new to R, so I imagine my code structure can be improved upon and there is a better (quicker, less computationally intensive) way to perform this same task (perhaps using apply or with, both of which I don't understand). Any suggestions?

    Read the article
