Search Results

Search found 21817 results on 873 pages for 'last child'.

  • Package caption Error: Continued 'figure' after 'table'

    - by Eduardo
    Hello, I am having a problem with the numbering of figures in LaTeX. I am getting this error
    message: "Package caption Error: Continued 'figure' after 'table'". This is my code:

        \begin{table}
        \centering
        \subfloat[Tabla1\label{tab:Tabla1}]{
          \small
          \begin{tabular}{ | c | c | c | c | c |}
          \hline
          \multicolumn{5}{|c|}{\textbf{Tabla 1}} \\ \hline
          ... ...
          \end{tabular}
        }
        \qquad
        \subfloat[Tabla2\label{tab:Tabla2}]{
          \small
          \begin{tabular}{ | c | c | c | c | c |}
          \hline
          \multicolumn{5}{|c|}{\textbf{Tabla 2}} \\ \hline
          ... ...
          \end{tabular}
        }
        \caption{These are tables}
        \label{tab:Tables}
        \end{table}

        \begin{figure}
        \centering
        \subfloat[][Figure 1]{\label{fig:fig1}\includegraphics[width = 14cm]{fig1}}
        \qquad
        \subfloat[][Figure 2]{\label{fig:fig2}\includegraphics[width = 14cm]{fig2}}
        \end{figure}

        \begin{figure}[t]
        \ContinuedFloat
        \subfloat[][Figure 3]{\label{fig:fig3}\includegraphics[width = 14cm]{fig3}}
        \caption{Those are figures}
        \label{fig:Figures}
        \end{figure}

        \newpage

    What I want is this layout: Table, Table, Figure 1, Figure 2, Figure 3. Since Figure 1 and
    Figure 2 are too big to fit vertically, I want Figure 3 to be alone on another page; that is
    why I have the \ContinuedFloat. Outwardly it looks fine, but the problem is the numbering: the
    figures get the number 5.2, which is the same number as a figure I have earlier (the correct
    number should be 5.3). However, if I reference the figures \ref{fig:fig1}, \ref{fig:fig2} and
    \ref{fig:fig3}, I get 5.3a, 5.3b and 5.2c. The first two are right, the last one is wrong. I
    have been stuck with this for hours, any ideas? Thanks a lot in advance.

  • Zend Sessions problem with IE8

    - by Emil
    I'm running a Zend Framework powered website and it seems to have serious problems with
    sessions. I have a 5-step process where I save the form data in the session between the steps
    and then save it into the database on the last step. When we built the site, sometimes the
    session just went away and forced us to restart. Now it seems to work again, but recently we
    discovered an issue with Internet Explorer 8: it fails between steps 2 and 3 and forgets the
    session. It works fine in IE6, IE7, FF, Chrome, Safari and even in my mobile web browser
    (SE P1). We're storing our sessions in the database, and if I deactivate the session DB
    handler it works. What's the difference between using the database and not using it for
    sessions? Do I lose something if I switch back? Bootstrap:

        /* Start session */
        $saveHandler = new Zend_Session_SaveHandler_DbTable(array(
            'name'           => 'sessions',
            'primary'        => 'id',
            'modifiedColumn' => 'modified',
            'dataColumn'     => 'data',
            'lifetimeColumn' => 'lifetime'
        ));
        Zend_Session::rememberMe((int) $config->session->lifetime);
        $saveHandler->setLifetime((int) $config->session->lifetime)
                    ->setOverrideLifetime(true);
        Zend_Session::setSaveHandler($saveHandler);
        Zend_Session::start();

    and in my step controller:

        $session = new Zend_Session_Namespace('wizard');

    Then I'm just working with $session, saving data in a stdClass in $session.

  • Solr associations

    - by Tom
    Hi all, the last couple of days we have been thinking of using Solr as our search engine of
    choice. Most of the features we need are out of the box or can easily be configured. There
    is, however, one feature that we absolutely need that seems to be well hidden (or missing) in
    Solr. I'll try to explain with an example. We have lots of documents that are actually
    businesses:

        <document>
          <name>Apache</name>
          <cat>1</cat>
          ...
        </document>
        <document>
          <name>McDonalds</name>
          <cat>2</cat>
          ...
        </document>

    In addition we have another XML file with all the categories and synonyms:

        <cat id="1">
          <name>software</name>
          <synonym>IT</synonym>
        </cat>
        <cat id="2">
          <name>fast food</name>
          <synonym>restaurant</synonym>
        </cat>

    We want to associate businesses and categories so we can search using the name and/or
    synonyms of the category. But we do not want to merge these files at indexing time, because
    we should be able to update the categories (adding/removing synonyms, ...) without indexing
    all the businesses again. Is there anything in Solr that does this kind of association, or do
    we need to develop some specific pieces? All feedback and suggestions are welcome. Thanks in
    advance, Tom
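
    To make the intended decoupling concrete, here is a tiny illustrative Python sketch (names
    and data structures are assumptions, and this shows the idea, not a built-in Solr feature):
    the category file stays small and editable, and only the stable cat id is baked into the
    business index, so the lookup can be expanded at query time.

        # Query-time expansion: resolve a search term to category ids via the
        # (small, frequently updated) category data, then query the business
        # index on the stable cat field. All names here are made up.
        categories = {1: ['software', 'IT'], 2: ['fast food', 'restaurant']}

        def expand(term):
            """Return the category ids whose name or synonyms match the term."""
            return [cid for cid, names in categories.items()
                    if term.lower() in [n.lower() for n in names]]

        print expand('restaurant')   # -> [2]; then search the index with cat:2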

  • F# exercise help - Beginner

    - by bobjink
    I hope you can help me with these exercises I have been stuck with. I am not in particular
    looking for the answers; tips that help me solve them myself are just as good :)

    1. Write a function implode : char list -> string so that implode s returns the characters
    concatenated into a string:

        let implode (cArray:char list) = List.foldBack (+) 0 cArray;;

    Error. I am thinking that I need to cast every char to a string before I do the above. I bet,
    however, that there is a much simpler and better solution.

    2. Write a function palindrome : string -> bool, so that palindrome s returns true if the
    string s is a palindrome:

        let isIdentical cArray1 cArray2 = (cArray1 = cArray2);;
        let palinDrome (word:string) = isIdentical(word.ToCharArray(), Array.rev(word.ToCharArray()));;

    I get the value val palinDrome : string -> (char [] * char [] -> bool), which is not what I
    want.

    3. Write a function combinePair : int list -> int list so that combinePair xs returns the
    list with elements from xs combined into pairs. If xs contains an odd number of elements,
    then the last element is thrown away. I have no idea for this one.

  • Is there a java library / package analogous to <stdio.h>?

    - by Roboprog
    I have been doing Java on and off for about 14 years, and almost nothing else the last 6
    years or so. I really hate the java.io package -- its legion of subclasses and adapters. I do
    like exceptions, rather than having to always poll "errno" and the like, but I could surely
    live without declared exceptions. Is there anything that functions like the Unix/ANSI stdio.h
    routines in C?

    I know we will never be rid of java.io and its conventions until Java itself is retired, as
    they have metastasized throughout the many frameworks that have accreted to Java. That said,
    I would like something that works kind of like this (let's call it package javax.stdio):

    Have a main utility class, perhaps FileStar, that can read and write files (or pipes), either
    text or binary, either sequentially or random access, with constructors that mimic fopen()
    and popen(). This class should have a load of useful methods that do things like fread(),
    fwrite(), fgets(), fputs(), fseek(), and whatever else (fprintf()?). Methods that are
    incompatible with the open/construct mode simply throw up (just like some of the collections
    classes/methods do when restricted).

    Then, have a bunch of interfaces that suggest how you intend to use the stream once you have
    created it: Sequential, RandomAccess, ReadOnly, WriteOnly, Text, Binary, plus combinations of
    these that make sense. Perhaps even have methods to return the appropriate type-cast
    (interface), throwing up if you have asked for something incompatible. For extra flavor, skip
    the declared exceptions -- e.g. javax.stdio.IOException extends RuntimeException.

    Is there an open source project like this floating around?

  • How to find the latest row for each group of data

    - by Jason
    Hi all, I have a tricky problem that I'm trying to find the most effective method to solve.
    Here's a simplified version of my view structure.

    Table: Audits

        AuditID | PublicationID | AuditEndDate | AuditStartDate
        1       | 3             | 13/05/2010   | 01/01/2010
        2       | 1             | 31/12/2009   | 01/10/2009
        3       | 3             | 31/03/2010   | 01/01/2010
        4       | 3             | 31/12/2009   | 01/10/2009
        5       | 2             | 31/03/2010   | 01/01/2010
        6       | 2             | 31/12/2009   | 01/10/2009
        7       | 1             | 30/09/2009   | 01/01/2009

    There are 3 queries that I need from this: one to get all the data; the next to get only the
    history data (that is, everything but the latest data item by AuditEndDate); and the last to
    obtain only the latest data item (by AuditEndDate). There's an added layer of complexity in
    that I have a date restriction (on a per user/group basis) where certain user groups can only
    see between certain dates. You'll notice this in the where clause as AuditEndDate <= blah and
    AuditStartDate >= blah.

    For each publication, select all the data available:

        select * from Audits
        where auditEndDate <= '31/03/10' and AuditStartDate >= '06/06/2009';

    For each publication, select all the data but exclude the latest data available (by
    AuditEndDate):

        select * from Audits
        left join (select AuditId as aid, publicationID as pid, max(auditEndDate) as pend
                   from Audit
                   where auditenddate <= '31/03/2009' /* user restrict */
                   group by pid) Ax on Ax.pid = Audit.pubid
        where pend != Audits.auditenddate
          and auditEndDate <= '31/03/10' and AuditStartDate >= '06/06/2009' /* user restrict */

    For each publication, select only the latest data available (by AuditEndDate):

        select * from Audits
        left join (select AuditId as aid, publicationID as pid, max(auditEndDate) as pend
                   from Audit
                   where auditenddate <= '31/03/2009' /* user restrict */
                   group by pid) Ax on Ax.pid = Audit.pubid
        where pend = Audits.auditenddate
          and auditEndDate <= '31/03/10' and AuditStartDate >= '06/06/2009' /* user restrict */

    So at the moment, queries 1 and 3 work fine, but query 2 just returns all the data instead of
    the restriction. Can anyone help me? Thanks, Jason
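
    For clarity, here is a minimal Python sketch (made-up rows, not the SQL fix itself) of the
    semantics the three queries are after -- the history/latest split hinges on comparing each
    row's end date against the per-publication maximum within the user's visible window, which is
    exactly the condition query 2's join has to get right:

        from datetime import date

        # rows: (audit_id, publication_id, end_date, start_date)
        audits = [
            (1, 3, date(2010, 5, 13), date(2010, 1, 1)),
            (2, 1, date(2009, 12, 31), date(2009, 10, 1)),
            (7, 1, date(2009, 9, 30), date(2009, 1, 1)),
        ]

        export_date = date(2010, 3, 31)
        visible = [a for a in audits if a[2] <= export_date]   # per-user date restriction

        latest_end = {}                                        # pid -> max AuditEndDate
        for aid, pid, end, start in visible:
            if pid not in latest_end or end > latest_end[pid]:
                latest_end[pid] = end

        latest  = [a for a in visible if a[2] == latest_end[a[1]]]   # query 3
        history = [a for a in visible if a[2] != latest_end[a[1]]]   # query 2
        print history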

  • Troubleshooting failover cluster problem in W2K8 / SQL05

    - by paulland
    I have an active/passive W2K8 (64) cluster pair, running SQL05 Standard. Shared storage is on
    an HP EVA SAN (FC). I recently expanded the filesystem on the active node for a database,
    adding a drive designation. The shared storage drives are designated as F:, I:, J:, L: and
    X:, with SQL filesystems on the first 4 and X: used for a backup destination.

    Last night, as part of a validation process (the passive node had been offline for
    maintenance), I moved the SQL instance to the other cluster node. The database in question
    immediately moved to Suspect status. Review of the system logs showed that the database would
    not load because the file "K:\SQLDATA\whatever.ndf" could not be found. (Note that we do not
    have a K: drive designation.) A review of the J: storage drive showed zero contents --
    nothing -- this is where "whatever.ndf" should have been. Hmm, I thought, problem with the
    server; I'll just move SQL back to the other server and figure out what's wrong... Still no
    database. Suspect. Uh-oh. "Whatever.ndf" had gone into the bit bucket. I finally decided to
    just restore from the backup (which had been taken immediately before the validation test),
    so nothing was lost but a few hours of sleep.

    The questions: (1) Why did the passive node think the whatever.ndf files were supposed to go
    to drive K:, when this drive didn't exist as a resource on the active node? (2) How can I get
    the cluster nodes "re-synced" so that failover can be accomplished? I don't know that there
    wasn't a K: drive as a cluster resource at some time in the past, but I do know that this
    drive did not exist on the original cluster at the time of the resource move.

  • python 'with' statement

    - by Stephen
    Hi, I'm using Python 2.5 and trying to use the 'with' statement:

        from __future__ import with_statement

        a = []
        with open('exampletxt.txt', 'r') as f:
            while True:
                a.append(f.next().strip().split())
        print a

    The contents of 'exampletxt.txt' are simple: a b

    In this case, I get the error:

        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "/tmp/python-7036sVf.py", line 5, in <module>
            a.append(f.next().strip().split())
        StopIteration

    And if I replace f.next() with f.read(), it seems to be caught in an infinite loop. I wonder
    if I have to write a decorator class that accepts the iterator object as an argument and
    defines an __exit__ method for it? I know it's more Pythonic to use a for loop for iterators,
    but I wanted to implement a while loop within a generator that's called by a for loop...
    something like:

        def g(f):
            while True:
                x = f.next()
                if test(x):
                    a = x
                elif test(x):
                    b = f.next()
                yield [a, x, b]

        a = []
        with open(filename) as f:
            for x in g(f):
                a.append(x)
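
    (For reference, a minimal sketch of one way to keep the while loop, assuming Python 2.5 as
    above: the with statement only guarantees the file gets closed; it does not absorb the
    StopIteration that f.next() raises at end of file, so the loop has to catch it -- a for loop
    would do that implicitly.)

        from __future__ import with_statement

        a = []
        with open('exampletxt.txt', 'r') as f:
            while True:
                try:
                    line = f.next()       # raises StopIteration at end of file
                except StopIteration:
                    break                 # leave the loop; 'with' still closes f
                a.append(line.strip().split())
        print a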

  • Music meta missing from Facebook /home

    - by Peter Watts
    When somebody shares a Spotify playlist, the attachment is missing from the Graph API.
    Facebook itself shows the playlist attachment (screenshot omitted); this is what the Graph
    API returns:

        {
          "id": "********_******",
          "from": {
            "name": "*****",
            "id": "*****"
          },
          "message": "Refused's setlist from last night's secret show in Sweden...",
          "icon": "http://photos-c.ak.fbcdn.net/photos-ak-snc1/v85005/74/174829003346/app_2_174829003346_5511.gif",
          "actions": [
            { "name": "Comment", "link": "http://www.facebook.com/*****/posts/*****" },
            { "name": "Like", "link": "http://www.facebook.com/*****/posts/*****" },
            { "name": "Get Spotify", "link": "http://www.spotify.com/redirect/download-social" }
          ],
          "type": "link",
          "application": {
            "name": "Spotify",
            "canvas_name": "get-spotify",
            "namespace": "get-spotify",
            "id": "174829003346"
          },
          "created_time": "2012-03-01T22:24:28+0000",
          "updated_time": "2012-03-01T22:24:28+0000",
          "likes": {
            "data": [
              { "name": "***** *****", "id": "*****" }
            ],
            "count": 1
          },
          "comments": { "count": 0 },
          "is_published": true
        }

    There's absolutely no reference to an attachment, other than the fact that the type is 'link'
    and the application is Spotify. If you want to test, Spotify's page
    (http://graph.facebook.com/spotify/feed) usually has a playlist or two embedded (and missing
    from the Graph API). Also, if you filter your home feed to just Spotify stories
    (http://graph.facebook.com/me/home?filter=app_174829003346), you'll get a bunch of useless
    stories without attachments (assuming your friends shared music recently).

    Does anyone have any ideas how to access the playlist details, or is it unavailable to third
    party developers? (If so, this is a very bad user experience, because the story makes no
    sense without the attachment.) I am able to fetch scrobbles without any trouble using
    user_actions.listens. Also, if there is a recent activity story, e.g. "Peter listened to The
    Shins", I am able to get information about the band. The problem only happens on attachments.

  • Optimize CUDA with Thrust in a loop

    - by macs
    Given the following piece of code, generating a kind of code dictionary with CUDA using
    Thrust (a C++ template library for CUDA):

        thrust::device_vector<float> dCodes(codes->begin(), codes->end());
        thrust::device_vector<int> dCounts(counts->begin(), counts->end());
        thrust::device_vector<int> newCounts(counts->size());

        for (int i = 0; i < dCodes.size(); i++) {
            float code = dCodes[i];
            int count = thrust::count(dCodes.begin(), dCodes.end(), code);
            newCounts[i] = dCounts[i] + count;

            // Had we already a count in one of the last runs?
            if (dCounts[i] > 0) {
                newCounts[i]--;
            }

            // Remove
            thrust::detail::normal_iterator<thrust::device_ptr<float> > newEnd =
                thrust::remove(dCodes.begin() + i + 1, dCodes.end(), code);
            int dist = thrust::distance(dCodes.begin(), newEnd);
            dCodes.resize(dist);
            newCounts.resize(dist);
        }

        codes->resize(dCodes.size());
        counts->resize(newCounts.size());
        thrust::copy(dCodes.begin(), dCodes.end(), codes->begin());
        thrust::copy(newCounts.begin(), newCounts.end(), counts->begin());

    The problem is that I've noticed multiple copies of 4 bytes in the CUDA Visual Profiler. IMO
    these are generated by:

    - the loop counter i
    - float code, int count and dist
    - every access to i and the variables noted above

    This seems to slow everything down (sequential copying of 4 bytes is no fun...). So, how do I
    tell Thrust that these variables should be handled on the device? Or are they already? Using
    thrust::device_ptr does not seem sufficient to me, because I'm not sure whether the
    surrounding for loop runs on the host or on the device (which could also be another reason
    for the slowness).

  • How to setup Lucene search for a B2B web app?

    - by Bill Paetzke
    Given:

    - 5000 databases (spread out over 5 servers)
    - 1 database per client (so you can infer there are 1000 clients)
    - 2 to 2000 users per client (let's say avg is 100 users per client)
    - Clients (databases) come and go every day (let's assume most remain for at least one year)
    - Let's stay agnostic of language or sql brand, since Lucene (and Solr) have a breadth of support

    The Question: How would you setup Lucene search so that each client can only search within
    its database? How would you setup the index(es)? Would you need to add a filter to all search
    queries? If a client cancelled, how would you delete their (part of the) index? (this may be
    trivial--not sure yet)

    Possible Solutions:

    1. Make an index for each client (database).
       Pro: Search is faster (than one-index-for-all method). Indices are relative to the size of
       the client's data.
       Con: I'm not sure what this entails, nor do I know if this is beyond Lucene's scope.

    2. Have a single, gigantic index with a database_name field. Always include database_name as
       a filter.
       Pro: Not sure. Maybe good for tech support or billing dept to search all databases for
       info.
       Con: Search is slower (than index-per-client method). Flawed security if query filter
       removed.

    For Example: Joel Spolsky said in Podcast #11 that his hosted web app product, FogBugz
    On-Demand, uses Lucene. He has thousands of on-demand clients. And each client gets their own
    database. His situation is quite similar to mine. Although, he didn't elaborate on the setup
    (particularly indices); hence, the need for this question.

    One last thing: I would also accept an answer that uses Solr (the extension of Lucene).
    Perhaps it's better suited for this problem. Not sure.

  • Need help in tuning a sql-query

    - by Viper
    Hello, I need some help to boost this SQL statement. The execution time is around 125 ms.
    During the runtime of my program this SQL (better: equally structured SQLs for different
    tables) will be called 300,000 times. The average row count in the tables lies around
    10,000,000 rows, and new rows (updates/inserts) are added with a timestamp each day. The data
    which is interesting for this particular export program lies in the last 1-3 days; maybe this
    is helpful for creating an index. The data I need is the current valid row for a given id and
    the forerunner data row, to get the updates (if they exist). We use an Oracle 11g database
    and the .NET Framework 3.5.

    SQL statement to boost:

        select ID_SOMETHING,     -- Number(12)
               ID_CONTRIBUTOR,   -- Char(4 Byte)
               DATE_VALID_FROM,  -- DATE
               DATE_VALID_TO     -- DATE
        from TBL_SOMETHING XID
        where ID_SOMETHING = :ID_INSTRUMENT
          and ID_CONTRIBUTOR = :ID_CONTRIBUTOR
          and DATE_VALID_FROM <= :EXPORT_DATE
          and DATE_VALID_TO >= :EXPORT_DATE
        order by DATE_VALID_FROM asc;

    Here I uploaded the current explain plan for this query. I'm not a database expert, so I
    don't know which index type would fit this requirement best; I have seen that there are many
    different possible index types which could be applied. Maybe Oracle optimizer hints are
    helpful, too. Does anyone have a good idea for tuning this SQL, or can point me in the right
    direction?

  • JavaFX doesn't repaint label till method has finished, why?

    - by jeff porter
    Hi all, I have a JavaFX app with some code like this...

        public class MainListener extends EventListener {
            override public function event (arg0 : String) : Void {
                statusText.content = arg0;
            }
        }

    statusText is defined like this...

        var statusText = Text {
            x: 30
            y: stageHeight - 40
            font: Font {
                name: "Bitstream Vera Sans Bold"
                size: 10
            }
            wrappingWidth: 420
            fill: Color.WHITE
            textAlignment: TextAlignment.CENTER
            content: "Status: awaiting DBF file."
        };

    I also have some other Java code that loads data, much like this...

        public ArrayList<CustomerRecord> read(EventListener listener) {
            ArrayList<CustomerRecord> listOfCustomerRecords = new ArrayList<CustomerRecord>();
            listener.event("Status: Starting read");

            // ** takes a while...
            List<Map<String, CustomerField>> customerRecords = new Reader(file).readData(listener);
            // ** long running method over.

            listener.event("Status: Loaded all customers, count:" + listOfCustomerRecords.size());
            return listOfCustomerRecords;
        }

    Now, while the last method is in its long-running call, I would expect to see my statusText
    updated to read 'Status: Starting read', but it isn't. Only when the read() method returns is
    the text updated. If it was 'straight' Java I would presume that the long-running job is
    hogging the CPU, or that statusText needs repaint() called on it. Can anyone give me any
    ideas? Thanks, Jeff Porter

  • How can I speed up Subversion checkins? (Using ANKH, latest, Visual Studio 2010)

    - by Timothy Khouri
    I've started working on a new web project with some friends... we are using the latest
    Subversion server (installed last week) and the latest version of ANKH. My web project is a
    whopping 1.5 megabytes (that's with all images, css files, dll's after compiling, pdb
    files... etc). Checking in even super small changes (literally adding the letter "x" to a few
    files for testing)... takes FOREVER! (about 10 seconds - I almost killed myself). The ANKH
    client is measuring in BYTES PER SECOND... BYTES? per second... I must be doing something
    wrong. Does anyone know what config file has a joke totallyMessWithPeople=true so that I can
    turn that off or something? Oh, also, changing one "big" file of a super 10k gains speed up
    to nearly the speed of light (which is apparently 857 bytes per second). Help me Obi-Wan
    Kenobi, you're my only hope!

    EDIT: As a note... my real work project that uses Visual Source Safe 2005 (I know, ouch)
    uploads files at about 200-500 kbps from this very same computer/internet connection.

  • RFC: Whitespace's Assembly Mnemonics

    - by Noctis Skytower
    Request For Comment regarding Whitespace's assembly mnemonics. What follows is a
    first-generation attempt at creating mnemonics for a Whitespace assembly language.

        STACK
        =====
        push number
        copy
        copy number
        swap
        away
        away number

        MATH
        ====
        add
        sub
        mul
        div
        mod

        HEAP
        ====
        set
        get

        FLOW
        ====
        part label
        call label
        goto label
        zero label
        less label
        back
        exit

        I/O
        ===
        ochr
        oint
        ichr
        iint

    In the interest of making improvements to this small and simple instruction set, this is a
    second attempt:

        hold N      Push the number onto the stack
        copy        Duplicate the top item on the stack
        copy N      Copy the nth item on the stack (given by the argument) onto the top of the stack
        swap        Swap the top two items on the stack
        drop        Discard the top item on the stack
        drop N      Slide n items off the stack, keeping the top item
        add         Addition
        sub         Subtraction
        mul         Multiplication
        div         Integer Division
        mod         Modulo
        save        Store
        load        Retrieve
        L:          Mark a location in the program
        call L      Call a subroutine
        goto L      Jump unconditionally to a label
        if=0 L      Jump to a label if the top of the stack is zero
        if<0 L      Jump to a label if the top of the stack is negative
        return      End a subroutine and transfer control back to the caller
        exit        End the program
        print chr   Output the character at the top of the stack
        print int   Output the number at the top of the stack
        input chr   Read a character and place it in the location given by the top of the stack
        input int   Read a number and place it in the location given by the top of the stack

    What is the general consensus on this revised list of Whitespace's assembly instructions?
    The mnemonics definitely come from thinking outside of the box and trying to come up with a
    better set than last time. When the previous Python interpreter was written, it was completed
    over two contiguous, rushed evenings. This rewrite deserves significantly more time now that
    it is the summer. Of course, the next version of Whitespace (0.4) may have its instructions
    revised even more, but this is just a redesign of what originally was done in a few hours.
    Hopefully, the instructions make more sense to those new to programming jargon.

  • Rails emails not sending in staging environment

    - by jrdioko
    In a Rails application I set up a new staging environment with the following parameters in
    its environments/ file:

        config.action_mailer.perform_deliveries = true
        config.action_mailer.raise_delivery_errors = true
        config.action_mailer.delivery_method = :smtp

    However, when the system generates an email, it gets printed to the staging.log file instead
    of being sent. My SMTP settings work fine in other environments. What configuration am I
    missing to get the emails to actually send?

    Edit: Yes, the staging box is set up with valid configuration for an SMTP server it has
    access to. It seems like the problem isn't with the SMTP settings (if it was, wouldn't I get
    errors in the logs?), but with the Rails configuration. The application is still redirecting
    emails to the log file (saying "Sent mail: ...") as opposed to actually going through SMTP.

    Edit #2: It looks like the emails actually have been sending correctly, they just happen to
    print to the log as well. I'm trying to use the sanitize_email gem to redirect the mail to
    another address, and that doesn't seem to be working, which is why I thought the emails
    weren't going out. So I think that solves my problem, although I'm still curious what in
    ActionMailer's settings controls whether emails are sent, logged to the log file, or both.

    Edit #3: The problem with sanitize_email boiled down to me needing to add the new staging
    environment to ActionMailer::Base.local_environments. I'll keep this question open to see if
    anyone can answer my last question (what determines whether ActionMailer's emails get sent
    out, logged to the log file, or both?)

  • Capturing time intervals when somebody was online? How would you impement this feature?

    - by Kirzilla
    Hello, our aim is to build timelines showing the periods of time when a user was online. (It
    really doesn't matter which user we are talking about or where he was online.) To get
    information about who is online we can call an API method:

        someservice.com/api/?call=whoIsOnline

    The whoIsOnline method gives us a list of users currently online, but there is no API method
    to get information about who IS NOT online. So we have to build our timelines using only the
    information we get from whoIsOnline. Of course there will be a measurement error (we can't
    track the information in real time). Let's suppose we call whoIsOnline every 2 minutes (yes,
    we will run our script from cron every 2 minutes). For example, calling whoIsOnline at 08:00
    returns:

        Peter_id
        Michal_id
        Andy_id

    Calling whoIsOnline at 08:02 returns:

        Michal_id
        Andy_id
        George_id

    As you can see, Peter has gone offline, but we have a new online user - George. The available
    instruments are a DB (MySQL), text files, and key-value storage (Redis/memcache); feel free
    to choose any of them (or even all of them). So, we have to get information like this:

        George_id was online...
        12 May: 08:02-08:30, 12:40-12:46, 20:14-22:36
        11 May: 09:10-12:30, 21:45-23:00
        10 May: was not online

    And now the question... How would you store the information to implement such timelines? How
    would you query/calculate the periods of time when a user was online?

    Additional information: You cannot update information about offline users, only about users
    who are "currently" online. The solution should be flexible: timeline information could be
    represented relative to any timezone. We should keep information only for the last 7 days.
    Every user seen online automatically gets his own identifier in our database.

    Uff... it was really hard for me to write this because my English is pretty bad, but I hope
    my question is clear to you. Thank you.
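
    For illustration, here is a minimal Python sketch of one way the whoIsOnline snapshots could
    be folded into per-user intervals; the in-memory dict and the 3-minute gap threshold are
    assumptions made for the sketch, not part of the question:

        from datetime import datetime, timedelta

        GAP = timedelta(minutes=3)   # assumed: just over the 2-minute poll period

        intervals = {}   # user_id -> list of [first_seen, last_seen] spans

        def record_poll(poll_time, online_ids):
            """Fold one whoIsOnline snapshot into the per-user span lists."""
            for uid in online_ids:
                spans = intervals.setdefault(uid, [])
                if spans and poll_time - spans[-1][1] <= GAP:
                    spans[-1][1] = poll_time                  # same session continues
                else:
                    spans.append([poll_time, poll_time])      # a new session starts

        record_poll(datetime(2010, 5, 12, 8, 0), ['Peter_id', 'Michal_id', 'Andy_id'])
        record_poll(datetime(2010, 5, 12, 8, 2), ['Michal_id', 'Andy_id', 'George_id'])
        print intervals['Peter_id']   # one closed span: Peter was not seen at 08:02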

  • Getting an "i" from GEvent

    - by Cosizzle
    Hello, I'm trying to add an event listener to each icon on the map for when it's clicked. I'm
    storing the information in the database, and the value that I want to retrieve is "i".
    However, when I output "i", I get its last value, which is 5 (there are 6 objects being drawn
    onto the map). Below is the code. What would be the best way to get the value of i, and not
    the object itself?

        var drawLotLoc = function(id) {
            var lotLoc = new GIcon(G_DEFAULT_ICON);                 // create icon object
            lotLoc.image = url + "images/markers/lotLocation.gif";  // set the icon image
            lotLoc.shadow = "";                                     // no shadow
            lotLoc.iconSize = new GSize(24, 24);                    // set the size
            var markerOptions = { icon: lotLoc };

            $.post(opts.postScript, {action: 'drawlotLoc', id: id}, function(data) {
                var markers = new Array();
                // lotLoc[x].description
                // lotLoc[x].lat
                // lotLoc[x].lng
                // lotLoc[x].nighbourhood
                // lotLoc[x].lot
                var lotLoc = $.evalJSON(data);
                for (var i = 0; i < lotLoc.length; i++) {
                    var spLat = parseFloat(lotLoc[i].lat);
                    var spLng = parseFloat(lotLoc[i].lng);
                    var latlng = new GLatLng(spLat, spLng);
                    markers[i] = new GMarker(latlng, markerOptions);
                    myMap.addOverlay(markers[i]);
                    GEvent.addListener(markers[i], "click", function() {
                        console.log(i); // returning 5 in all cases.
                                        // I _need_ this to be unique to the object being clicked.
                        console.log(this);
                    });
                }
            });
        };
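
    The underlying mechanism, shown in Python for illustration (closures capture the variable,
    not its value at creation time; the JavaScript fix is analogous -- give each iteration its
    own scope that binds the current value of i):

        # The same late-binding pitfall: every closure sees the final value of i.
        callbacks = [lambda: i for i in range(6)]
        print [f() for f in callbacks]            # [5, 5, 5, 5, 5, 5]

        # The standard fix: bind the current value once per iteration.
        callbacks = [lambda i=i: i for i in range(6)]
        print [f() for f in callbacks]            # [0, 1, 2, 3, 4, 5]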

  • Why is cell phone software still so primitive?

    - by Tomislav Nakic-Alfirevic
    I don't do mobile development, but it strikes me as odd that features like these aren't
    available by default on most phones:

    - full text search: searches all address book contents, messages, anything else being a plus
    - better call management: e.g. a rotating audio call log, meaning you always have the last N
      calls recorded for your listening pleasure later (your little girl just said her first
      "da-da" while you were on a business trip, you had a telephone job interview, you received
      complex instructions to do something, etc.)
    - bluetooth remote control (like e.g. anyRemote, but available by default on a bluetooth
      phone)
    - no multitasking capabilities worth mentioning, and in general no e.g. weekly software
      updates, which would make the phone much more usable (even if they had to be done over USB
      rather than over the network)

    I'm sure I was dumbfounded by the lack or design of other features as well, but they don't
    come to mind right now. To clarify, I'm not talking about smartphones here: my plain,
    2-year-old phone has a CPU an order of magnitude faster than my first PC and about as much
    storage space, and it's ridiculous how bad (slow, unwieldy) the software is - and it's not
    one phone or one manufacturer. What keeps the (to me) obvious software functionality vacuum
    on a capable hardware platform from being filled up?

  • Haskell math performance

    - by Travis Brown
    I'm in the middle of porting David Blei's original C implementation of Latent Dirichlet
    Allocation to Haskell, and I'm trying to decide whether to leave some of the low-level stuff
    in C. The following function is one example -- it's an approximation of the second derivative
    of lgamma:

        double trigamma(double x) {
            double p;
            int i;
            x = x + 6;
            p = 1 / (x * x);
            p = (((((0.075757575757576 * p - 0.033333333333333) * p + 0.0238095238095238)
                * p - 0.033333333333333) * p + 0.166666666666667) * p + 1) / x + 0.5 * p;
            for (i = 0; i < 6; i++) {
                x = x - 1;
                p = 1 / (x * x) + p;
            }
            return p;
        }

    I've translated this into more or less idiomatic Haskell as follows:

        trigamma :: Double -> Double
        trigamma x = snd $ last $ take 7 $ iterate next (x' - 1, p')
          where
            x' = x + 6
            p  = 1 / x' ^ 2
            p' = p / 2 + c / x'
            c  = foldr1 (\a b -> (a + b * p)) [1, 1/6, -1/30, 1/42, -1/30, 5/66]
            next (x, p) = (x - 1, 1 / x ^ 2 + p)

    The problem is that when I run both through Criterion, my Haskell version is six or seven
    times slower (I'm compiling with -O2 on GHC 6.12.1). Some similar functions are even worse.
    I know practically nothing about Haskell performance, and I'm not terribly interested in
    digging through Core or anything like that, since I can always just call the handful of
    math-intensive C functions through the FFI. But I'm curious about whether there's low-hanging
    fruit that I'm missing -- some kind of extension or library or annotation that I could use to
    speed up this numeric stuff without making it too ugly.

  • Sort and limit queryset by comment count and date using queryset.extra() (django)

    - by thornomad
    I am trying to sort/narrow a queryset of objects based on the number of comments each object
    has, as well as by the timeframe during which the comments were posted. I am using a
    queryset.extra() method (with django_comments, which utilizes generic foreign keys). I got
    the idea for using queryset.extra() (and the code) from here. This is a follow-up question to
    my initial question yesterday (which shows I am making some progress).

    Current code: What I have so far works in that it will sort by the number of comments;
    however, I want to extend the functionality and also be able to pass a time frame argument
    (e.g., 7 days) and return an ordered list of the most commented posts in that time frame.
    Here is what my view looks like with the basic functionality intact:

        import datetime

        from django.contrib.comments.models import Comment
        from django.contrib.contenttypes.models import ContentType
        from django.db.models import Count, Sum
        from django.views.generic.list_detail import object_list

        def custom_object_list(request, queryset, *args, **kwargs):
            '''Extending the list_detail.object_list to allow some sorting.

            Example: http://example.com/video?sort_by=comments&days=7
            Would get a list of the videos sorted by most comments in the last seven days.
            '''
            try:
                # this is where I started working on the date business ...
                days = int(request.GET.get('days', None))
                period = datetime.datetime.utcnow() - datetime.timedelta(days=int(days))
            except (ValueError, TypeError):
                days = None
                period = None
            sort_by = request.GET.get('sort_by', None)
            ctype = ContentType.objects.get_for_model(queryset.model)
            if sort_by == 'comments':
                queryset = queryset.extra(select={
                    'count': """
                        SELECT COUNT(*) AS comment_count
                        FROM django_comments
                        WHERE content_type_id=%s AND object_pk=%s.%s
                    """ % (
                        ctype.pk,
                        queryset.model._meta.db_table,
                        queryset.model._meta.pk.name
                    ),
                }, order_by=['-count']).order_by('-count', '-created')
            return object_list(request, queryset, *args, **kwargs)

    What I've tried: I am not well versed in SQL, but I did try just adding another WHERE
    criterion by hand to see if I could make some progress:

        SELECT COUNT(*) AS comment_count
        FROM django_comments
        WHERE content_type_id=%s AND object_pk=%s.%s
        AND submit_date='2010-05-01 12:00:00'

    But that didn't do anything except mess around with my sort order. Any ideas on how I can add
    this extra layer of functionality? Thanks for any help or insight.
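
    For what it's worth, a sketch of one way the time frame might be folded into that subquery,
    using extra()'s select_params to feed the date instead of pasting it into the SQL by hand;
    the submit_date comparison is an assumption made for the sketch, and it presumes a valid
    period (i.e. that ?days= was actually given):

        # Hedged sketch, not tested: count only comments newer than `period`.
        if sort_by == 'comments' and period is not None:
            count_sql = """
                SELECT COUNT(*) FROM django_comments
                WHERE content_type_id = %%s
                  AND object_pk = %s.%s
                  AND submit_date >= %%s
            """ % (queryset.model._meta.db_table, queryset.model._meta.pk.name)
            queryset = queryset.extra(
                select={'count': count_sql},
                select_params=(ctype.pk, period),  # fills the two %s placeholders in order
            ).order_by('-count', '-created')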

  • Literal ampersands in System.Uri query string

    - by Nathan Baulch
    I'm working on a client app that uses a RESTful service to look up companies by name. It's
    important that I'm able to include literal ampersands in my queries, since this character is
    quite common in company names. However, whenever I pass %26 (the URI-escaped ampersand
    character) to System.Uri, it converts it back to a regular ampersand character! On closer
    inspection, the only two characters that aren't converted back are hash (%23) and percent
    (%25). Let's say I want to search for a company named "Pierce & Pierce":

        var endPoint = "http://localhost/companies?where=Name eq '{0}'";
        var name = "Pierce & Pierce";
        Console.WriteLine(new Uri(string.Format(endPoint, name)));
        Console.WriteLine(new Uri(string.Format(endPoint, Uri.EscapeUriString(name))));
        Console.WriteLine(new Uri(string.Format(endPoint, Uri.EscapeDataString(name))));

    All three of the above combinations return:

        http://localhost/companies?where=Name eq 'Pierce & Pierce'

    This causes errors on the server side, since the ampersand is (correctly) interpreted as a
    query arg delimiter. What I really need it to return is the original string:

        http://localhost/companies?where=Name eq 'Pierce %26 Pierce'

    How can I work around this behavior without discarding System.Uri entirely? I can't replace
    all ampersands with %26 at the last moment because there will usually be multiple query args
    involved and I don't want to destroy their delimiters. Note: A similar problem was discussed
    in this question, but I'm specifically referring to System.Uri.

  • I'm trying to install psycopg2 onto Mac OS 10.6.3; it claims it can't find "stdarg.h" but I can see

    - by cojadate
    I'm desperately trying to successfully install psycopg2 but keep running into errors. The
    latest one seems to involve it not being able to find "stdarg.h" (see output below). However,
    I can see with my own eyes that a file called stdarg.h exists at
    /Developer/SDKs/MacOSX10.4u.sdk/usr/include/stdarg.h (where it claims it can't find
    anything), so I've no idea what to do about it. I'm running Mac OS 10.6.3 and within the last
    few days I've made sure I have all the latest OS developer tools. I have Python 2.6.2 and
    PostgreSQL 8.4, if that makes any difference.

        python setup.py install
        running install
        running build
        running build_py
        running build_ext
        building 'psycopg2._psycopg' extension
        creating build/temp.macosx-10.3-fat-2.6
        creating build/temp.macosx-10.3-fat-2.6/psycopg
        gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -O3 -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.2.1 (dt dec ext pq3)" -DPG_VERSION_HEX=0x080404 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -DHAVE_PQPROTOCOL3=1 -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -I. -I/opt/local/include/postgresql84 -I/opt/local/include/postgresql84/server -c psycopg/psycopgmodule.c -o build/temp.macosx-10.3-fat-2.6/psycopg/psycopgmodule.o
        In file included from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/unicodeobject.h:4,
                         from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:85,
                         from psycopg/psycopgmodule.c:27:
        /Developer/SDKs/MacOSX10.4u.sdk/usr/include/stdarg.h:4:25: error: stdarg.h: No such file or directory
        In file included from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/unicodeobject.h:4,
                         from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:85,
                         from psycopg/psycopgmodule.c:27:
        /Developer/SDKs/MacOSX10.4u.sdk/usr/include/stdarg.h:4:25: error: stdarg.h: No such file or directory
        lipo: can't figure out the architecture type of: /var/folders/MQ/MQ-tWOWWG+izzuZCrAJpzk+++TI/-Tmp-//ccakFhRS.out
        error: command 'gcc' failed with exit status

  • Why MKMapView region is different than requested?

    - by Kamil
    Greetings! I'm saving the map region into user defaults when my iPhone app is closing, like
    this:

        MKCoordinateRegion region = mapView.region;
        [[NSUserDefaults standardUserDefaults] setDouble:region.center.latitude forKey:@"map.location.center.latitude"];
        [[NSUserDefaults standardUserDefaults] setDouble:region.center.longitude forKey:@"map.location.center.longitude"];
        [[NSUserDefaults standardUserDefaults] setDouble:region.span.latitudeDelta forKey:@"map.location.span.latitude"];
        [[NSUserDefaults standardUserDefaults] setDouble:region.span.longitudeDelta forKey:@"map.location.span.longitude"];

    When the app launches again, I read those values back the same way, so that the user can see
    exactly the same map view as last time:

        MKCoordinateRegion region;
        region.center.latitude = [[NSUserDefaults standardUserDefaults] doubleForKey:@"map.location.center.latitude"];
        region.center.longitude = [[NSUserDefaults standardUserDefaults] doubleForKey:@"map.location.center.longitude"];
        region.span.latitudeDelta = [[NSUserDefaults standardUserDefaults] doubleForKey:@"map.location.span.latitude"];
        region.span.longitudeDelta = [[NSUserDefaults standardUserDefaults] doubleForKey:@"map.location.span.longitude"];
        NSLog([NSString stringWithFormat:@"Region read : %f %f %f %f", region.center.latitude, region.center.longitude, region.span.latitudeDelta, region.span.longitudeDelta]);
        [mapView setRegion:region];
        NSLog([NSString stringWithFormat:@"Region on map: %f %f %f %f", mapView.region.center.latitude, mapView.region.center.longitude, mapView.region.span.latitudeDelta, mapView.region.span.longitudeDelta]);

    The region I read from user defaults is (not surprisingly) exactly the same as when it was
    saved. Notice that what is saved comes directly from the map, so it's not transformed in any
    way. I set it back on the map with the setRegion: method, but then it is different! Example
    results:

        Region read : 50.241110 8.891555 0.035683 0.042915
        Region on map: 50.241057 8.891544 0.050499 0.054932

    Does anybody know why this happens?

  • Munging non-printable characters to dots using string.translate()

    - by Jim Dennis
    So I've done this before, and it's a surprisingly ugly bit of code for such a seemingly
    simple task. The goal is to translate any non-printable character into a . (dot). For my
    purposes "printable" does exclude the last few characters from string.printable (newlines,
    tabs, and so on). This is for printing things like the old MS-DOS debug "hex dump" format...
    or anything similar to that (where additional whitespace will mangle the intended dump
    layout). I know I can use string.translate() and, to use that, I need a translation table. So
    I use string.maketrans() for that. Here's the best I could come up with:

        filter = string.maketrans(
            string.translate(string.maketrans('', ''),
                             string.maketrans('', ''),
                             string.printable[:-5]),
            '.' * len(string.translate(string.maketrans('', ''),
                                       string.maketrans('', ''),
                                       string.printable[:-5])))

    ... which is an unreadable mess (though it does work). From there you can use something like:

        for each_line in sometext:
            print string.translate(each_line, filter)

    ... and be happy. (So long as you don't look under the hood.) Now it is more readable if I
    break that horrid expression into separate statements:

        ascii = string.maketrans('', '')  # The whole ASCII character set
        nonprintable = string.translate(ascii, ascii, string.printable[:-5])  # Optional delchars argument
        filter = string.maketrans(nonprintable, '.' * len(nonprintable))

    And it's tempting to do that just for legibility. However, I keep thinking there has to be a
    more elegant way to express this!
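
    For comparison, a sketch of one arguably tidier construction (not from the question; it
    builds the 256-entry table directly and assumes the same "printable minus trailing
    whitespace" definition, Python 2):

        import string

        # Build the translation table in one pass: keep the printable set,
        # map everything else to '.'.
        printable = set(string.printable[:-5])
        table = ''.join(c if c in printable else '.' for c in map(chr, range(256)))

        print '\x01hello\tworld\x02'.translate(table)   # -> .hello.world.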
