Search Results

Search found 9242 results on 370 pages for 'apple push notifications'.

Page 324/370 | < Previous Page | 320 321 322 323 324 325 326 327 328 329 330 331  | Next Page >

  • A weird crash...

    - by Nima
    Hi, I have a piece of code that runs in debug mode in VS2008, C++. The problem is that when I am debugging the code line by line, at a very weird point of the code, it crashes and says:

        debug assertion failed.
        Expression: _BLOCK_TYPE_IS_VALID(pHead->nBlockUse)

    The crash point is on the first closing curly bracket (after mesh->edges[e].needsUpdate=false;). I don't understand why it happens on a curly bracket. Does that make sense to you guys? Can anybody help me understand what is going on?

        for(int e=0; e<mesh->edges.size(); e++) {
            if(mesh->edges[e].valid && mesh->edges[e].v[0]>=0 && mesh->edges[e].v[1]>=0
               && mesh->points[mesh->edges[e].v[0]].writable
               && mesh->points[mesh->edges[e].v[1]].writable) {
                // update v_hat and its corresponding error
                DecEdge Current = DecEdge(e);
                pair<Point, float> ppf = computeVhat(e);
                Current.v_hat = ppf.first;
                Current.error = ppf.second;
                edgeSoup.push(Current);
                mesh->edges[e].needsUpdate = false;
            }
        }

    Read the article

  • CM and Agile validation process of merging to the Trunk?

    - by LoneCM
    Hello All, We are a new Agile shop and we are encountering an issue that I hope others have seen. In our process, the Trunk is considered an integration branch; it does not have to be releasable, but it does have to be stable and functional for others to branch off of. We create Feature branches of the Trunk for new development. All work and testing occurs in these branches. An individual branch pulls up as needed to stay integrated with the Trunk as other features are accepted and committed.

    But now we have numerous feature branches. Each is focused, has a short life cycle, and is pushed to the Trunk as it is completed, so we are not debating the need for the branches, and we are trying very much to be Agile.

    My issue comes in here: I require that the branches pull up from the Trunk at the end of their life cycle and complete the validation and regression testing, and handle all configuration issues, before pushing to the Trunk. Once reintegrated into the Trunk, I ask for at least a build and an automated smoke test. However, I am now getting push back on the Trunk validation. The argument is that the developers can merge the code and do not need the QA validation steps because they already completed the work in the feature branch; therefore, the extra testing is not needed. I have attempted to remind management of the numerous times "brainless" merges have failed. Their solution, instead of a build and regression testing, is to have the developer diff the Feature branch and the newly merged Trunk. That process, in their mind, would replace the regression testing I asked for.

    So what do you require when you reintegrate back to the Trunk? What are the issues that we will encounter if we remove this step and replace it with the diff? Is the cost of staying Agile the additional work of the integration of the branches? Thanks for any input. LoneCM

    Read the article

  • Git repository gets corrupted when I do a large commit: "Possible repository corruption on the remote side"

    - by mindthief
    Hi All, A friend of mine and I have been trying to use git for a project. It is hosted on his server, and I clone it as:

        git clone [email protected]:/path/to/git/repos.git

    Pretty standard stuff, and it works great for a while. But every time one of us adds a large commit (which git supposedly handles very well), on the order of 100MB or so, the git repository gets kind of broken. Basically, at this point I can still push new changes and pull other changes (I think), but when I try to clone the repository in a fresh location using the command above, I get an error message that says:

        $ git clone [email protected]:/path/to/git/repos.git
        Initialized empty Git repository in /local/path/to/repos/.git/
        remote: Counting objects: 1455, done.
        remote: Compressing objects: 100% (1235/1235), done.
        error: git upload-pack: git-pack-objects died with error.
        fatal: git upload-pack: aborting due to possible repository corruption on the remote side.
        remote: aborting due to possible repository corruption on the remote side.
        fatal: early EOF
        fatal: index-pack failed

    This has happened 3 or 4 times now, and it's always when I add a large commit. Any idea why this is happening? How can we fix it? We're both using Mac OS X Snow Leopard. Thanks! -M

    Read the article

  • Mail "Can't continue" for an AppleScript function

    - by Paul J. Lucas
    I'm trying to write an AppleScript for use with Mail (on Snow Leopard) to save image attachments of messages to a folder. The main part of the AppleScript is:

        property ImageExtensionList : {"jpg", "jpeg"}
        property PicturesFolder : path to pictures folder as text
        property SaveFolderName : "Fetched"
        property SaveFolder : PicturesFolder & SaveFolderName

        tell application "Mail"
            set theMessages to the selection
            repeat with theMessage in theMessages
                repeat with theAttachment in every mail attachment of theMessage
                    set attachmentFileName to theAttachment's name
                    if isImageFileName(attachmentFileName) then
                        set attachmentPathName to SaveFolder & attachmmentFileName
                        save theAttachment in getNonexistantFile(attachmentPathName)
                    end if
                end repeat
            end repeat
        end tell

        on isImageFileName(theFileName)
            set dot to offset of "." in theFileName
            if dot > 0 then
                set theExtension to text (dot + 1) thru -1 of theFileName
                return theExtension is in ImageExtensionList
            end if
            return false
        end isImageFileName

    When run, I get the error:

        error "Mail got an error: Can’t continue isImageFileName." number -1708

    where error -1708 is: Event wasn't handled by an Apple event handler. However, if I copy/paste isImageFileName() into another script like:

        property imageExtensionList : {"jpg", "jpeg"}

        on isImageFileName(theFileName)
            set dot to offset of "." in theFileName
            if dot > 0 then
                set theExtension to text (dot + 1) thru -1 of theFileName
                return theExtension is in ImageExtensionList
            end if
            return false
        end isImageFileName

        if isImageFileName("foo.jpg") then
            return true
        else
            return false
        end if

    it works fine. Why does Mail complain about this?

    Read the article

  • Using jQuery tools from within GWT -- is there a way to allow exchanges?

    - by culov
    I have a GWT web app with a nearly full-page Google Maps window. Inside the map, I have info windows which include links. What I want to do is use the jQuery Tools overlay (http://flowplayer.org/tools/overlay/apple.html) to open an overlay and display it on top of the map once the link inside the info window is clicked. Now, the link source is generated dynamically, and it is hosted on a different server, so I have to open it in an iframe and set the source of the iframe before it opens. This seems simple enough since I only have one iframe on the page:

        function changeSource(url){
            $("#menuIFrame").attr("src", url);
        }

    To call this before opening the overlay (which is done on mouse release), I create the following element in the Google Maps info window via GWT:

        <a href="http://whatever.com/menu/" onClick="changeSource('http://whatever.com/menu/');" rel="#menu"> View menu </a>

    jQuery Tools works by opening whatever div has the id assigned to the value of 'rel', but because I have my JavaScript/jQuery on my LandingPage.html in GWT, there appears to be some sort of disconnect between the two: the changeSource function is never called AND the overlay is never added to the window.

    Here's my app: http://truxmapper.appspot.com/

    As you can see, the other overlays work fine, but once you click an info window and try to view the menu, it will simply use the href to open the URL in that window. Does anyone know of a solution that will allow me to accomplish my goal? Thanks!
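
    A hedged sketch of one possible workaround (not from the original post; the "menu-link" class is an assumed marker added to the GWT-generated anchor, and the data("overlay") lookup is an assumption about how jQuery Tools exposes its API that should be checked against its docs). Because the anchor is injected after page load and its default navigation fires before anything else, a delegated handler that prevents the default action and drives the iframe and overlay explicitly may behave more predictably than the inline onClick:

        // Sketch only: delegated handler for links created later by GWT.
        $("a.menu-link").live("click", function (e) {
            e.preventDefault();                       // stop the href from navigating the page
            $("#menuIFrame").attr("src", this.href);  // point the iframe at the dynamic menu URL
            var api = $("#menu").data("overlay");     // assumption: Tools stores its API under data("overlay")
            if (api) {
                api.load();                           // open the overlay programmatically
            }
        });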

    Read the article

  • jQuery Google Visualization API graph's data rows do not work

    - by marharépa
    Hi! I'd like to use the Google drawVisualization API. Example:

        var data = new google.visualization.DataTable();
        data.addColumn('string');
        data.addColumn('number');
        data.addRows([
          ['a', 14], ['b', 47], ['c', 80], ['d', 55], ['e', 16],
          ['f', 90], ['g', 29], ['h', 23], ['i', 58], ['j', 48]
        ]);

    My version gets elements from another Google API, joins them, and then places the variable between ([ and ]) so it looks like the example:

        var outputGraph = [];
        for (var i = 0, entry; entry = entries[i]; ++i) {
            var asd = [
                entry.getValueOf('ga:pageTitle'),
                entry.getValueOf('ga:pageviews')
            ].join("',");
            outputGraph.push(" ['" + asd + "]"); // get the 2 elements and join them to look like ['asd', 2],
        }
        // this is fine, outputGraph looks like ['asd', 2], ['asd', 2], ['asd', 2] as seen in the example
        var outputGraphFine = ("([" + outputGraph + "])"); // I suspect this is what fails the script.

        var data = new google.visualization.DataTable();
        data.addColumn('string', 'Task');
        data.addColumn('number', 'Hours per Day');
        data.addRows = outputGraphFine;

    But it doesn't work. Why?
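
    A hedged sketch of a fix (my addition, not part of the original question; the entries array and getValueOf calls are taken on faith from the post): addRows() expects a real array of row arrays, not a string that merely looks like source code, and assigning to data.addRows replaces the method instead of calling it. Building the rows as nested arrays and passing them to addRows() avoids both problems:

        // Sketch: build real rows instead of string fragments.
        var rows = [];
        for (var i = 0, entry; entry = entries[i]; ++i) {
            rows.push([
                entry.getValueOf('ga:pageTitle'),           // string column
                Number(entry.getValueOf('ga:pageviews'))    // number column; coerce in case it arrives as a string
            ]);
        }

        var data = new google.visualization.DataTable();
        data.addColumn('string', 'Task');
        data.addColumn('number', 'Hours per Day');
        data.addRows(rows);   // call the method with the array; do not assign to it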

    Read the article

  • Precision of Interval for PL/SQL Function value

    - by Gary
    Generally, when you specify a function the scale/precision/size of the return datatype is undefined. For example, you say FUNCTION show_price RETURN NUMBER or FUNCTION show_name RETURN VARCHAR2. You are not allowed to have FUNCTION show_price RETURN NUMBER(10,2) or FUNCTION show_name RETURN VARCHAR2(20), and the function return value is unrestricted. This is documented functionality.

    Now, I get a precision error (ORA-01873) if I push 9999 hours (about 400 days) into the following. The limit exists because the default day precision is 2.

        DECLARE
          v_int INTERVAL DAY (4) TO SECOND(0);
          FUNCTION hhmm_to_interval return INTERVAL DAY TO SECOND IS
            v_hhmm INTERVAL DAY (4) TO SECOND(0);
          BEGIN
            v_hhmm := to_dsinterval('PT9999H');
            RETURN v_hhmm;
          END hhmm_to_interval;
        BEGIN
          v_int := hhmm_to_interval;
        end;
        /

    And it won't allow the precision to be specified directly as part of the datatype returned by the function:

        DECLARE
          v_int INTERVAL DAY (4) TO SECOND(0);
          FUNCTION hhmm_to_interval return INTERVAL DAY (4) TO SECOND IS
            v_hhmm INTERVAL DAY (4) TO SECOND(0);
          BEGIN
            v_hhmm := to_dsinterval('PT9999H');
            RETURN v_hhmm;
          END hhmm_to_interval;
        BEGIN
          v_int := hhmm_to_interval;
        end;
        /

    I can use a SUBTYPE:

        DECLARE
          subtype t_int is INTERVAL DAY (4) TO SECOND(0);
          v_int INTERVAL DAY (4) TO SECOND(0);
          FUNCTION hhmm_to_interval return t_int IS
            v_hhmm INTERVAL DAY (4) TO SECOND(0);
          BEGIN
            v_hhmm := to_dsinterval('PT9999H');
            RETURN v_hhmm;
          END hhmm_to_interval;
        BEGIN
          v_int := hhmm_to_interval;
        end;
        /

    Any drawbacks to the subtype approach? Any alternatives (eg some place to change a default precision)? Working with 10gR2.

    Read the article

  • How do you parse with SAX using Attributes & Values to a URL path using iPhone SDK?

    - by Jim
    I'm trying to get my head around parsing with SAX and thought a good place to start was the TopSongs example found at the iPhone Dev Center. I get most of it, but when it comes to reaching attributes and values within a node I can't find a good example anywhere.

    The XML has a path to a URL for the coverArt, and the XML node looks like this:

        <itms:coverArt height="60" width="60">http://a1.phobos.apple.com/us/r1000/026/Music/aa/aa/27/mzi.pbxnbfvw.60x60-50.jpg</itms:coverArt>

    What I've tried is this for the startElement…

        ((prefix != NULL && !strncmp((const char *)prefix, kName_Itms, kLength_Itms)) &&
         (!strncmp((const char *)localname, kName_CoverArt, kLength_Item) &&
          !strncmp((const char *)attributes, kAttributeName_CoverArt, kAttributeLength_CoverArt) &&
          !strncmp((const char *)attributes, kValueName_CoverArt, kValueLength_CoverArt) ||
          !strncmp((const char *)localname, kName_Artist, kLength_Artist) ||

    and picking it up again with just the localname at the end like this:

        if (!strncmp((const char *)localname, kName_CoverArt, kLength_CoverArt)) {
            importer.currentSong.coverArt = [NSURL URLWithString:importer.currentString];

    The trace is: -[Song setCoverArt:]: unrecognized selector sent to instance.

    Read the article

  • XSLT - How to select top a to top b

    - by user812241
    How can I extract the first 2 C-values ('Baby' and 'Cola') where B is 'RED'? The input instance is:

        <Root>
          <A>
            <B>BLACK</B>
            <C>Apple</C>
          </A>
          <A>
            <B>RED</B>
            <C>Baby</C>
          </A>
          <A>
            <B>GREEN</B>
            <C>Sun</C>
          </A>
          <A>
            <B>RED</B>
            <C>Cola</C>
          </A>
          <A>
            <B>RED</B>
            <C>Mobile</C>
          </A>
        </Root>

    The output instance must be:

        <Root>
          <D>Baby</D>
          <D>Cola</D>
        </Root>

    I thought about a combination of for-each and global variables, but in XSLT it is not possible to change the value of a global variable to break the for-each. I have no idea anymore.

    Read the article

  • Updating progress dialog in Activity from AsyncTask

    - by Laimoncijus
    In my app I am doing some intense work in an AsyncTask, as suggested by the Android tutorials, and showing a ProgressDialog in my main activity:

        dialog = ProgressDialog.show(MyActivity.this, "title", "text");
        new MyTask().execute(request);

    where later MyTask posts results back to the activity:

        class MyTask extends AsyncTask<Request, Void, Result> {
            @Override
            protected Result doInBackground(Request... params) {
                // do some intense work here and return result
            }

            @Override
            protected void onPostExecute(Result res) {
                postResult(res);
            }
        }

    and on result posting, the main activity hides the dialog:

        protected void postResult( Result res ) {
            dialog.dismiss();
            // do something more here with result...
        }

    So everything is working fine here, but I would like to update the progress dialog somehow to be able to show the user some real progress instead of just a dummy "Please wait..." message. Can I somehow access the progress dialog from MyTask.doInBackground, where all the work is done? As I understand it, doInBackground runs as a separate thread, so I cannot "talk" to the main activity from there, and that is why I use onPostExecute to push the result back to it. But the problem is that onPostExecute is called only when all the work is already done, and I would like to update the progress dialog in the middle of doing something. Any tips how to do this?

    Read the article

  • Is jQuery forcing Adobe ColdFusion to abandon the dead flash product line?

    - by crosenblum
    I have been reading a lot about how Flash development/design has died, and as jQuery and, in the near future, HTML5 take over, will this start to push Adobe ColdFusion away from Flash towards less product linking? I mean, I love ColdFusion and want it to continue to grow. However, if Adobe only bought ColdFusion from Macromedia so they could bundle Flash and ColdFusion together, does the death of Flash mean the death of ColdFusion?

    http://topnews.us/content/221385-jobs-says-adobes-flash-waning-and-had-its-day
    http://aext.net/2010/03/javascript-jquery-killing-flash-tutorial-jquery-plugin/

    I really don't mind if Flash dies; I do mind greatly if ColdFusion does. Is the success of Flash linked to ColdFusion? If so, why? Or why not?

    The purpose of this isn't to start some war about Flash's pros and cons. I was only worried that Adobe would cause problems for ColdFusion if Flash had some market/financial problems. That was my main concern. And no, I am not anti-Flash, but my financial sanity depends on ColdFusion being a success, so that is why I stated my question: because I want everyone else's opinion of this situation. Thank you.

    Read the article

  • Sync Framework Considerations for Smart Client app

    - by DarkwingDuck
    Microsoft Sync Framework with SQL 2005? Is it possible? It seems to hint that the OOTB providers use SQL 2008 functionality. I'm looking for some quick wins in relation to a sync project.

    The client app will be offline for a number of days. There will be a central server that MUST be SQL Server 2005. I can use .NET 3.5. Basically the client app could go offline for a week. When it comes back online it needs to sync its data. But the good thing is that the data only needs to push to the server. The stuff that syncs back to the client will just be lookup data which the client never changes. So this means I don't care about sync collisions.

    To simplify the scenario for you, this smart client goes offline and the user surveys data about some observations. They enter the data into the system. When the laptop is reconnected to the network, it syncs back all that data to the server. There will be other clients doing the same thing too, but no one ever touches each other's data. Then there are some reports on the server for viewing the data that has been pushed to the server. This also needs to use ClickOnce.

    My biggest concern is that there is an interim release while a client is offline. This release might require a new field in the database, and a new field to fill in on the survey. Obviously that new field will be nullable because we can't update old data; that's fine to set as an assumption. But when the client connects up and its local data schema and the server schema don't match, will Sync Framework be able to handle this? After the data is pushed to the server it is discarded locally. Hope my problem makes sense.

    Read the article

  • Scapy install issues. Nothing seems to actually be installed?

    - by Chris
    I have an Apple computer running Leopard with Python 2.6. I downloaded the latest version of scapy and ran "python setup.py install". All went according to plan. Now, when I try to run it in interactive mode by just typing "scapy", it throws a bunch of errors. What gives! Just in case, here is the FULL error message:

        INFO: Can't import python gnuplot wrapper . Won't be able to plot.
        INFO: Can't import PyX. Won't be able to use psdump() or pdfdump().
        ERROR: Unable to import pcap module: No module named pcap/No module named pcapy
        ERROR: Unable to import dnet module: No module named dnet
        Traceback (most recent call last):
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/runpy.py", line 122, in _run_module_as_main
            "__main__", fname, loader, pkg_name)
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/runpy.py", line 34, in _run_code
            exec code in run_globals
          File "/Users/owner1/Downloads/scapy-2.1.0/scapy/__init__.py", line 10, in <module>
            interact()
          File "scapy/main.py", line 245, in interact
            scapy_builtins = __import__("all",globals(),locals(),".").__dict__
          File "scapy/all.py", line 25, in <module>
            from route6 import *
          File "scapy/route6.py", line 264, in <module>
            conf.route6 = Route6()
          File "scapy/route6.py", line 26, in __init__
            self.resync()
          File "scapy/route6.py", line 39, in resync
            self.routes = read_routes6()
          File "scapy/arch/unix.py", line 147, in read_routes6
            lifaddr = in6_getifaddr()
          File "scapy/arch/unix.py", line 123, in in6_getifaddr
            i = dnet.intf()
        NameError: global name 'dnet' is not defined

    Read the article

  • If I do "jquery sortable" on contenteditable item(s), I then can't focus the mouse anywhere inside the contenteditable

    - by Matej
    Strangely, this is broken only in Firefox and Opera (IE, Chrome and Safari work as they should). Any suggestion for a quick fix?

        <html>
        <head>
          <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js" type="text/javascript"></script>
          <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.2/jquery-ui.min.js" type="text/javascript"></script>
          <script>
            $(document).ready(function(){
              $('#sortable').sortable();
            });
          </script>
        </head>
        <body>
          <span id="sortable">
            <p contenteditable="true">One apple</p>
            <p>Two pears</p>
            <p>Three oranges</p>
          </span>
        </body>
        </html>
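
    A hedged workaround sketch (my addition, not from the original post): jQuery UI's sortable has a cancel option that tells it not to start a drag from elements matching a selector, which in some setups lets the browser keep caret and focus behaviour inside editable children. Whether this fully resolves the Firefox/Opera focus problem is an assumption to verify:

        // Sketch: don't start sorting drags from editable paragraphs,
        // so clicks inside them can still place the caret.
        $(document).ready(function () {
            $('#sortable').sortable({
                cancel: '[contenteditable=true]'
            });
        });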

    Read the article

  • Mercurial central server file discrepancy (using 'diff to local')

    - by David Montgomery
    Newbie alert! OK, I have a working central Mercurial repository that I've been working with for several weeks. Everything has been great until I hit a really bizarre problem: my central server doesn't seem to be synced to itself? I only have one file that seems to be out of sync right now, but I really need to know how this happened to prevent it from happening in the future. Scenario:

    1) Created the Mercurial repository on the server using an existing project directory. The directory contained the file 'mypage.aspx'.
    2) On my workstation, I cloned the central repository.
    3) I made an edit to mypage.aspx.
    4) hg commit, then hg push from my workstation to the central server.
    5) Now if I look at mypage.aspx in the server's repository using TortoiseHg's repository explorer, I see the change history for mypage.aspx -- an initial check-in and one edit. However, when I select 'Diff to local', it shows the current version on the server's disk is the original version, not the edited version!

    I have not experimented with branching at all yet, so I'm sure I'm not getting a branch problem. 'hg status' on the server or client returns no pending changes. If I create a clone of the server's repository to a new location, I see the same change history as I would expect, but the file on disk doesn't contain my edit.

    So, to recap:

        Central repository = original file, but shows change in revision history (bad)
        Local repository 'A' = updated file, shows change in revision history (good)
        Local repository 'B' = original file, but shows change in revision history (bad)

    Help please! Thanks, David

    Read the article

  • jQuery autocomplete with $array source: how do I make it multiple?

    - by Toni Michel Caubet
    Hello there! I'm using autocomplete so the user can easily enter data on inputs, like this:

        <? $a = new etiqueta(0, '');
           $b = $a->autocomplete_etiquetas(); ?>
        <script type="text/javascript">
            function cargar_autocomplete_etiquetas(){
                $("#tags").autocomplete({
                    source: [<? echo $b; ?>]
                });
            }
        </script>

    $b is an array with a result like: 'help','please','i','need','to','be able to','select next item','with autocomplete'; and I checked the UI documentation, but it doesn't fit with my source method... any idea? I'm trying it like this (edited with Bugai13's contribution):

        <? $a = new etiqueta(0, '');
           $b = $a->autocomplete_etiquetas(); ?>
        <script type="text/javascript">
            function cargar_autocomplete_etiquetas(){
                $("#tags").autocomplete({
                    source: [<? echo $b; ?>],
                    multiple: true,
                    multipleSeparator: ", ",
                    matchContains: true
                });
            }
        </script>

    but I don't know how to do it... any idea? Are .push and .pop functions from the autocomplete, or shall I define them? Thanks again!

    PS: I'm getting addicted to this site!
    PS: Come on dudes, I think the answer will be very useful for many people.
    PS: Is it allowed to offer a PayPal reward?
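
    A hedged sketch of one way to get multiple comma-separated values with jQuery UI 1.8 (my addition, adapted from the pattern in the jQuery UI "multiple values" demo; availableTags stands in for the PHP-generated array, and .push/.pop here are plain JavaScript Array methods, not autocomplete features). The multiple/multipleSeparator options belong to an older autocomplete plugin, so with jQuery UI a custom source and select handler are typically used instead:

        // Sketch: assumes the PHP array is echoed into availableTags.
        var availableTags = [<? echo $b; ?>];

        function split(val) { return val.split(/,\s*/); }
        function extractLast(term) { return split(term).pop(); }

        $("#tags").autocomplete({
            minLength: 0,
            source: function (request, response) {
                // match only against the term after the last comma
                response($.ui.autocomplete.filter(availableTags, extractLast(request.term)));
            },
            focus: function () { return false; },   // don't overwrite the field while arrowing through matches
            select: function (event, ui) {
                var terms = split(this.value);
                terms.pop();                // remove the partial term being typed
                terms.push(ui.item.value);  // add the chosen item
                terms.push("");             // placeholder so join() appends a trailing ", "
                this.value = terms.join(", ");
                return false;
            }
        });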

    Read the article

  • Prototypal inheritance should save memory, right?

    - by Techpriester
    Hi Folks, I've been wondering: using prototypes in JavaScript should be more memory efficient than attaching every member of an object directly to it, for the following reasons: the prototype is just one single object, and the instances hold only references to their prototype. Versus: every instance holds a copy of all the members and methods that are defined by the constructor. I started a little experiment with this:

        var TestObjectFat = function() {
            this.number = 42;
            this.text = randomString(1000);
        }

        var TestObjectThin = function() {
            this.number = 42;
        }
        TestObjectThin.prototype.text = randomString(1000);

    randomString(x) just produces a, well, random string of length x. I then instantiated the objects in large quantities like this:

        var arr = new Array();
        for (var i = 0; i < 1000; i++) {
            arr.push(new TestObjectFat()); // or new TestObjectThin()
        }

    ... and checked the memory usage of the browser process (Google Chrome). I know, that's not very exact... However, in both cases the memory usage went up significantly as expected (about 30MB for TestObjectFat), but the prototype variant used not much less memory (about 26MB for TestObjectThin). I also checked: the TestObjectThin instances contain the same string in their "text" property, so they are really using the property of the prototype.

    Now, I'm not so sure what to think about this. Prototyping doesn't seem to be the big memory saver at all. I know that prototyping is a great idea for many other reasons, but I'm specifically concerned with memory usage here. Any explanations why the prototype variant uses almost the same amount of memory? Am I missing something?
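
    A hedged sketch of one way to sanity-check the experiment (my addition, not from the post): confirm that the thin instances really hold no own copy of text, and grow the instance count so the per-instance difference dominates process-level noise in the browser's memory figure:

        // Sketch: verify that 'text' lives on the prototype, not on each instance.
        var thin = new TestObjectThin();
        console.log(thin.hasOwnProperty("text"));                 // false: no per-instance copy
        console.log(thin.text === TestObjectThin.prototype.text); // true: same shared string

        // The fat variant stores a distinct 1000-character string per instance,
        // so the gap between the two runs should widen as the count grows.
        var arr = [];
        for (var i = 0; i < 100000; i++) {
            arr.push(new TestObjectFat()); // or new TestObjectThin() in the second run
        }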

    Read the article

  • JBoss Clustered Service that sends emails from txt file

    - by michael lucas
    I need a little push in the right direction. Here's my problem: I have to create an ultra-reliable service that sends email messages to clients whose addresses are stored in a txt file on an FTP server. A single txt file may contain an unlimited number of entries; most often the file contains about 300,000 entries. The service exposes an interface with just two simple methods:

        TaskHandle sendEmails(String ftpFilePath);
        ProcessStatus checkProcessStatus(TaskHandle taskHandle);

    The sendEmails() method returns a TaskHandle by which we can ask for the ProcessStatus. For such a service to be reliable, clustering is necessary. Processing a single txt file might take a long time, and restarting one node in the cluster should have no impact on sending emails.

    We use JBoss AS 4.2.0, which comes with a nice HASingletonController that ensures one instance of the service is running at any given time. But once a fail-over happens, the second service should continue work from where the first one stopped. How can I share state between nodes in a cluster in such a way that leaves no possibility of sending some emails twice?

    Read the article

  • Windows Azure - access webrole local storage from separate workerrole

    - by Brett Smith
    I'm running an application on Windows Azure. The MVC views need to be dynamic; I started by storing them as records in the database, but am quite keen to move them to a physical location. My concept was to create the physical files via code, which worked great and speeds up the page load dramatically. This was of course before I realised that the files were only available for the duration of the role.

    Next I looked at a startup task to create the files when the role was started; however, I then realised that separate instances weren't going to sync up unless I monitored the database for changes. So I moved from a startup task to a function in the Run method of the role that checks the database every 10 minutes to see if changes have occurred. The problem is that this seems to choke up the application (at least in the warm-up stage).

    Ideally I would like to move the Run function to its own worker role that can sit there and push files out to web role instances, but I'm unsure how I would go about accessing the web role's local storage from the worker role. Can anybody tell me whether this is actually possible, and hopefully point me in the right direction to achieve it?

    Just to clarify what I'm trying to achieve:
    - A view is created in the user interface running on a web role and stored in the database.
    - A separate web role (front end) has a client-side application with a virtual path provider pointing View requests to local storage (LocalResource).
    - A separate worker role creates the View structure and loads it into the client-side web role's local storage.

    Read the article

  • I don't know how or where to add the correct encoding code to this iPhone code...

    - by BC
    OK, I understand that using strings that have special characters is an encoding issue. However, I am not sure how to adjust my code to allow these characters. Below is the code that works great for text that contains no special characters, but can you show me how and where to change the code to allow the special characters to be used? Right now those characters crash the app.

        - (void)alertView:(UIAlertView *)alertView clickedButtonAtIndex:(NSInteger)buttonIndex{
            if (buttonIndex == 1) {
                //iTunes Audio Search
                NSString *stringURL = [NSString stringWithFormat:@"http://phobos.apple.com/WebObjects/MZSearch.woa/wa/search?WOURLEncoding=ISO8859_1&lang=1&output=lm&term=\"%@\"",currentSong.title];
                stringURL = [stringURL stringByAddingPercentEscapesUsingEncoding:NSASCIIStringEncoding];
                NSURL *url = [NSURL URLWithString:stringURL];
                [[UIApplication sharedApplication] openURL:url];
            }
        }

    And this:

        -(IBAction)launchLyricsSearch:(id)sender{
            WebViewController * webView = [[WebViewController alloc] initWithNibName:@"WebViewController" bundle:[NSBundle mainBundle]];
            webView.webURL = [NSString stringWithFormat:@"http://www.google.com/m/search?hl=es&q=\"%@\"+letras",currentSong.title];
            webView.webTitle = @"Letras";
            [self.navigationController pushViewController:webView animated:YES];
        }

    Please show me how and where to do this for these two bits of code.

    Read the article

  • git crlf configuration in mixed environment

    - by Jonas Byström
    I'm running a mixed environment, and keep a central, bare repository where I pull and push most of my stuff. This centralized repository runs on Linux, and I check out to Windows XP/7, Mac and Linux. In all repositories I put the following in my .git/config:

        [core]
            autocrlf = true

    I don't have the flag safecrlf=true anywhere. The first time I modify stuff on one Windows machine (XP) there is no problem, and when I look at the diff, it looks fine. But when I do the same on the other Windows machine (7), all lines are shown as changed, even though the local line endings are \r\n as expected (when checked in a hex editor). The same applies to a Mac OS X machine.

    Sometimes I get the feeling that the different systems wrestle over line endings, but I can't be sure (I'm losing track of all the times I change specific files). I didn't use to have the autocrlf flag set, but set it many months back. Could that be causing my current problems? Do I need to clone everything again to lose some old baggage? Or are there other things that need configuring too? I tried git checkout -- . about a million times, but with no success.

    Read the article

  • How can you transform a set of numbers into mostly whole ones?

    - by Alice
    Small amount of background: I am working on a converter that bridges between a map maker (Tiled) that outputs XML, and an engine (Angel2D) that inputs Lua tables. Most of this is straightforward. However, Tiled outputs in pixel offsets (integers of absolute values), while Angel2D inputs OpenGL units (floats of relative values); a conversion factor between these two is needed (for example, 32px = 1gu).

    Since OpenGL units are abstract, and the camera can zoom in or out if the objects are too small or big, the actual conversion factor isn't important; I could use a random number, and the user would merely have to zoom in or out. But it would be best if the conversion factor was selected such that most numbers output were small and whole (or fractions of small whole numbers), because that makes them easier to work with (and the whole point of the OpenGL units is that they are easy to work with). How would I find such a conversion factor reliably?

    My first attempt was to use the smallest number given; this resulted in no fractions below 1, but often led to lots of decimal places where the factors didn't line up. Then I tried the mode of the sequence, which led to the largest number of 1's possible, but often led to very long floats for background images. My current approach gets the GCD of the whole sequence, which, when it works, works great, but can easily be thrown off course by a single bad apple.

    Note that while I could easily just pass the numbers I am given along, or pick some fixed factor, or use one of the conversions I specified above, I am looking for a method to reliably scale this list of integers to small, whole numbers or simple fractions, because this would most likely be unsurprising to the end user; this is not a one-off conversion. The end users tend to use 1.0 as their "base" for manipulations (because it's simple and obvious), so it would make more sense for the sizes of entities to cluster around this.
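
    A hedged sketch of one robust alternative (my addition, not from the post; the function name, threshold and example values are illustrative): instead of the exact GCD of everything, score candidate divisors by how many of the inputs they divide exactly and keep the largest divisor that satisfies most of them, so a single bad apple cannot drag the factor down to 1.

        // Sketch: pick the largest divisor that evenly divides at least `ratio` of the values.
        // Brute force over candidates is fine for map-sized input lists.
        function robustScaleFactor(values, ratio) {
            var best = 1;
            var maxCandidate = Math.max.apply(null, values);
            for (var d = 2; d <= maxCandidate; d++) {
                var hits = 0;
                for (var i = 0; i < values.length; i++) {
                    if (values[i] % d === 0) hits++;
                }
                if (hits >= ratio * values.length) best = d;   // keep the largest qualifying divisor
            }
            return best; // e.g. 32, so each pixel offset maps to offset / 32 OpenGL units
        }

        // Example: one outlier (33) no longer forces the factor down to 1.
        robustScaleFactor([32, 64, 96, 33, 128, 160], 0.8); // -> 32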

    Read the article

  • jQuery bug?? Long decimal number after multiplying two numbers...

    - by Jerry
    Hi all, I am working on a shopping site and I am trying to calculate the subtotal of products. I get the price from an array and the quantity from a getJSON response array. Multiplying the two gives my subtotal. I can change the quantity and a different subtotal comes out. However, when I change the quantity to certain numbers, the final subtotal comes out like 259.99999999994 or some other long decimal number. I used console.log to check $price and $qty; both of them are in the correct format, e.g. a price of 299.99 and a quantity of 6. I have no idea what is happening. I would appreciate it if someone could help me with it. Here is my jQuery code:

        $(".price").each(function(index, price){
            $price = $(this);
            // get the product id and the price shown on the page
            var id = $price.closest('tr').attr('id');
            var indiPrice = $($price).html();
            // take off $
            indiPrice = indiPrice.substring(1)
            // make sure it is number format
            var aindiPrice = Number(indiPrice);
            // push into the array
            productIdPrice[id] = (aindiPrice);

        var url = update.php
        $.getJSON(
            url,
            {productId: tableId,   // tableId is from the other jQuery code, which refers to productId
             qty: qty},
            function(responseProduct){
                $.each(responseProduct, function(productIndex, Qty){
                    // loop the return data
                    if(productIdPrice[productIndex]){
                        // get the price from the previous array we created X Qty
                        newSub = productIdPrice[productIndex] * Number(Qty);
                        // productIdPrice[productIndex] is the price, like 199.99 or 99.99
                        // Qty is the quantity, like 9 or 10 or 3
                        sum += newSub;
                        newSub.toFixed(2); // tried to solve the problem with toFixed but it didn't work
                        console.log("id: " + productIdPrice[productIndex])
                        console.log("Qty: " + Qty);
                        console.log(newSub);
                        // newSub sometimes becomes XXXX.96999999994
                    };

    Thanks again!
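
    A hedged sketch of the usual fix (my addition, not from the post): binary floating point cannot represent values like 299.99 exactly, so products pick up tiny errors; also, toFixed(2) returns a new string rather than changing newSub in place. Doing the arithmetic in whole cents and formatting only for display sidesteps both issues:

        // Sketch: work in integer cents, format only when displaying.
        var priceCents = Math.round(299.99 * 100);   // 29999
        var qty = 6;
        var subCents = priceCents * qty;             // exact integer arithmetic: 179994
        var sumCents = 0;
        sumCents += subCents;

        var display = (subCents / 100).toFixed(2);   // "1799.94" -- note: toFixed returns a string
        console.log(display);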

    Read the article

  • Differences between Assembly Code output of the same program

    - by ultrajohn
    I have been trying to replicate the buffer overflow example3 from the article by Aleph One. I'm doing this as practice for a project in a computer security course I'm taking, so please, I badly need your help. I have been following the example, performing the tasks as I go along. My problem is that the assembly code dumped by gdb on my computer (I'm doing this on a Debian Linux image running on VMware) is different from that of the example in the article. There are some constructs which I find confusing.

    Here is the one from my computer:

    Here is the one from the article:

        Dump of assembler code for function main:
        0x8000490 <main>:     pushl %ebp
        0x8000491 <main+1>:   movl %esp,%ebp
        0x8000493 <main+3>:   subl $0x4,%esp
        0x8000496 <main+6>:   movl $0x0,0xfffffffc(%ebp)
        0x800049d <main+13>:  pushl $0x3
        0x800049f <main+15>:  pushl $0x2
        0x80004a1 <main+17>:  pushl $0x1
        0x80004a3 <main+19>:  call 0x8000470 <function>
        0x80004a8 <main+24>:  addl $0xc,%esp
        0x80004ab <main+27>:  movl $0x1,0xfffffffc(%ebp)
        0x80004b2 <main+34>:  movl 0xfffffffc(%ebp),%eax
        0x80004b5 <main+37>:  pushl %eax
        0x80004b6 <main+38>:  pushl $0x80004f8
        0x80004bb <main+43>:  call 0x8000378 <printf>
        0x80004c0 <main+48>:  addl $0x8,%esp
        0x80004c3 <main+51>:  movl %ebp,%esp
        0x80004c5 <main+53>:  popl %ebp
        0x80004c6 <main+54>:  ret
        0x80004c7 <main+55>:  nop

    As you can see, there are differences between the two. I'm confused and I can't totally understand the assembly code from my computer. I would like to know the differences between the two. How is pushl different from push, mov vs movl, and so on? What does the expression 0xhexvalue(%register) mean? I am sorry if I'm asking a lot, but I badly need your help. Thanks for the help really...

    Read the article

  • Sending and receiving IM messages via controller in Rails

    - by Grnbeagle
    Hi, I need a way to handle XMPP communication in my Rails app. My requirements are:

    1. Keep an instance of an XMPP client running and logged in as one specific user (my bot user).
    2. Trigger an event from a controller to send a message and wait for a reply. The message is sent to another machine equipped with a bot, so the reply is supposed to be returned quickly.

    I installed xmpp4r and backgrounDrb similar to what's described here, but backgrounDrb seems to have evolved and I couldn't get it to wait for a reply. If it has to happen asynchronously, I am willing to use a server-push technology to notify the browser when the reply arrives. To give you a better idea, here are snippets of my code:

    (In controller)

        class ServicesController < ApplicationController
          layout 'simple'

          def index
            render :text => "index"
          end

          def show
            @my_service = Service.find(params[:id])
            worker = MiddleMan.worker(:jabber_agent_worker)
            worker.send_request(:arg => {:jid => "someuser@someserver", :cmd => "help"})
            render :text => "testing"
          end
        end

    (In worker script)

        require 'xmpp4r'
        require 'logger'

        class JabberAgentWorker < BackgrounDRb::MetaWorker
          set_worker_name :jabber_agent_worker

          def create(args = nil)
            jid = Jabber::JID.new('myagent@myserver')
            @client = Jabber::Client.new(jid)
            @client.connect
            @client.auth('pass')
            @client.send(Jabber::Presence.new.set_show(:chat).set_status('BackgrounDRb'))
            @client.add_message_callback do |message|
              logger.info("**** messaged received: #{message}")  # never reaches here
            end
          end

          def send_request(args = nil)
            to_jid = Jabber::JID.new(args[:jid])
            message = Jabber::Message::new(to_jid, args[:cmd]).set_type(:normal).set_id('1')
            @client.send(message)
          end
        end

    If anyone can tell me any of the following, I'd much appreciate it:
    - the issue with my backgrounDrb usage
    - other background process alternatives appropriate for XMPP interactions
    - other ways of achieving this

    Thanks in advance.

    Read the article
