Search Results

Search found 18154 results on 727 pages for 'support multilanguage so'.

Page 648/727 | < Previous Page | 644 645 646 647 648 649 650 651 652 653 654 655  | Next Page >

  • Make conversion to a native type explicit in C++

    - by Tal Pressman
    I'm trying to write a class that implements 64-bit ints for a compiler that doesn't support long long, to be used in existing code. Basically, I should be able to have a typedef somewhere that selects whether I want to use long long or my class, and everything else should compile and work. So, I obviously need conversion constructors from int, long, etc., and the respective conversion operators (casts) to those types. This seems to cause errors with arithmetic operators. With native types, the compiler "knows" that when operator*(int, char) is called, it should promote the char to int and call operator*(int, int) (rather than casting the int to char, for example). In my case it gets confused between the various built-in operators and the ones I created. It seems to me like if I could flag the conversion operators as explicit somehow, that it would solve the issue, but as far as I can tell the explicit keyword is only for constructors (and I can't make constructors for built-in types). So is there any way of marking the casts as explicit? Or am I barking up the wrong tree here and there's another way of solving this? Or maybe I'm just doing something else wrong...
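
    A minimal sketch of the ambiguity being described (all names are illustrative, not the poster's actual class): with an implicit conversion back to a native type, a mixed expression like x * 7 gives the compiler two equally good routes, and C++11's explicit conversion operators (exactly the flag being asked for) are not available on a pre-C++11 compiler.

    ```cpp
    // Illustrative sketch only; assumes nothing about the real 64-bit layout.
    struct Int64 {
        long value;                                 // stand-in for real 64-bit storage
        Int64(long v = 0) : value(v) {}             // conversion constructor from a native type
        operator long() const { return value; }     // implicit conversion back out
        // C++11 only: "explicit operator long() const;" would keep this conversion
        // out of overload resolution unless a cast is written.
    };

    Int64 operator*(Int64 a, Int64 b) { return Int64(a.value * b.value); }

    int main() {
        Int64 x(6);
        // Ambiguous: the compiler can convert x to long and use the built-in
        // operator*, or convert 7 to Int64 and use the overload above.
        // Int64 y = x * 7;
        Int64 y = x * Int64(7);                     // explicit construction resolves it
        return static_cast<long>(y) == 42 ? 0 : 1;
    }
    ```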

    Read the article

  • Context.Items.Add on an XML return type?

    - by Scott Schluer
    So I have the following code contained within an HttpModule in an application I've been asked to support: app.Context.Response.ContentType = "text/xml"; app.Context.Items.Add("IpixRoomId", ipixRoomId); app.Context.Items.Add("IpixId", ipixId); app.Context.Response.Cache.SetCacheability(HttpCacheability.NoCache); app.Context.RewritePath(rewriteUrl, true); What's the purpose of adding data to Context.Items when the content type is XML? EDIT: For clarification, I'm calling up this URL: http://website.com/virtualtour/1971/6284/panorama2flash.swf I assume the SWF file (I know very little about Flash) makes another call to http://website.com/virtualtour/config.xml. The code I pasted above only executes on calls to config.xml. So since it's only the SWF file and config.xml being requested from the server, I'm a little confused. Can the .SWF file have access to HttpContext.Current.Items? Other than the HttpModule, there is no .NET involved in the code, it's a straight request to the SWF file which triggers a call to config.xml but it seems that those Context.Items contain the data needed to make the SWF file display the right virtual tour. I'm just missing where that link happens. It can't happen in the XML, so maybe in Flash?
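
    As a rough sketch of why those items might matter (assuming the rewriteUrl points at a server-side handler rather than the static config.xml file): Context.Items lives for the duration of the single request, so whatever handler produces the XML after RewritePath can read the values the module stored. The handler name below is hypothetical, not from the article.

    ```csharp
    using System.Web;

    // Hypothetical handler the rewritten request might land on.
    public class ConfigXmlHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            // Values stored by the HttpModule earlier in the same request.
            object roomId = context.Items["IpixRoomId"];
            object ipixId = context.Items["IpixId"];

            context.Response.ContentType = "text/xml";
            context.Response.Write(
                "<config roomId=\"" + roomId + "\" ipixId=\"" + ipixId + "\" />");
        }
    }
    ```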

    Read the article

  • Error when using a jQuery web service call to populate a dropdownlist

    - by bugz
    For some reason, when I tested parsing a JSON string into a dropdownlist, it worked, doing this: var json = [{ "Id": "12345", "WorkUnitId": "SR0001954", "Description": "Test Service Request From Serena", "WorkUnitCategory": "ServiceRequest" }, { "Id": "12355", "WorkUnitId": "WOR001854", "Description": "Test Work Order From Serena", "WorkUnitCategory": "ServiceRequest" }, { "Id": "12365", "WorkUnitId": "DBR001274", "Description": "Test Database Related Service Request From Serena", "WorkUnitCategory": "ServiceRequest"}]; $(json).each(function() { comboboxWorkUnit.append($('<option>').val(this.Id).text(this.WorkUnitId)); }); However, when I tried taking the JSON from my web service and placing it into the dropdownlist, I get an error saying the dropdownlist doesn't support this method. $.ajax({ type: "POST", url: "Services/WorkUnitService.asmx/WorkUnit", data: "{'Number' : '" + Number.text() + "','Id': '" + combobox.val() + "','workerType': '" + EmployeeType + "'}", contentType: "application/json; charset=utf-8", dataType: "json", success: function(json) { $(json).each(function() { comboboxWorkUnit.append($('<option>').val(this.Id).text(this.WorkUnitId)); }); } }); I also tried the following, which called the web service method but threw an error: $.postJSON("Services/WorkUnitService.asmx/WorkUnitsForParAndPhase", "{'parNumber' : '" + parNumber + "','phaseId': '" + combobox.val() + "','workerType': '" + EmployeeType + "'}", function(data) { alert(data.toString()); }); And I tried this, which never called the web service: $.getJSON('Services/WorkUnitService.asmx/WorkUnitsForParAndPhase' + parNumber + '/' + combobox.val() + '/' + EmployeeType, function(myData) { alert(myData.toString()); });
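
    For reference, one frequently cited cause of this kind of failure (not confirmed by the article) is that ASP.NET 3.5+ ASMX services wrap their JSON response in a d property, so the success callback receives an object rather than the expected array. A hedged sketch of the $.ajax call adjusted for that, reusing the names from the question (JSON.stringify assumes native JSON support or json2.js):

    ```javascript
    $.ajax({
        type: "POST",
        url: "Services/WorkUnitService.asmx/WorkUnit",
        data: JSON.stringify({                      // safer than hand-built JSON strings
            Number: Number.text(),
            Id: combobox.val(),
            workerType: EmployeeType
        }),
        contentType: "application/json; charset=utf-8",
        dataType: "json",
        success: function (msg) {
            var items = msg.d || msg;               // unwrap the ASMX envelope if present
            $.each(items, function (i, item) {
                comboboxWorkUnit.append(
                    $('<option>').val(item.Id).text(item.WorkUnitId));
            });
        }
    });
    ```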

    Read the article

  • Use `require()` with `node --eval`

    - by rentzsch
    When utilizing node.js's newish support for --eval, I get an error (ReferenceError: require is not defined) when I attempt to use require(). Here's an example of the failure: $ node --eval 'require("http");' undefined:1 ^ ReferenceError: require is not defined at eval at <anonymous> (node.js:762:36) at eval (native) at node.js:762:36 $ Here's a working example of using require() typed into the REPL: $ node > require("http"); { STATUS_CODES: { '100': 'Continue' , '101': 'Switching Protocols' , '102': 'Processing' , '200': 'OK' , '201': 'Created' , '202': 'Accepted' , '203': 'Non-Authoritative Information' , '204': 'No Content' , '205': 'Reset Content' , '206': 'Partial Content' , '207': 'Multi-Status' , '300': 'Multiple Choices' , '301': 'Moved Permanently' , '302': 'Moved Temporarily' , '303': 'See Other' , '304': 'Not Modified' , '305': 'Use Proxy' , '307': 'Temporary Redirect' , '400': 'Bad Request' , '401': 'Unauthorized' , '402': 'Payment Required' , '403': 'Forbidden' , '404': 'Not Found' , '405': 'Method Not Allowed' , '406': 'Not Acceptable' , '407': 'Proxy Authentication Required' , '408': 'Request Time-out' , '409': 'Conflict' , '410': 'Gone' , '411': 'Length Required' , '412': 'Precondition Failed' , '413': 'Request Entity Too Large' , '414': 'Request-URI Too Large' , '415': 'Unsupported Media Type' , '416': 'Requested Range Not Satisfiable' , '417': 'Expectation Failed' , '418': 'I\'m a teapot' , '422': 'Unprocessable Entity' , '423': 'Locked' , '424': 'Failed Dependency' , '425': 'Unordered Collection' , '426': 'Upgrade Required' , '500': 'Internal Server Error' , '501': 'Not Implemented' , '502': 'Bad Gateway' , '503': 'Service Unavailable' , '504': 'Gateway Time-out' , '505': 'HTTP Version not supported' , '506': 'Variant Also Negotiates' , '507': 'Insufficient Storage' , '509': 'Bandwidth Limit Exceeded' , '510': 'Not Extended' } , IncomingMessage: { [Function: IncomingMessage] super_: [Function: EventEmitter] } , OutgoingMessage: { [Function: OutgoingMessage] super_: [Function: EventEmitter] } , ServerResponse: { [Function: ServerResponse] super_: [Circular] } , ClientRequest: { [Function: ClientRequest] super_: [Circular] } , Server: { [Function: Server] super_: { [Function: Server] super_: [Function: EventEmitter] } } , createServer: [Function] , Client: { [Function: Client] super_: { [Function: Stream] super_: [Function: EventEmitter] } } , createClient: [Function] , cat: [Function] } > Is there a way to use require() with node's --eval? I'm on node 0.2.6 on Mac OS X 10.6.5.

    Read the article

  • IIS7 URL Rewriting: How not to drop HTTPS protocol from rewritten URL?

    - by Scott Mitchell
    I'm working on a website that's using IIS 7's URL rewriting feature to do a permanent redirect from example.com to www.example.com, as well as rewrites from similar domain names to the "main" one, such as from www.examples.com to www.example.com. This rewrite rule - shown below - has worked well for some time now. However, we recently added HTTPS support and noticed that if users visit one of the URLs to be rewritten to www.example.com then HTTPS is dropped. For instance, if a user visits https://example.com they get redirected to http://www.example.com, whereas we would like them to be sent to https://www.example.com. Here is the rewrite rule of interest (in Web.config): <rule name="Canonical Host Name" stopProcessing="true"> <match url="(.*)" /> <conditions logicalGrouping="MatchAny"> <add input="{HTTP_HOST}" pattern="^example\.com$" /> <add input="{HTTP_HOST}" pattern="^(www\.)?example\.net$" /> <add input="{HTTP_HOST}" pattern="^(www\.)?example\.info$" /> <add input="{HTTP_HOST}" pattern="^(www\.)?examples\.com$" /> </conditions> <action type="Redirect" url="http://www.example.com/{R:1}" redirectType="Permanent" /> </rule> As you can see, the action element's url attribute points directly to http://, so I get why https://example.com is redirected to http://www.example.com. My question is, how do I fix this? I tried (naively) to just drop the http:// part from the url attribute, but that didn't work. Thanks!
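
    For context, the approach most often suggested for this (hedged here, since the accepted fix isn't shown) is to derive the scheme from the {HTTPS} server variable through a rewrite map instead of hard-coding http://. A sketch with a trimmed set of conditions:

    ```xml
    <!-- Sketch: the map turns {HTTPS} ("on"/"off") into a scheme, and the
         redirect action uses it so https://example.com stays on https. -->
    <rewrite>
      <rewriteMaps>
        <rewriteMap name="MapProtocol">
          <add key="on"  value="https" />
          <add key="off" value="http" />
        </rewriteMap>
      </rewriteMaps>
      <rules>
        <rule name="Canonical Host Name" stopProcessing="true">
          <match url="(.*)" />
          <conditions logicalGrouping="MatchAny">
            <add input="{HTTP_HOST}" pattern="^example\.com$" />
            <add input="{HTTP_HOST}" pattern="^(www\.)?examples\.com$" />
          </conditions>
          <action type="Redirect" url="{MapProtocol:{HTTPS}}://www.example.com/{R:1}"
                  redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
    ```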

    Read the article

  • Mercurial: Class library that will exist for both .NET 3.5 and 4.0?

    - by Lasse V. Karlsen
    I have a rather big class library written in .NET 3.5 that I'd like to upgrade to make available for .NET 4.0 as well. In that process, I will rip out a lot of old junk, and rewrite some code to better take advantage of the new classes and support in .NET 4.0 (like the TPL). The class libraries will thus diverge, but still be similar enough that some bug-fixes can be done to both in the same manner. How should I best organize this class library in Mercurial? I'm using Kiln (FogBugz), if that matters. I'm thinking: named branches in one repository, so I can transplant any bugfixes from one to the other; unnamed branches in one repository, where I can also transplant, but I think this will look messy; or separate repositories, where I will have to reimplement the bugfixes (or use a non-Mercurial-integrated compare tool to help me). What would you do? (Any other alternatives that I haven't thought of are welcome as well.) Note that the class libraries will diverge pretty heavily in areas; I have some remnants of old collection-type code that does something similar to LINQ that I will remove, and some code that uses it that I will rewrite to use the LINQ methods instead. As such, just copying the project files and using #if NET40..#endif sections is not going to work out. Also, the 3.5 version of the class library will not be getting many new features, mostly just critical bug-fixes, so keeping both versions equally "alive" isn't really necessary. Thus, separate copies of all the files are good enough.
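
    A short, hedged sketch of what the named-branch option could look like in plain Mercurial commands (graft requires Mercurial 2.0+, transplant is a bundled extension that has to be enabled, and the branch name is illustrative):

    ```sh
    # Start a .NET 4.0 line inside the same repository
    hg branch net40
    hg commit -m "Open branch for the .NET 4.0 rework"

    # Later: copy a bug-fix changeset made on the 3.5 line (revision REV)
    hg update net40
    hg graft REV        # Mercurial 2.0+; or "hg transplant REV" with the extension enabled
    ```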

    Read the article

  • jQuery way to replace just a text node with a mix of HTML and text

    - by hippietrail
    In my web browser userscript project I need to replace just one text node without affecting the other HTML elements under the same parent node as the text. And I need to replace it with more than one node: <div id="target"> some text<img src="/image.png"> </div> Needs to become: <div id="target"> <a href="#">mixed</a> text <a href="#">and</a> HTML<img src="/image.png"> </div> I know jQuery doesn't have a whole lot of support for text nodes. I know I could use direct DOM calls instead of jQuery. And I know I could just do something like $('#target').html(my new stuff + stuff I don't want to change). What I'd like to ask the experts here is, Is there a most idiomatic jQuery way to do this?
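
    One idiomatic-looking sketch (hedged; whether it is the most idiomatic way is exactly what is being asked, and it assumes a reasonably recent jQuery): pull the element's child nodes with .contents(), keep only the non-empty text nodes, and swap the first one out.

    ```javascript
    // nodeType 3 is a text node; the markup string is the desired replacement.
    var textNode = $('#target').contents().filter(function () {
        return this.nodeType === 3 && $.trim(this.nodeValue) !== '';
    }).first();

    textNode.before('<a href="#">mixed</a> text <a href="#">and</a> HTML');
    textNode.remove();   // the <img> and other element children are left untouched
    ```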

    Read the article

  • Servlets: forwarding to a resource in a different webapp

    - by skaffman
    I'm trying to construct a java web app along modular principles, with some common resources (JSPs, mainly) in one WAR, and some custom resources in another. This means JSPs scattered across different WARs. Now JavaEE frowns upon this sort of shenanigans, and wants you to put everything in one place. My current workaround to this is to have an Eclipse-triggered Ant script which copies the content of one WAR into the other, but this is not a pleasant solution (it's fragile and too IDE-dependent). Ideally, what I'd like to be able to do is for a servlet to forward to a JSP located in a different WAR to one in which it is itself deployed. This would allow greater freedom in how I assemble my WARs. However, the RequestDispatcher does not seem to support such things. Another possibility is to use <c:import>, which does allow resources to be imported from a different WAR (with some caveats). This would probably allow me to have a "hook" JSP in one WAR, which then drags in the required JSP from another. This is a bit clunky, though, and the fact that <c:import> permits it shows that the underlying servlet API does also. But how do I access that functionality via the RequestDispatcher in a servlet?
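
    For illustration, the servlet-API route that <c:import> hints at is usually described as going through the other application's ServletContext. A hedged sketch; the context path and JSP path are made up, and the container has to permit cross-context access (e.g. crossContext="true" on Tomcat):

    ```java
    import java.io.IOException;
    import javax.servlet.RequestDispatcher;
    import javax.servlet.ServletContext;
    import javax.servlet.ServletException;
    import javax.servlet.http.*;

    public class CommonViewServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // Ask the container for the other WAR's context by its context path.
            ServletContext common = getServletContext().getContext("/common");
            if (common == null) {                       // cross-context access not allowed
                resp.sendError(HttpServletResponse.SC_NOT_FOUND);
                return;
            }
            RequestDispatcher rd = common.getRequestDispatcher("/views/shared.jsp");
            rd.forward(req, resp);                      // some containers only allow include()
        }
    }
    ```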

    Read the article

  • PHP: exif_imagetype() not working?

    - by Karem
    I have this extension checker: $upload_name = "file"; $max_file_size_in_bytes = 8388608; $extension_whitelist = array("jpg", "gif", "png", "jpeg"); /* checking extensions */ $path_info = pathinfo($_FILES[$upload_name]['name']); $file_extension = $path_info["extension"]; $is_valid_extension = false; foreach ($extension_whitelist as $extension) { if (strcasecmp($file_extension, $extension) == 0) { $is_valid_extension = true; break; } } if (!$is_valid_extension) { echo "{"; echo "error: 'ext not allowed!'\n"; echo "}"; exit(0); } And then I added this: if (exif_imagetype($_FILES[$upload_name]['name']) != IMAGETYPE_GIF OR exif_imagetype($_FILES[$upload_name]['name']) != IMAGETYPE_JPEG OR exif_imagetype($_FILES[$upload_name]['name']) != IMAGETYPE_PNG) { echo "{"; echo "error: 'This is no photo..'\n"; echo "}"; exit(0); } As soon as I added this to my image upload function, the function stopped working. I don't get any errors, not even the one I made myself ("This is no photo"), so what could be wrong? Just checked with my host: they support the function exif_imagetype().
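
    Two things in that snippet look like likely culprits (offered as a hedged guess, not the article's answer): exif_imagetype() is given the client-supplied file name instead of the uploaded temp file, and the chain of != comparisons joined with OR is true for every file. A sketch of the same check with both adjusted:

    ```php
    <?php
    // Sketch: validate the uploaded temp file and require one of the allowed types.
    $upload_name   = "file";
    $allowed_types = array(IMAGETYPE_GIF, IMAGETYPE_JPEG, IMAGETYPE_PNG);

    $detected = exif_imagetype($_FILES[$upload_name]['tmp_name']); // tmp_name, not name

    if ($detected === false || !in_array($detected, $allowed_types, true)) {
        echo "{";
        echo "error: 'This is no photo..'\n";
        echo "}";
        exit(0);
    }
    ```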

    Read the article

  • Does the Java Memory Model (JSR-133) imply that entering a monitor flushes the CPU data cache(s)?

    - by Durandal
    There is something that bugs me with the Java memory model (if I even understand everything correctly). If there are two threads A and B, there are no guarantees that B will ever see a value written by A, unless both A and B synchronize on the same monitor. For any system architecture that guarantees cache coherency between threads, there is no problem. But if the architecture does not support cache coherency in hardware, this essentially means that whenever a thread enters a monitor, all memory changes made before must be committed to main memory, and the cache must be invalidated. And it needs to be the entire data cache, not just a few lines, since the monitor has no information about which variables in memory it guards. But that would surely impact performance of any application that needs to synchronize frequently (especially things like job queues with short running jobs). So can Java work reasonably well on architectures without hardware cache-coherency? If not, why doesn't the memory model make stronger guarantees about visibility? Wouldn't it be more efficient if the language required information about what is guarded by a monitor? As I see it, the memory model gives us the worst of both worlds: the absolute need to synchronize, even if cache coherency is guaranteed in hardware, and on the other hand bad performance on incoherent architectures (full cache flushes). So shouldn't it be more strict (require information about what is guarded by a monitor) or more loose and restrict potential platforms to cache-coherent architectures? As it is now, it doesn't make too much sense to me. Can somebody clear up why this specific memory model was chosen? EDIT: My use of "strict" and "loose" was a bad choice in retrospect. I used "strict" for the case where fewer guarantees are made and "loose" for the opposite. To avoid confusion, it's probably better to speak in terms of stronger or weaker guarantees.
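
    For readers skimming, a minimal sketch of the guarantee the question describes: thread B is only required to see thread A's write if both pass through the same monitor (or another happens-before edge such as a volatile field); with a plain field and no synchronization there is no such obligation.

    ```java
    // Sketch of the rule under discussion; not tied to any particular JVM or hardware.
    class SharedBox {
        private int value;                        // plain field: no visibility guarantee alone
        private final Object lock = new Object();

        void write(int v) {                       // called by thread A
            synchronized (lock) { value = v; }    // unlocking publishes the write
        }

        int read() {                              // called by thread B
            synchronized (lock) { return value; } // locking the same monitor sees it
        }
    }
    ```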

    Read the article

  • Why does the VS2005 debugger not report "base." values properly? (was "Why is this if statement fail

    - by Rawling
    I'm working on an existing class that is two steps derived from System.Windows.Forms.Combo box. The class overrides the Text property thus: public override string Text { get { return this.AccessibilityObject.Value; } set { if (base.Text != value) { base.Text = value; } } } The reason given for that "get" is this MS bug: http://support.microsoft.com/kb/814346 However, I'm more interested in the fact that the "if" doesn't work. There are times where "base.Text != value" is true and yet pressing F10 steps straight to the closing } of the "set" and the Text property is not changed. I've seen this both by just checking values in the debugger, and putting a conditional breakpoint on that only breaks when the "if" statement's predicate is true. How on earth can "if" go wrong? The class between this and ComboBox doesn't touch the Text property. The bug above shouldn't really be affecting anything - it says it's fixed in VS2005. Is the debugger showing different values than the program itself sees? Update I think I've found what is happening here. The debugger is reporting value incorrectly (including evaluating conditional breakpoints incorrectly). To see this, try the following pair of classes: class MyBase { virtual public string Text { get { return "BaseText"; } } } class MyDerived : MyBase { public override string Text { get { string test = base.Text; return "DerivedText"; } } } Put a breakpoint on the last return statement, then run the code and access that property. In my VS2005, hovering over base.Text gives the value "DerivedText", but the variable test has been correctly set to "BaseText". So, new question: why does the debugger not handle base properly, and how can I get it to?

    Read the article

  • Please suggest other ways of communicating between server & client.

    - by Geo
    I'm writing a TCP chat server (the programming language does not matter). It's a school project for my nephew, so it won't be released, and all questions I'm asking are just for my knowledge :). Some of the things it will support: chatting between users (doh), being multithreaded, and sending each other files. I know I could easily get away with all the stuff above if I go with serialization, and just send objects from client to server and back. But, if I do that, it will be limited to a specific programming language (meaning clients written in other programming languages may not be able to deserialize the objects). What would be the way to go so that other clients written in other languages could be supported? One way to go, off the top of my head, would be this direction: the server and the client communicate by sending messages and chunks (for lack of better names). Here's what I mean by this: every time the client/server wants to send something (text message or file) it will first send a simple text message (newline terminated) with the number of chunks it will send. Example: command 4,20,30,40,50 - where command would be something like instant-message or file, 4 would be the number of chunks to be sent, 20 would be the size in bytes of the first chunk, 30 of the 2nd, and so forth. After the message is sent, the client/server will start sending the chunks (of the sizes mentioned in the message). What do you think about implementing the client/server communication this way? What better options are there?
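
    To make the proposed header concrete, a small hedged sketch (written in Java purely for illustration, since the question is language-agnostic) of writing the "command count,size1,size2,..." line followed by the raw chunk bytes:

    ```java
    import java.io.IOException;
    import java.io.OutputStream;
    import java.util.List;

    public final class Framing {
        // Sketch of the question's own scheme: an ASCII header line, then raw chunks.
        public static void sendFramed(OutputStream out, String command, List<byte[]> chunks)
                throws IOException {
            StringBuilder header = new StringBuilder(command).append(' ').append(chunks.size());
            for (byte[] chunk : chunks) {
                header.append(',').append(chunk.length);
            }
            header.append('\n');
            out.write(header.toString().getBytes("US-ASCII"));
            for (byte[] chunk : chunks) {
                out.write(chunk);              // payload follows in header order
            }
            out.flush();
        }
    }
    ```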

    Read the article

  • Organizing development teams

    - by Patrick
    A long time ago, when my company was much smaller, dividing the development work over teams was quite easy: the 'application' team developed the application-specific logic (often requiring deep insight into specific industry problems), and the 'generic' team developed the parts that were common/generic for all applications (user interface related stuff, database access, low-level Windows stuff, ...). Over the years the boundaries between the teams have become fuzzy: the 'application' teams often write application-specific functionality with a 'generic' part, so instead of asking the 'generic' team to write that part for them, they write it themselves to speed up development, then donate it to the 'generic' team; the 'generic' team's focus seems to be more 'maintenance oriented'. All of the 'very generic' code has already been written, so no new development is needed in it, but instead they continuously have to support all the functionality donated by the application teams. All this seems to indicate that it's not a good idea anymore to have this split in teams. Maybe the 'generic' team should evolve into a 'software quality' team (defining and guarding the rules for writing good quality software), or into a 'software deployment' team (defining how software should be deployed, installed, ...). How do you split up the work in different teams if you have different applications? Does everybody write generic code and donate it to a central 'generic' team? Does everybody write generic code, but nobody 'manages' it (everybody is the owner)? Is generic code written by a 'generic' team only, so the applications have to wait until the 'generic' team delivers the generic part (via a library, via a DLL)? Is there no overlap in code between the different applications? Some other way? Notice that the advantage of having the mix (allowing everybody to write everywhere in the code) is that code is written in a more flexible way, and it's easier to debug since you can easily step into the 'generic' code in the debugger. But the big (and maybe only) disadvantage is that this generic code may become nobody's responsibility if there is no clear team that manages it anymore. What is your vision?

    Read the article

  • ESPN API - How can I retrieve college basketball conferences using the Teams API?

    - by nomad
    The support forums on ESPN.com recommend using Stack Overflow with the ESPN tag. That's why I'm here. I'm trying to obtain a list of all NCAA college basketball teams using ESPN's Teams API. I started with this GET request: http://api.espn.com/v1/sports/basketball/mens-college-basketball/teams?apikey=MY_API_KEY That gave me a list of teams, but many of them are missing. For example, there is no Nebraska. So then I thought that maybe I need to get a list of teams by conference. So I read this in the documentation: GROUPS: Allows for filtering by "group" or division, e.g. AL East, NFC South, etc. For group IDs and their corresponding values, make a request to http://developer.espn.com/v1/{resource}/leagues. Not applicable to golf and tennis. So then I try to make a request to http://developer.espn.com/v1/basketball/mens-college-basketball/leagues?apikey=MY_API_KEY and it says the page does not exist. Is this a bug or user error?

    Read the article

  • How can I record and save Flash videos with an H.264 codec

    - by JimHendriks
    Right now I am working on an AIR 3.2 application which lets you stream a video to a Flash Media Server and saves it on a hard drive. This sequence works fine with the standard Sorenson codec, but I want to use H.264 for my videos. I found lots of example code and implemented it in my code, but when I record a video of myself I am unable to re-watch it afterwards. I found how to implement H.264 encoding in a realeyes blog post here. My code is here. It saves the video as a .f4v file, but my browser (I've tried the latest versions of both Chrome and Firefox, with the latest Flash) and also VLC are unable to load the video. I also used a program called Movie Player which is able to open the file but can only show the first frame and the audio. Neither am I able to upload the video to YouTube, because they do not support the file extension. Here is an example video file it saved: H264Test1.f4v. My question is: how do I stream and save the video, using the H.264 codec, in a format that I am able to re-watch?

    Read the article

  • Does video tag (HTML 5) injection via JavaScript work in any browsers?

    - by JoshNaro
    I'm trying to dynamically spawn a video element on a page using JavaScript. JavaScript <script type="text/javascript"> $(document).ready(function() { var video = $(document.createElement('video')) .attr('id', 'VideoElement') .attr('controls', 'controls') .attr('src', 'videopath.mp4') // Changed 'href' attribute to 'src' .css({ width: 640, height: 360 }); $('#VideoContainer').append(video); }); HTML <body> <div id="VideoContainer"></div> </body> In Firefox I get the video harness, but the actual video doesn't load. In IE8 the video harness doesn't even appear. Is HTML 5 just not supported enough to accomplish this yet? Edit: Got this to work with Artiom's fix. Looks like this works fine with Chrome and Safari. I'm using a codec Firefox doesn't support, so it doesn't work there; although I suspect it will work with a supported codec. IE8 sure enough doesn't work (high five IE).

    Read the article

  • jQuery: How can I get the HTML from one div to display in another?

    - by rockinthesixstring
    I've got a small script that detects if the user is using Google Chrome: var is_chrome = navigator.userAgent.toLowerCase().indexOf('chrome') > -1; if(is_chrome){ $('.ifChrome').attr('style', 'display:block;'); $('.ifChrome').html($('noscript > div').html()); }; If they are using Chrome, we want to display a div tag and show the HTML from a different div tag inside. <noscript> <div class="note"> Your browser does not properly support the WMD Markdown Editor.<br /> Please use <a href="/about/markdown" target="_blank">Markdown</a> to style your input. </div> </noscript> <div class="hidden ifChrome note"></div> What I'm trying to do is show the "not supported" text from inside my <noscript> tag to users who are using Google Chrome (because WMD Markdown doesn't work properly with Chrome). My existing JavaScript is not working. Any help would be greatly appreciated.
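
    As a hedged guess at the failure (not the article's confirmed answer): when scripting is enabled, browsers expose the contents of <noscript> as raw text rather than parsed elements, so the 'noscript > div' selector matches nothing. A sketch that reads the text and injects it as HTML:

    ```javascript
    var is_chrome = navigator.userAgent.toLowerCase().indexOf('chrome') > -1;
    if (is_chrome) {
        $('.ifChrome')
            .html($('noscript').text())          // noscript children come back as text, not nodes
            .attr('style', 'display:block;');    // same reveal as the original script
    }
    ```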

    Read the article

  • SQL: find entries in 1:n relation that don't comply with condition spanning multiple rows

    - by milianw
    I'm trying to optimize SQL queries in Akonadi and came across the following problem that is apparently not easy to solve with SQL, at least for me: Assume the following table structure (should work in SQLite, PostgreSQL, MySQL): CREATE TABLE a ( a_id INT PRIMARY KEY ); INSERT INTO a (a_id) VALUES (1), (2), (3), (4); CREATE TABLE b ( b_id INT PRIMARY KEY, a_id INT, name VARCHAR(255) NOT NULL ); INSERT INTO b (b_id, a_id, name) VALUES (1, 1, 'foo'), (2, 1, 'bar'), (3, 1, 'asdf'), (4, 2, 'foo'), (5, 2, 'bar'), (6, 3, 'foo'); Now my problem is to find entries in a that are missing name entries in table b. E.g. I need to make sure each entry in a has at least the name entries "foo" and "bar" in table b. Hence the query should return something similar to: a_id = 3 is missing name "bar" a_id = 4 is missing name "foo" and "bar" Since both tables are potentially huge in Akonadi, performance is of utmost importance. One solution in MySQL would be: SELECT a.a_id, CONCAT('|', GROUP_CONCAT(name ORDER BY NAME ASC SEPARATOR '|'), '|') as names FROM a LEFT JOIN b USING( a_id ) GROUP BY a.a_id HAVING names IS NULL OR names NOT LIKE '%|bar|foo|%'; I have yet to measure the performance tomorrow, but I severely doubt it will be fast for tens of thousands of entries in a and thrice as many in b. Furthermore, we want to support SQLite and PostgreSQL where, to my knowledge, the GROUP_CONCAT function is not available. Thanks, good night.
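
    For comparison, a hedged, engine-portable sketch (no GROUP_CONCAT) that reports the missing names directly: build the grid of (a_id, name) pairs that ought to exist and keep the ones without a matching row in b. Whether it is fast enough on Akonadi-sized tables would still need measuring.

    ```sql
    -- Sketch: required names are listed inline; an index on b(a_id, name) would help.
    SELECT a.a_id, req.name AS missing_name
    FROM a
    CROSS JOIN (SELECT 'foo' AS name UNION ALL SELECT 'bar') AS req
    LEFT JOIN b ON b.a_id = a.a_id AND b.name = req.name
    WHERE b.b_id IS NULL
    ORDER BY a.a_id, req.name;
    ```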

    Read the article

  • JMS message. Model to include data or pointers to data?

    - by John
    I am trying to resolve a design difference of opinion where neither of us has experience with JMS. We want to use JMS to communicate between a j2ee application and the stand-alone application when a new event occurs. We would be using a single point-to-point queue. Both sides are Java-based. The question is whether to send the event data itself in the JMS message body or to send a pointer to the data so that the stand-alone program can retrieve it. Details below. I have a j2ee application that supports data entry of new and updated persons and related events. The person records and associated events are written to an Oracle database. There are also stand-alone, separate programs that contribute new person and event records to the database. When a new event occurs through any of 5-10 different application functions, I need to notify remote systems through an outbound interface using an industry-specific standard messaging protocol. The outbound interface has been designed as a stand-alone application to support scalability through asynchronous operation and by moving it to a separate server. The j2ee application currently has most of the data in memory at the time the event is entered. The data would consist of approximately 6 different objects; a person object and some with multiple instances for an average size in the range of 3000 to 20,000 bytes. Some special cases could be many times this amount. From a performance and reliability perspective, should I model the JMS message to pass all the data needed to create the interface message, or model the JMS message to contain record keys for the data and have the stand-alone Java application retrieve the data to create the interface message?
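
    Purely to make the "send a pointer" variant concrete (a hedged sketch; the queue name and key names are invented, and it takes the standard javax.jms API as a given): the producer would put only the record keys on the queue and leave the outbound interface to load the rows itself.

    ```java
    import javax.jms.*;

    public final class EventNotifier {
        // Sketch of the "pointer" style message: keys only, no person/event payload.
        public static void publish(Connection connection, long personId, long eventId)
                throws JMSException {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            try {
                MessageProducer producer =
                        session.createProducer(session.createQueue("outbound.events"));
                MapMessage msg = session.createMapMessage();
                msg.setString("eventType", "PERSON_EVENT");  // lets the consumer dispatch
                msg.setLong("personId", personId);           // keys to re-read from the database
                msg.setLong("eventId", eventId);
                producer.send(msg);
            } finally {
                session.close();
            }
        }
    }
    ```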

    Read the article

  • How to do something when AVQueuePlayer finishes the last playeritem

    - by user1634529
    I've got an AVQueuePlayer which I'm creating from an array of 4 AVPlayerItems, and it all plays fine. I want to do something when the last item in the queue finishes playing. I've looked at a load of answers on here and this is the one that looks most promising for what I want: The best way to execute code AFTER a sound has finished playing In my button handler I have this code: static const NSString *ItemStatusContext; [thePlayerItemA addObserver:self forKeyPath:@"status" options:0 context:&ItemStatusContext]; [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(playerItemDidReachEnd) name:AVPlayerItemDidPlayToEndTimeNotification object:thePlayerItemA]; theQueuePlayer = [AVQueuePlayer playerWithPlayerItem:thePlayerItemA]; [theQueuePlayer play]; and then I have a method to handle playerItemDidReachEnd: - (void)playerItemDidReachEnd:(NSNotification *)notification { // Do stuff here NSLog(@"IT REACHED THE END"); } But when I run this I get an Internal Inconsistency Exception: An -observeValueForKeyPath:ofObject:change:context: message was received but not handled. Key path: status Observed object: <AVPlayerItem: 0x735a130, asset = <AVURLAsset: 0x73559c0, URL = file://localhost/Users/mike/Library/Application%20Support/iPhone%20Simulator/5.0/Applications/A0DBEC13-2DA6-4887-B29D-B43A78E173B8/Phonics%2001.app/yes.mp3>> Change: { kind = 1; } What am I doing wrong?
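
    One hedged reading of that exception (not confirmed by the article): it is raised by NSObject's default -observeValueForKeyPath:... when a KVO observer is registered for "status" but the class never overrides that method; separately, the notification selector is missing its trailing colon. A sketch of the two likely adjustments:

    ```objc
    // Sketch: either drop the addObserver:forKeyPath: call, or handle the callback.
    - (void)observeValueForKeyPath:(NSString *)keyPath
                          ofObject:(id)object
                            change:(NSDictionary *)change
                           context:(void *)context
    {
        if (context == &ItemStatusContext) {
            NSLog(@"player item status is now %ld", (long)[(AVPlayerItem *)object status]);
            return;
        }
        [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
    }

    // And register the notification with a selector that takes the NSNotification:
    //   selector:@selector(playerItemDidReachEnd:)
    ```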

    Read the article

  • How to calculate how many lines (or records) can fit in the given page size

    - by devcoder
    My problem is to calculate how much data (a string) can fit into a given page size (in inches). I have an application which creates a plain vanilla HTML report without using any reporting controls. Now I have to provide paging support in this report. The report is dynamic in nature, i.e. the columns are decided at run time. Depending upon the page width, I want to wrap columns onto multiple lines. For example, if the page width is 8", I want to fit only the first 'n' columns on the first line, and the rest of the columns can be displayed on a second line (or more lines if required). For this I need to calculate how much data can fit in an 8" wide line. Similarly, I want to calculate the height of the data that can fit into the given page height. To summarize, how can I calculate how much data can fit into a given page size in inches? Note: The calculation should also consider the font, as it is decided at run time.
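
    One rough way to approach the width part (a sketch only, assuming a .NET context and GDI+ text measurement, which will only approximate how a browser actually renders the HTML): convert the page width from inches to device pixels and divide by a measured character width for the run-time font.

    ```csharp
    using System.Drawing;

    static class PageFit
    {
        // Sketch: estimate how many characters of the given font fit on a line
        // of pageWidthInches. "W" is used as a deliberately wide sample character.
        public static int CharsPerLine(Font font, double pageWidthInches)
        {
            using (var bitmap = new Bitmap(1, 1))
            using (var g = Graphics.FromImage(bitmap))
            {
                float widthPixels = (float)(pageWidthInches * g.DpiX); // inches -> pixels
                float charPixels  = g.MeasureString("W", font).Width;
                return (int)(widthPixels / charPixels);
            }
        }
    }
    ```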

    Read the article

  • iPhone application purchase verification -- possible?

    - by Sedate Alien
    The iPhone 3.0 SDK's StoreKit.framework provides support for in-app purchases to give the user additional content, functionality and so on. It is possible for an app to send the transactionReceipt property of SKPaymentTransaction objects to the developer's server for verification of successful purchasing before granting service. Is there any analogous SDK to verify the initial application purchase itself? A developer that wishes for their server to only provide services to genuine applications (i.e. not pirated) without using IAP could do so by verifying the application in this manner, e.g. ensure that only users with the correct transactionReceipt are catered for. I understand that this approach would still be vulnerable to replay attacks; a dedicated group of pirates could share a valid transactionReceipt. However, my server provides a consumable service to users, i.e. once they've connected and done the work, it needn't work a second time, so replay attacks are nullified. The service that my app provides is relatively niche. I could distribute it on the App Store as a free application that requires at least one IAP to do anything useful, but I am led to believe that this would be a very unpopular move among users as it would be considered misleading. If I distribute it as a paid app, I do not know how to ensure that only genuine apps can access the webservice. This is important as every invocation of the webservice costs me money! What are my options?

    Read the article

  • Designers, Expression or SharePoint Designer, and real source control

    - by David Lively
    I'm trying desperately to move from VSS to a real source control system. Options include TFS and SVN. My designers need to keep their ability to modify source files and instantly preview their changes in a browser without having to commit their changes. Using FPSE with VSS, this works flawlessly, since saving a file causes the copy in the working folder on the dev server to be updated, so they can just save and refresh their browser which is pointed at the dev server. The site in question consists of 350k+ lines of classic ASP code and some new ASP.NET MVC. They only need to be able to modify views within the MVC code, not C#. Though Expression includes a version of Cassini for local debugging, Cassini does not support classic ASP. Surely someone has solved this problem before. It can't be necessary to install IIS on each designer's machine (this is absolutely untenable). I need a way to have a common working folder on a dev webserver updated whenever someone saves a file locally, just like using FPSE. I'd rather not write an FPSE proxy that knows how to talk to TFS/SVN. Any suggestions? (I know I've asked this question in the past, but I haven't yet found a solution.)

    Read the article

  • Automate Field Entry in outside site

    - by JClaspill
    I am attempting to automate the entry of data into form fields. The problem is that this data (user/pass) is not known by the user. I'm not expressly hiding it from them, but they also don't need to know it. This is used to automate logins on several of our outside partner websites, which do not want our agents knowing their passwords. Sadly, most of these sites do not have any APIs I can work with... so I have to get the user logged in. I tried using an iframe and JavaScript, but I ran into the issue of security permissions denying it access. And sadly, our clients do not have access to add our domain to their sites (they seem to be 3rd party). Requirements: - Display webpage - Automatically enter data into fields Would be nice: - Automate signin similar to form.submit() - Flash/AJAX support. These seem to give the VB app issues. Is there a way to do this via JavaScript/HTML, and if not, do you have any recommendations for C#/PHP/ASP.NET options? PS: I am not sure what this technique is called, so Google isn't helping me, it seems. Please set me straight on the terminology of what I am actually trying to accomplish.

    Read the article

  • OcaIDE doesn't see JoCaml tools

    - by Surikator
    I'm having a problem while using OcaIDE in ocamlbuild mode. I'm trying to compile my own JoCaml sources. According to the JoCaml manual (bottom of page), to use ocamlbuild with JoCaml, I just need to add the -use-jocaml argument to ocamlbuild. Indeed, if I go to the root of my project and write ocamlbuild -use-jocaml foo.native it generates my executable just fine. However, in OcaIDE I get /bin/sh: jocamldep: command not found In OcaIDE, the -use-jocaml flag is passed in the "Other Flags" box (in Project Properties). And that certainly is working, as the complaint is precisely that it doesn't find jocaml stuff. The puzzling thing is that jocaml is installed and can be accessed from any random terminal window. For example, running jocamldep -modules foo.ml > foo.ml.depends on my project does generate the desired dependency file. So, it would seem I would have to configure OcaIDE and tell it where the JoCaml executables are, or something. This is done for OCaml, for example. But there is no place to do that for JoCaml. And it's really strange that, if jocamldep/jocamlc/etc. are all accessible from anywhere, OcaIDE wouldn't be able to pick them up. Any ideas? (I am aware I can write an ocamlbuild plugin and pass the flag in a "myocamlbuild.ml" file. I'll probably use that at a later stage, after I get familiar with ocamlbuild plugins. But here the question is about OcaIDE. EDIT: Actually, ocamlbuild plugins don't seem to be a solution as, although there is an option -use-jocaml in ocamlbuild to enforce jocaml use (and it works fine), the plugin system doesn't support it, i.e. jocaml is not in the list of options.)

    Read the article
