Search Results

Search found 13332 results on 534 pages for 'compatibility level'.

  • Memory issue regarding UIImageView on iPhone 4.0 / iPad

    - by Sagar Mane
    Hello all. My application is crashing due to low memory (it receives memory warnings at levels 1 and 2). To trace this I used Instruments and came up with the following points.

    Test environment: a single view controller added to the window.

    When I don't use a UIImageView, real memory used is 3.66 MB. When I use a UIImageView with an image of about 25 KB, real memory used is 4.24 MB, almost 560 KB extra compared to the version without the UIImageView, and it keeps growing as I add more UIImageViews to the view. Below is the sample code I am referring to for adding the UIImageView:

        UIImageView *iSplashImage = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"Default-Landscape.png"]];
        iSplashImage.frame = CGRectMake(0, 0, 320, 480);
        [self.window addSubview:iSplashImage];

    and in dealloc:

        if (iSplashImage) {
            [iSplashImage release];
            iSplashImage = nil;
        }

    The issue is that this 560 KB is never released, and after some time the application receives a low-memory warning. Can anyone point out whether I am missing something or doing something wrong? My application uses lots of images in one session. Thanks in advance, Sagar

  • Image 8-connectivity without excessive branching?

    - by shoosh
    I'm writing a low-level image processing algorithm which needs to do a lot of 8-connectivity checks for pixels. For every pixel I often need to check the pixels above it, below it, to its sides, and on its diagonals. On the edges of the image there are special cases where a pixel has only 5 or 3 neighbors instead of 8. The naive way is, for every access, to check whether the coordinates are in the valid range and, if not, return some default value. I'm looking for a way to avoid all these checks, since they introduce a large overhead to the algorithm. Are there any tricks to avoid them altogether?
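
    One common trick is to pad the image with a one-pixel border of a sentinel/default value, so that every real pixel always has 8 in-bounds neighbors and no per-access bounds checks are needed. A minimal sketch of that idea, assuming the image is a 2-D NumPy array (the fill value and the iteration pattern are assumptions, not from the post):

        import numpy as np

        def iter_eight_neighbourhoods(img, fill=0):
            # Pad by one pixel on every side with a default value, so the 3x3
            # window around any original pixel is always fully in bounds.
            padded = np.pad(img, 1, mode='constant', constant_values=fill)
            h, w = img.shape
            for y in range(1, h + 1):
                for x in range(1, w + 1):
                    window = padded[y - 1:y + 2, x - 1:x + 2]  # 3x3, no bounds checks
                    yield (y - 1, x - 1), window

    The cost is one extra row and column of memory on each side; edge pixels simply see the fill value where they have no real neighbor.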

  • NSIS Installer - Displaying different licences

    - by Wysawyg
    Heya, I'm trying to modify an existing NSIS install script so that different licence files are presented to the user depending on whether they are a new or an existing user. I have pre-existing code which detects an existing install in the .onInit section. However, I'm running into bumps trying to use the NSIS-provided licence screen, e.g.

        !insertmacro MUI_PAGE_LICENSE Content\Licence.rtf

    I would like to be able to choose between Licence.rtf and Licence2.rtf (though they'll be renamed to something more representative in the final version). I've tried using selectable sections that call functions which nest the !insertmacro, but that doesn't work because it needs to be at the base level of the script. I can't make the parameter runtime-definable, because NSIS needs to know what the file is at compile time in order to build it into the installer. I know I can roll my own custom page called from a function and do it that way, but I was wondering whether anyone has got NSIS working with MUI_PAGE_LICENSE and different licences. Thanks

  • What is needed to get Delphi back on top?

    - by Jim McKeeth
    Delphi 2009 is due in the next couple of months, which will be its 12th release since Turbo Pascal became Delphi in 1995. Despite continued innovation, it has not returned to the level of popularity it had before the Inprise fiasco. Many developers with Delphi backgrounds are moving to C#, and many legacy Delphi applications are being rewritten in C#, despite the fact that Delphi supports .NET and in many cases the existing application could be ported without a rewrite. Is it just a losing battle to compete against Microsoft's tools on their platform? Is there something CodeGear / Delphi can do, now that they are under new management, to regain market share? What can enthusiasts do to help? Why do you do Delphi programming, or why are you not doing Delphi programming?

  • PyQt: How to keep QTreeView nodes correctly expanded after a sort

    - by taynaron
    I'm writing a simple test program using QTreeModel and QTreeView for a more complex project later on. In this simple program, I have data in groups which may be collapsed or expanded, as one would expect in a QTreeView. The data may also be sorted by the various data columns (QTreeView.setSortingEnabled is True). Each tree item is a list of data, so the sort function implemented in the TreeModel class uses the built-in Python list sort:

        self.layoutAboutToBeChanged.emit()
        self.rootItem.childItems.sort(key=lambda x: x.itemData[col], reverse=order)
        for item in self.rootItem.childItems:
            item.childItems.sort(key=lambda x: x.itemData[col], reverse=order)
        self.layoutChanged.emit()

    The problem is that whenever I change the sorting of the root's child items (the tree is only 2 levels deep, so this is the only level with children), the nodes aren't necessarily expanded as they were before. If I change the sorting back without expanding or collapsing anything, the nodes are expanded as they were before the sorting change. Can anyone explain what I'm doing wrong? I suspect it's something to do with not properly reassigning QModelIndex to the sorted nodes, but I'm not sure.
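
    One likely explanation is that the view remembers which nodes are expanded via persistent model indexes, and a model that emits layoutAboutToBeChanged/layoutChanged is expected to remap those persistent indexes to the rows' new positions. A minimal sketch of that remapping, assuming PyQt4-era imports; itemFromIndex and indexFromItem are hypothetical helpers this model would need to provide to map between items and indexes:

        from PyQt4 import QtCore

        def sort(self, col, order=QtCore.Qt.AscendingOrder):
            self.layoutAboutToBeChanged.emit()

            # Remember which item each persistent index pointed at before sorting.
            old_indexes = self.persistentIndexList()
            old_items = [self.itemFromIndex(ix) for ix in old_indexes]      # hypothetical helper

            reverse = (order == QtCore.Qt.DescendingOrder)
            self.rootItem.childItems.sort(key=lambda x: x.itemData[col], reverse=reverse)
            for item in self.rootItem.childItems:
                item.childItems.sort(key=lambda x: x.itemData[col], reverse=reverse)

            # Point the persistent indexes at the items' new rows so the view's
            # expansion bookkeeping follows them.
            new_indexes = [self.indexFromItem(item) for item in old_items]  # hypothetical helper
            self.changePersistentIndexList(old_indexes, new_indexes)

            self.layoutChanged.emit()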

  • Algorithm for image comparison

    - by Rajnikant
    Please consider the following use case: I have one bigger image; let's call it the master image. Now, from somewhere else, I get one small image. I want to check whether this small image is a subset of the master image or not. The important points are:

    1. The smaller image might be in a different file format.
    2. The smaller image might be captured from a comparatively different view.
    3. The smaller image may have a different light intensity.

    Given the current level of algorithmic/computational advancement, what level of accuracy could I expect? Is there any algorithm or open source implementation that handles this? Thanks, Rajnikant
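
    Feature-based matching is one standard way to tolerate format, viewpoint and lighting differences better than raw pixel comparison. A minimal sketch, assuming OpenCV 3+ with its Python bindings; the distance cutoff and match-count threshold are arbitrary placeholders that would need tuning:

        import cv2

        def probably_contains(master_path, small_path, min_good_matches=20):
            master = cv2.imread(master_path, cv2.IMREAD_GRAYSCALE)
            small = cv2.imread(small_path, cv2.IMREAD_GRAYSCALE)
            if master is None or small is None:
                return False

            # ORB keypoints/descriptors are reasonably robust to lighting and
            # moderate viewpoint changes, and ignore the on-disk file format.
            orb = cv2.ORB_create()
            kp_small, des_small = orb.detectAndCompute(small, None)
            kp_master, des_master = orb.detectAndCompute(master, None)
            if des_small is None or des_master is None:
                return False

            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = matcher.match(des_small, des_master)
            good = [m for m in matches if m.distance < 50]   # arbitrary cutoff
            return len(good) >= min_good_matches

    Accuracy depends heavily on how different the viewpoint and lighting really are; a homography check on the matched keypoints (cv2.findHomography with RANSAC) is a usual next step for confirming a genuine sub-image.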

  • What features are important in a programming language for young beginners?

    - by NoMoreZealots
    I was talking with some of the mentors in a local robotics competition for 7th and 8th grade kids. The robot was using PBASIC and the Parallax BASIC Stamp. One of the major issues was that this was a short-term project that required building the robot, teaching the kids to program in PBASIC, and having them program the robot, all in only 2 hours or so a week over a couple of months. PBASIC is kind of nice in that it has built-in features to do everything, but that makes information overload possible. My thought is that simplicity is key. When you have kids struggling to grasp

        if X > 10 then <DOSOMETHING>

    there is not much point in throwing "proper" object-oriented programming at them. What are the essentials needed to foster an interest in programming?

  • Algorithm - Numbering for TOC (Table of Contents)

    - by belisarius
    I want to implement a VBA function to number Excel rows based upon the grouping depth of the row, but I think a general algorithm for generating TOCs is more interesting. The problem is: given a list of "indented" lines such as

        One
          Two
            Three
              Four
          Five
        Six

    (the "indentation level" may be assumed to be known and part of the input data), generate the following output:

        1. One
        1.1 Two
        1.1.1 Three
        1.1.1.1 Four
        1.2 Five
        2. Six

    Of course my code is up and running ... and also hidden under THWoS (The Heavy Weight of Shame).
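
    The numbering itself can be done with a stack of counters over (level, text) pairs. A minimal Python sketch of the idea (the question targets VBA, so this only illustrates the algorithm, not the requested implementation):

        def number_toc(entries):
            """entries: iterable of (level, text), where level 0 is the top level."""
            counters = []
            out = []
            for level, text in entries:
                del counters[level + 1:]           # leaving a deeper level resets its counters
                while len(counters) <= level:      # entering a deeper level starts it at 0
                    counters.append(0)
                counters[level] += 1
                label = '.'.join(str(c) for c in counters[:level + 1])
                if level == 0:
                    label += '.'                   # match the "1." / "2." top-level style
                out.append(label + ' ' + text)
            return out

        print('\n'.join(number_toc([(0, 'One'), (1, 'Two'), (2, 'Three'),
                                    (3, 'Four'), (1, 'Five'), (0, 'Six')])))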

  • Virtual Earth Shape Rendering Performance

    - by Mike
    I am overlaying a transparent image on my VEMap control by rendering it as a single VEShape. The shape changes size dynamically depending on the zoom level of my map and can be as large as 4000 x 4000 px. In older browsers such as IE6 and early versions of Firefox 2.x, map control performance degrades rapidly when my shape gets larger than 1500 x 1500 px: the mouse pointer moves slowly and the map responds very slowly to events. I don't see this issue at all in newer browsers (IE7+). Are there any workarounds to boost the performance of rendering a large shape for IE6 users?

  • Why does the CSS Transitions module not support image-to-image transitions?

    - by Kai Sellgren
    Hi, I've read the spec for the CSS Transitions Module Level 3 and I'd like to know why it does not support image-based transitions. According to the draft, background-image transitions are only supported when used with gradients. Both WebKit and Gecko seem to follow this practice. I see this as a major drawback: HTML 5 and CSS 3 could become the killer of Flash, but if I can't even transition between two images, I don't see how one could have beautiful menus without Flash.

  • Good books on Sybase ASE 15?

    - by Ilya Kochetov
    We need to get some good books on Sybase ASE 15 for our developers. The people on the team have previous experience with different SQL flavors (MS SQL, MySQL, Informix and Oracle), but no one has worked with Sybase before. Therefore I am looking for two kinds of books:

    1. A book for developers on how to use Sybase for queries, sprocs, views, etc. It has to be a book for professionals, not something like 'Learn SQL in 21 Days'.
    2. A book for the DB administrator on how to maintain the database. This could be at any level, and a dummies guide would not go wrong :)

    Thank you

  • In Drupal 6, is there a way to take a custom field from the latest post to a taxonomy term, and display it?

    - by user278457
    The title for this question pretty much sums up what I'm asking. I've got a list of taxonomy terms, and I'm using a view to display the latest post to each one. I'd like to also display a custom field set up in CCK just under this. Currently, I'm just using the "date updated" of the taxonomy term itself, which was easy to set up in Views. I'd like to drill a little deeper and get the custom "event date" field I've added to the content type last posted to the taxonomy term I'm "viewing". I've got a feeling I'm going to have to write my own database query for this.

        If (I can avoid that) { How do I set up such a view? }
        Else { What's the best practice for including lower-level database queries alongside views? }

  • [Newbie] How to join MySQL tables

    - by Ivan
    I have an old table like this:

        user> id | name | address | comments

    Now I have to create an "alias" table to allow some users to have an alias name. I've created a new table 'user_alias' like this:

        user_alias> name | user

    But now I have a problem, due to my poor SQL level... how do I join both tables to generate something like this?

        1 | my_name    | my_address    | my_comments
        1 | my_alias   | my_address    | my_comments
        2 | other_name | other_address | other_comments

    I mean, I want to make a SELECT query that returns, in the same format as the "user" table, ALL users and ALL aliases. Something like this:

        SELECT user.* FROM user LEFT JOIN user_alias ON `user`=`id`

    but it doesn't work for me.
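
    The shape of output described here is usually produced with a UNION of the real-name rows and the alias rows rather than a single join. A sketch using Python's sqlite3 module purely to illustrate the query, with the table and column names taken from the question (the database path and helper function are assumptions):

        import sqlite3

        # One row per user under the real name, plus one extra row per alias,
        # all in the same column layout as the user table.
        QUERY = """
        SELECT u.id, u.name, u.address, u.comments
        FROM user AS u
        UNION ALL
        SELECT u.id, a.name, u.address, u.comments
        FROM user AS u
        JOIN user_alias AS a ON a.user = u.id
        """

        def all_users_and_aliases(db_path):
            with sqlite3.connect(db_path) as conn:
                return conn.execute(QUERY).fetchall()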

  • Error with Security Exception

    - by Alexander
    I am getting the following error on my page:

        Security Exception
        Description: The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application's trust level in the configuration file.

    The problem is with the following code:

        SmtpClient mailClient = new SmtpClient("smtp.gmail.com", 587);

    What's weird is that when testing on my localhost everything works fine, but when I put it on my 1and1 web host it generates the error above. I contacted their support and here's their answer:

        "We do check the error logs and the operation require a FullTrust environment which currently fall under restriction on .NET Framework."

    What should I do?

  • Lua API for TokyoTyrant

    - by jideel
    Hi SO folks, I haven't managed to find a Lua client/API for TokyoTyrant. Such an API exists for TokyoCabinet, but not for TT, while Perl and Ruby APIs do exist for TT. TT provides a native binary protocol, a memcached-compatible protocol, and an HTTP-oriented protocol. So my questions are:

    1. Do you think using the memcached protocol (via luamemcached) or the HTTP protocol (via LuaSocket) is "enough" for most simple usage, so that a native Lua API is not necessary? (The app is a simple UUID storage/distributor.)
    2. Does it make sense to not use TokyoTyrant, but only TokyoCabinet, and use Lua at the application level to provide network and concurrent access to TC, using, say, Copas? (Copas is, from their website, "a dispatcher based on coroutines that can be used by TCP/IP servers.")

    Thanks.

  • Has anyone got a simple step-by-step Mozilla plugin tutorial?

    - by s1mm0t
    I'm trying to create a Mozilla browser plugin for the sole purpose of detecting with JavaScript whether or not an application that I have written is installed. This was inspired by another Stack Overflow question and answer on a similar subject. All I really want is a HelloWorld-type example, as what I need to write will be even simpler than that. There is a lot of information on the Mozilla website, but as a plugin noob and a C++ novice I'm experiencing information overload. I just need a step-by-step "this is how to create your first simple plugin". I have already written the IE equivalent by following this ATL tutorial; that is the kind of level of guide that I would ideally like to find. Please note, this is specifically about Mozilla plugins and not extensions - Googling this subject brings up a lot of information about extensions in addition to plugins.

  • Difference between Document-oriented-DB and Bigtable clones

    - by chen
    We are looking for a suitable storage engine for our weblog history data. We looked at Bigtable's paper and understand that it suits us well. However, I also understand that document-oriented DBs such as MongoDB seem to provide a little more schema power, i.e., they can model our data as well. I wonder how people choose a scalable NoSQL DB nowadays: I've read enough articles like "we looked at A, B and C, and we decided to use C", but I'd like to see some benchmark numbers. What I am saying is that if MongoDB and the like can provide the same level of performance as Bigtable clones, why don't web companies choose them (preparing to deal with potentially more complex data problems)? Thanks. By the way, I read an article (which convinced me at the moment) saying Cassandra does not fit the M/R operation; any comments?

  • Generate HTML To PDF Control for the .NET application

    - by Karan
    Has anyone used any open source or paid .NET control which does the conversion from HTML to a PDF file? At the moment I am using the Winnovative converter control, but it has a performance limitation when generating bulk pages (more than 1000) in the PDF; the limitation shows up when we use bigger images in the HTML content. For the last 4 months I've been working with the Winnovative control and have found plenty of major bugs in it. For a small application and light usage Winnovative is good, but not at the level where the application will be used by thousands of clients. Please suggest.

  • iPhone SDK: CLLocationAccuracy. What constants map to what positioning technology?

    - by buzzappsoftware
    With respect to the CLLocationManager docs, these are the constant values you can use to specify the accuracy of a location:

        extern const CLLocationAccuracy kCLLocationAccuracyBestForNavigation;
        extern const CLLocationAccuracy kCLLocationAccuracyBest;
        extern const CLLocationAccuracy kCLLocationAccuracyNearestTenMeters;
        extern const CLLocationAccuracy kCLLocationAccuracyHundredMeters;
        extern const CLLocationAccuracy kCLLocationAccuracyKilometer;
        extern const CLLocationAccuracy kCLLocationAccuracyThreeKilometers;

    Given that, I have the following questions:

    1. What triangulation method (GPS, cell tower or Wi-Fi) corresponds to each accuracy level?
    2. Does the iPhone SDK utilize the Skyhook Wireless API?
    3. For kCLLocationAccuracyBestForNavigation, there is a note stating the phone must be plugged in. Is this enforced, or is it just warning the developer that the battery is likely to drain quickly from using the GPS receiver?

    Thanks in advance.

  • mod_rewrite to find missing /img/foo.jpg in /img/f/

    - by Ambrose
    I've got a folder of images which is reaching critical mass after a few years. I want to move the images into alphabetical folders, so that /img/foo.jpg goes into /img/f/foo.jpg, /img/bar.jpg goes into /img/b/bar.jpg, and so on. In order to make the transition smooth, and to allow the manual uploaders to keep putting stuff into the top level, I'd like to use mod_rewrite to do this: if /img/foo.jpg exists, serve it up; if not, look for it in /img/f/foo.jpg. Thanks for any suggestions. For the record, no, I don't think we need to go to /img/f/fo/foo.jpg just yet.

  • VB.net Edit-And-Continue: ignore "unable to apply this change while debugging"

    - by FastAl
    When using VB.NET (2008) and paused while debugging, Edit and Continue is a great time-saver. However, if you change any module/class-level information (a variable, a sub/function signature, etc.), you get an error message like "unable to apply this change while debugging". While I can understand the technical challenge in making this work (and why it would be hard), it leaves me in a tight spot with just a few options:

    1. Restart, recompile, and get the program back to the same state.
    2. Continue debugging without making the change, and risk forgetting it.
    3. Type up a reminder note to make the change.

    All of which are annoying. Now, I know that option 4, "just actually make the change", may not be possible, but does anybody know how to enable the following 'technically easy' alternative?

    4. Let me change the code and get it flagged with the purple squiggly underline so I can save it, but just ignore the change until recompile.

    I have checked Tools | Options | Debugging | Edit and Continue; nothing appears to let me do this. Thanks!

  • Getting Line Numbers for Errors Thrown in SQL Server CLR Runtime

    - by fetucine53
    Hi all, I've created a CLR stored procedure that I'm running on SQL 2k5, and I'm wondering if there's any way to get line numbers for exceptions thrown by the .NET code. When an exception is thrown, I get something along the lines of:

        Msg 6522, Level 16, State 1, Procedure myProcedure, Line 0
        A .NET Framework error occurred during execution of user-defined routine or aggregate "myProcedure":
        System.Exception: testing exception
        System.Exception:
           at DummyDLL.myProcedure (String dummyInput)

    Is there some way I can load the assembly so that it gives me specific line numbers rather than just the function in which the error was thrown? The assembly itself was compiled with a .pdb, but SQL 2k5 doesn't appear to be reading it in when I load the assembly initially. Thanks!

  • Linked Server related

    - by rmdussa
    I have two instances of SQL Server:

    Server1 (SQL Server 2008)
    Server2 (SQL Server 2005)

    I am executing a stored procedure from Server1 which references tables on Server2. It is working fine in my test environment: Server1 runs Vista SP2, SQL Server 2008; Server2 runs Windows XP SP2, SQL Server 2005. However, it is not working in the production environment: Server1 runs Vista SP1, SQL Server 2008; Server2 runs Windows XP SP2, SQL Server 2005. The error message I receive is:

        OLE DB provider "SQLNCLI10" for linked server "Server2" returned message "No transaction is active.".
        Msg 7391, Level 16, State 2, Line 21
        The operation could not be performed because OLE DB provider "SQLNCLI10" for linked server "Server2" was unable to begin a distributed transaction.

  • Why darcs instead of git?

    - by Ctrl Alt D-1337
    Using pure functional languages can have a lot of benefits over using impure imperative ones, but low-level systems languages will generally let you achieve much greater performance, especially when they are imperative, because they allow you to specify the exact steps by which the CPU should compute the result. If there were ever a list of tools where high performance is an absolute must, I would put source version control systems right at the top of that list, and git achieves this very well; performance is not its only advantage over many other types of version control systems anyway. The git team handle the unsafe C code very well, and I never worry about the type system or any other features of the language it is written in, so why is it that a lot of Haskell developers insist on darcs when they will only be using the finished product?

  • Node.js vs PHP processing speed

    - by Cody Craven
    I've been looking into Node.js recently and wanted to see a true comparison of processing speed for PHP vs Node.js. In most of the comparisons I had seen, Node trounced Apache/PHP setups handily. However, all of the tests were small 'hello worlds' that would not accurately reflect any webpage's markup. So I decided to create a basic HTML page with 10,000 "hello world" paragraph elements. In these tests, Node with Cluster was beaten to a pulp by PHP on Nginx utilizing PHP-FPM. So I'm curious whether I am misusing Node somehow or if Node is really just this bad at processing power. Note that my results were equivalent when outputting "Hello world\n" with text/plain instead of the HTML, but I only included the HTML as it's closer to the use case I was investigating.

    My testing box:

        Core i7-2600 Intel CPU (has 8 threads with 4 cores)
        8GB DDR3 RAM
        Fedora 16 64bit
        Node.js v0.6.13
        Nginx v1.0.13
        PHP v5.3.10 (with PHP-FPM)

    My test scripts:

    Node.js script

        var cluster = require('cluster');
        var http = require('http');
        var numCPUs = require('os').cpus().length;

        if (cluster.isMaster) {
          // Fork workers.
          for (var i = 0; i < numCPUs; i++) {
            cluster.fork();
          }
          cluster.on('death', function (worker) {
            console.log('worker ' + worker.pid + ' died');
          });
        } else {
          // Worker processes have an HTTP server.
          http.Server(function (req, res) {
            res.writeHead(200, {'Content-Type': 'text/html'});
            res.write('<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n');
            for (var i = 0; i < 10000; i++) {
              res.write('<p>Hello world</p>\n');
            }
            res.end('</body>\n</html>');
          }).listen(80);
        }

    This script is adapted from Node.js' documentation at http://nodejs.org/docs/latest/api/cluster.html

    PHP script

        <?php
        echo "<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n";
        for ($i = 0; $i < 10000; $i++) {
            echo "<p>Hello world</p>\n";
        }
        echo "</body>\n</html>";

    My results

    Node.js

        $ ab -n 500 -c 20 http://speedtest.dev/
        This is ApacheBench, Version 2.3 <$Revision: 655654 $>
        Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Licensed to The Apache Software Foundation, http://www.apache.org/

        Benchmarking speedtest.dev (be patient)
        Completed 100 requests
        Completed 200 requests
        Completed 300 requests
        Completed 400 requests
        Completed 500 requests
        Finished 500 requests

        Server Software:
        Server Hostname:        speedtest.dev
        Server Port:            80

        Document Path:          /
        Document Length:        190070 bytes

        Concurrency Level:      20
        Time taken for tests:   14.603 seconds
        Complete requests:      500
        Failed requests:        0
        Write errors:           0
        Total transferred:      95066500 bytes
        HTML transferred:       95035000 bytes
        Requests per second:    34.24 [#/sec] (mean)
        Time per request:       584.123 [ms] (mean)
        Time per request:       29.206 [ms] (mean, across all concurrent requests)
        Transfer rate:          6357.45 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    0   0.2      0       2
        Processing:    94  547 405.4    424    2516
        Waiting:        0  331 399.3    216    2284
        Total:         95  547 405.4    424    2516

        Percentage of the requests served within a certain time (ms)
          50%    424
          66%    607
          75%    733
          80%    813
          90%   1084
          95%   1325
          98%   1843
          99%   2062
         100%   2516 (longest request)

    PHP/Nginx

        $ ab -n 500 -c 20 http://speedtest.dev/test.php
        This is ApacheBench, Version 2.3 <$Revision: 655654 $>
        Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Licensed to The Apache Software Foundation, http://www.apache.org/

        Benchmarking speedtest.dev (be patient)
        Completed 100 requests
        Completed 200 requests
        Completed 300 requests
        Completed 400 requests
        Completed 500 requests
        Finished 500 requests

        Server Software:        nginx/1.0.13
        Server Hostname:        speedtest.dev
        Server Port:            80

        Document Path:          /test.php
        Document Length:        190070 bytes

        Concurrency Level:      20
        Time taken for tests:   0.130 seconds
        Complete requests:      500
        Failed requests:        0
        Write errors:           0
        Total transferred:      95109000 bytes
        HTML transferred:       95035000 bytes
        Requests per second:    3849.11 [#/sec] (mean)
        Time per request:       5.196 [ms] (mean)
        Time per request:       0.260 [ms] (mean, across all concurrent requests)
        Transfer rate:          715010.65 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    0   0.2      0       1
        Processing:     3    5   0.7      5       7
        Waiting:        1    4   0.7      4       7
        Total:          3    5   0.7      5       7

        Percentage of the requests served within a certain time (ms)
          50%      5
          66%      5
          75%      5
          80%      6
          90%      6
          95%      6
          98%      6
          99%      6
         100%      7 (longest request)

    Additional details

    Again, what I'm looking for is to find out whether I'm doing something wrong with Node.js or if it is really just that slow compared to PHP on Nginx with FPM. I certainly think Node has a real niche that it could fit well; however, these test results (which I really hope I made a mistake with, as I like the idea of Node) lead me to believe that it is a horrible choice for even a modest processing load when compared to PHP (let alone the JVM or various other fast solutions). As a final note, I also tried running an Apache Bench test against Node with $ ab -n 20 -c 20 http://speedtest.dev/ and consistently got a total test time of greater than 0.900 seconds.
