Search Results

Search found 25284 results on 1012 pages for 'test driven'.

  • Python performance: iteration and operations on nested lists

    - by J.J.
    Hey folks - I'm looking for some advice on Python performance. Some background on my problem:

    Given a mesh of nodes of size (x, y), each with a value (0...255) starting at 0, and a list of N input coordinates, each at a specified location within the range (0...x, 0...y): increment the value of the node at each input coordinate, and of the node's neighbors within range Z, up to a maximum of 255. Neighbors beyond the mesh edge are ignored (no wrapping).

    Base case: a mesh of 1024x1024 nodes, with 400 input coordinates and a range Z of 75 nodes. Processing should be O(x*y*Z*N). I expect x, y and Z to remain roughly at the base-case values, but the number of input coordinates N could increase up to 100,000. My goal is to minimize processing time.

    Current results: I have two implementations, f1 and f2. Running times on my 2.26 GHz Intel Core 2 Duo with Python 2.6.1 are 2.9 s for f1 and 1.8 s for f2. f1 is the initial naive implementation with three nested for loops; f2 replaces the inner for loop with a list comprehension. Code is included below for your perusal.

    Question: How can I further reduce the processing time? I'd prefer sub-1.0 s for the test parameters. Please keep the recommendations to native Python; I know I could move to a third-party package such as numpy, but I'm trying to avoid third-party packages. Also, I've generated random input coordinates and simplified the definition of the node value updates to keep our discussion simple; the specifics have to change slightly and are outside the scope of my question. Thanks much!

    f1, the initial naive implementation with three nested for loops (2.9 s):

        def f1(x, y, n, z):
            rows = []
            for i in range(x):
                rows.append([0 for i in xrange(y)])
            for i in range(n):
                inputX, inputY = (int(x*random.random()), int(y*random.random()))
                topleft = (inputX - z, inputY - z)
                for i in xrange(max(0, topleft[0]), min(topleft[0]+(z*2), x)):
                    for j in xrange(max(0, topleft[1]), min(topleft[1]+(z*2), y)):
                        if rows[i][j] < 255:  # '< 255' caps at 255; the original '<= 255' let values reach 256
                            rows[i][j] += 1

    f2, which replaces the inner for loop with a list comprehension (1.8 s):

        def f2(x, y, n, z):
            rows = []
            for i in range(x):
                rows.append([0 for i in xrange(y)])
            for i in range(n):
                inputX, inputY = (int(x*random.random()), int(y*random.random()))
                topleft = (inputX - z, inputY - z)
                for i in xrange(max(0, topleft[0]), min(topleft[0]+(z*2), x)):
                    l = max(0, topleft[1])
                    r = min(topleft[1]+(z*2), y)
                    # min() caps the value without dropping elements; the original
                    # filter 'if j < 255' shortened the slice and corrupted the row
                    rows[i][l:r] = [min(j+1, 255) for j in rows[i][l:r]]
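    Given the question's constraints (native Python, +1 increments capped at 255), one further direction worth noting is a 2D difference array: because the increments commute, each Z-window can be recorded as four +1/-1 corner markers, and all node values recovered with a single prefix-sum pass, dropping the work from O(N*Z^2) element updates to O(N + x*y). The f3 below is a sketch of that idea, not code from the thread, and whether it actually lands under 1.0 s on the asker's 2009-era hardware is untested:

        import random

        def f3(x, y, n, z):
            # corner markers on an (x+1) x (y+1) grid; each input coordinate
            # contributes four marks instead of (2z)**2 element updates
            diff = [[0] * (y + 1) for _ in range(x + 1)]
            for _ in range(n):
                ix, iy = int(x * random.random()), int(y * random.random())
                top, left = max(0, ix - z), max(0, iy - z)
                bottom, right = min(x, ix + z), min(y, iy + z)
                diff[top][left] += 1
                diff[top][right] -= 1
                diff[bottom][left] -= 1
                diff[bottom][right] += 1
            # one horizontal-plus-vertical running-sum pass turns the markers
            # into counts; clamping is applied only to the output rows so the
            # accumulators themselves stay exact
            rows = []
            run = [0] * y
            for i in range(x):
                acc = 0
                d = diff[i]
                for j in range(y):
                    acc += d[j]
                    run[j] += acc
                rows.append([v if v < 255 else 255 for v in run])
            return rows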

  • When -exactly- does the Rails3 application get initialized?

    - by bergyman
    I've been fighting left and right with Rails 3 and Bundler. There are a few gems out there that don't work properly if the Rails application hasn't been loaded yet; factory_girl and shoulda are both examples, even on the rails3 branch. Taking shoulda as an example, when trying to run rake test:units I get the following error:

        DEPRECATION WARNING: RAILS_ROOT is deprecated! Use Rails.root instead.
        (called from autoload_macros at c:/code/test_harness/vendor/windows_gems/gems/shoulda-2.10.3/lib/shoulda/autoload_macros.rb:40)
        c:/code/test_harness/vendor/windows_gems/gems/shoulda-2.10.3/lib/shoulda/autoload_macros.rb:44:in 'join': can't convert #<Class:0x232b7c0> into String (TypeError)
            from c:/code/test_harness/vendor/windows_gems/gems/shoulda-2.10.3/lib/shoulda/autoload_macros.rb:44:in 'block in autoload_macros'
            from c:/code/test_harness/vendor/windows_gems/gems/shoulda-2.10.3/lib/shoulda/autoload_macros.rb:44:in 'map'
            from c:/code/test_harness/vendor/windows_gems/gems/shoulda-2.10.3/lib/shoulda/autoload_macros.rb:44:in 'autoload_macros'
            from c:/code/test_harness/vendor/windows_gems/gems/shoulda-2.10.3/lib/shoulda/rails.rb:17:in '<top (required)>'

    Digging a bit deeper into lib/shoulda/rails, I see this:

        root = if defined?(Rails.root) && Rails.root
                 Rails.root
               else
                 RAILS_ROOT
               end

        # load in the 3rd party macros from vendorized plugins and gems
        Shoulda.autoload_macros root, File.join("vendor", "{plugins,gems}", "*")

    So what's happening here is that while Rails.root is defined, Rails.root == nil, so RAILS_ROOT is used - and RAILS_ROOT == nil as well, which is then passed on to Shoulda.autoload_macros. Obviously the Rails app has yet to be initialized.

    With Rails 3 using Bundler now, there's been some hubbub on the Bundler side about being able to specify the order in which the gems are required, but I'm not sure whether or not this would solve the problem at hand. Ultimately my question is this: when exactly does the environment.rb file (which actually initializes the application) get pulled in? Is there any harm in bumping up when the app is initialized, and having it happen before the Bundler.require line in config/application.rb?

    I've tried to hack Bundler to specify the order myself and have the rails gem pulled in first, but it doesn't appear to me that requiring the rails gem actually initializes the application. As this line (in config/application.rb) is being called before the app is initialized, any gem in the Bundler Gemfile that requires Rails to be initialized is going to tank:

        # Auto-require default libraries and those for the current Rails environment.
        Bundler.require :default, Rails.env

  • Default Database Collations got messed up

    - by dominicdinada
    I am using Ubuntu 9.10 with XAMPP (LAMPP: MySQL 5.1.45, phpMyAdmin 3.3.1, PHP 5.3.2). My problem is that I set up my testing environment to debug my scripts locally, and in doing so a problem arose: I used the Firefox addon SQL Inject Me to test for weaknesses, and it caused MySQL to change the default local collations:

        character sets dir    /opt/lampp/share/mysql/charsets/
        collation connection  latin1_general_ci (Global value) latin1_swedish_ci
        collation database    latin1_swedish_ci
        collation server      latin1_swedish_ci

    I have searched for quite some time for a solution to this, including looking for the db.opt file which stores this information, without success. Upon not finding a solution, I removed LAMPP with the "sudo rm -fR /opt" command and reinstalled, and the problem still persists. I have tried to change the collations manually and the database still displays latin1_swedish_ci as the default.

    Why is this a problem? Because the application I am testing and debugging locally is built on CodeIgniter with the Smarty framework, and since this combination detects the locales - or rather, what the database defaults are - I keep getting errors saying there is no language file for Swedish. Of course I could get a Swedish language file to work around this, but I do not want to make that workaround permanent, because (a) when importing database files, backups and so on, they will default to the Swedish locale, and (b) as time passes I might completely forget about this error and be back to square one.

    In my searches for a fix I found this code, which alters the tables in one schema to a desired collation:

        foreach ($tables as $value) {
            mysql_query("ALTER TABLE $value COLLATE latin1_general_ci");
        }
        echo "The collation of your database has been successfully changed!";

    That is handy for switching collations one schema at a time, but it is not a fix when the framework doesn't care that a given database is in one language - it tests the default of the entire server. I would greatly appreciate help from anyone with knowledge of a purge or fix for this. One final note: when I was testing, I only backed up the application's database and not the entire schema of the install. No matter whether I uninstall or reinstall, the database still seems to carry these problems.

  • Loading, listing, and using R Modules and Functions in PL/R

    - by Dave Jarvis
    I am having difficulty with:

    1. Listing the R packages and functions available to PostgreSQL.
    2. Installing a package (such as Kendall) for use with PL/R.
    3. Calling an R function within PostgreSQL.

    Listing Available R Packages

    Q.1. How do you find out what R modules have been loaded?

        SELECT * FROM r_typenames();

    That shows the types that are available, but what about checking whether Kendall(X, Y) is loaded? For example, the documentation shows:

        CREATE TABLE plr_modules (
          modseq int4,
          modsrc text
        );

    That seems to allow inserting records to dictate that Kendall is to be loaded, but the following code doesn't explain, syntactically, how to ensure that it gets loaded:

        INSERT INTO plr_modules
          VALUES (0, 'pg.test.module.load <- function(msg) {print(msg)}');

    Q.2. What would the above line look like if you were trying to load Kendall?
    Q.3. Is it applicable?

    Installing R Packages

    Using the "synaptic" package manager, the following packages have been installed: r-base, r-base-core, r-base-dev, r-base-html, r-base-latex, r-cran-acepack, r-cran-boot, r-cran-car, r-cran-chron, r-cran-cluster, r-cran-codetools, r-cran-design, r-cran-foreign, r-cran-hmisc, r-cran-kernsmooth, r-cran-lattice, r-cran-matrix, r-cran-mgcv, r-cran-nlme, r-cran-quadprog, r-cran-robustbase, r-cran-rpart, r-cran-survival, r-cran-vr, r-recommended.

    Q.4. How do I know if Kendall is in there?
    Q.5. If it isn't, how do I find out what package it is in?
    Q.6. If it isn't in a package suitable for installing with apt-get (aptitude, synaptic, dpkg, what have you), how do I go about installing it on Ubuntu?
    Q.7. Where are the installation steps documented?

    Calling R Functions

    I have the following code:

        EXECUTE 'SELECT '
          'regr_slope( amount, year_taken ),'
          'regr_intercept( amount, year_taken ),'
          'corr( amount, year_taken ),'
          'sum( measurements ) AS total_measurements '
          'FROM temp_regression'
        INTO STRICT slope, intercept, correlation, total_measurements;

    This code calls the PostgreSQL function corr to calculate Pearson's correlation over the data. Ideally, I'd like to do the following (switching corr for plr_kendall):

        EXECUTE 'SELECT '
          'regr_slope( amount, year_taken ),'
          'regr_intercept( amount, year_taken ),'
          'plr_kendall( amount, year_taken ),'
          'sum( measurements ) AS total_measurements '
          'FROM temp_regression'
        INTO STRICT slope, intercept, correlation, total_measurements;

    Q.8. Do I have to write plr_kendall myself?
    Q.9. Where can I find a simple example that walks through: loading an R module into PG, writing a PG wrapper for the desired R function, and calling the PG wrapper from a SELECT?

    For example, would the last two steps look like this?

        create or replace function plr_kendall( _float8, _float8 ) returns float as '
          agg_kendall(arg1, arg2)
        ' language 'plr';

        CREATE AGGREGATE agg_kendall (
          sfunc = plr_array_accum,
          basetype = float8, -- ???
          stype = _float8,   -- ???
          finalfunc = plr_kendall
        );

    And then the SELECT as above? Thank you!

  • UIWebView leak? Can someone confirm?

    - by Shaggy Frog
    I was leak-testing my current project and I'm stumped. I've been browsing like crazy and tried everything except chicken sacrifice. I created a tiny toy project from scratch and I can duplicate the leak in there, so either UIWebView has a leak or I'm doing something really silly.

    Essentially, it boils down to a loadRequest: call on a UIWebView object, given an NSURLRequest built from an NSURL which references a file URL - a file in the app bundle that lives inside a folder Xcode is including by reference. Phew.

    The leak is intermittent but still happens ~75% of the time (in about 20 tests it happened about 15 times). It only happens on the device - it does not leak in the simulator. I am testing targeting both iPhone OS 3.1.2 and 3.1.3, on an original (1st gen) iPod touch running iPhone OS 3.1.3.

    To reproduce, just create a project from scratch. Add a UIWebView to the RootViewController's .xib and hook it up via an IBOutlet. In the Finder, create a folder named "html" inside your project's folder. Inside that folder, create a file named "dummy.html" that has the word "Test" in it (it does not need to be valid HTML). Then add the html folder to your project in Xcode, choosing "Create Folder References for any added folders", and add the following to viewDidLoad:

        NSString* resourcePath = [[NSBundle mainBundle] resourcePath];
        NSString* filePath = [[resourcePath stringByAppendingPathComponent:@"html"]
                                 stringByAppendingPathComponent:@"dummy.html"];
        NSURL* url = [[NSURL alloc] initFileURLWithPath:filePath];
        NSURLRequest* request = [NSURLRequest requestWithURL:url]; // <-- this creates the leak!
        [browserView loadRequest:request];
        [url release];

    I've tried everything from setting a delegate for the UIWebView and implementing UIWebViewDelegate, to not setting a delegate in IB, to not setting a delegate in IB and explicitly setting the web view's delegate property to nil, to using alloc/init instead of autoreleased NSURLRequests (and/or NSURLs). I also tried the answer to a similar question (setting the shared URL cache to empty) and that did not help. Can anyone help?

  • Request/response with ActiveMQ always sends a double response

    - by Chris Valley
    Hi, I'm new to ActiveMQ. I tried to create a simple request/response setup like this:

        public Listener(string destination)
        {
            // set factory
            ConnectionFactory factory = new ConnectionFactory(URL);
            IConnection connection;
            try
            {
                connection = factory.CreateConnection();
                connection.Start();
                ISession session = connection.CreateSession();

                // create consumer for designated destination
                IMessageConsumer consumer = session.CreateConsumer(
                    new Apache.NMS.ActiveMQ.Commands.ActiveMQQueue(destination));
                consumer.Listener += new MessageListener(consumer_Listener);
                Console.ReadLine();
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.ToString());
                throw new Exception("Exception in Listening ", ex);
            }
        }

    The OnMessage handler:

        static void consumer_Listener(IMessage message)
        {
            IConnectionFactory factory = new ConnectionFactory("tcp://localhost:61616/");
            using (IConnection connection = factory.CreateConnection())
            {
                // Create the Session
                using (ISession session = connection.CreateSession())
                {
                    // Create the Producer for the topic/queue
                    // IMessageProducer prod = session.CreateProducer(
                    //     new Apache.NMS.ActiveMQ.Commands.ActiveMQTempQueue(message.NMSDestination));
                    IMessageProducer producer = session.CreateProducer(message.NMSDestination);

                    // Create the response
                    // IMessage response = session.CreateMessage();
                    ITextMessage response = producer.CreateTextMessage("Replied from VS2010 Test");
                    // response.NMSReplyTo = new Apache.NMS.ActiveMQ.Commands.ActiveMQQueue("testQ1");
                    response.NMSCorrelationID = message.NMSCorrelationID;

                    if (message.NMSReplyTo != null)
                    {
                        producer.Send(message.NMSReplyTo, response);
                        Console.WriteLine("Receive: " + ((ITextMessage)message).NMSCorrelationID);
                        Console.WriteLine("Received from : " + message.NMSDestination.ToString());
                        Console.WriteLine("----------------------------------------------------");
                    }
                }
            }
        }

    Every time I send a request to the listener, the response is sent repeatedly. The first response has the NMSReplyTo property set, while the others do not. My workaround to stop this is checking the NMSReplyTo property before sending, as in the if block above. In my understanding, this happens because there is a circular send/response in the listener to the same queue. Could you help me fix this? Many thanks, Chris

  • WCF web services and FaultContract - client receives SoapException instead of FaultException<TDetails>

    - by Alessandro Di Lello
    Hi all, I'm developing a WCF web service and consuming it within an MVC 2 application. My problem is that I'm using FaultContracts on my methods with a custom fault detail class, and throwing the FaultException manually - but when the client receives the exception, it gets a plain SoapException instead of the FaultException<T> I threw on the service side. Here is some code.

    Custom fault detail class:

        [DataContract]
        public class MyFaultDetails
        {
            [DataMember]
            public string Message { get; set; }
        }

    Operation on the service contract:

        [OperationContract]
        [FaultContract(typeof(MyFaultDetails))]
        void ThrowException();

    Implementation:

        public void ThrowException()
        {
            var details = new MyFaultDetails { Message = "Exception Test" };
            throw new FaultException<MyFaultDetails>(details,
                new FaultReason(details.Message), new FaultCode("MyFault"));
        }

    Client side:

        try
        {
            // Obv proxy init etc..
            service.ThrowException();
        }
        catch (FaultException<MyFaultDetails> ex)
        {
            // stuff
        }
        catch (Exception ex)
        {
            // stuff
        }

    What I expect is to catch the FaultException<MyFaultDetails>; instead that catch is skipped and the next catch is taken, with an exception of type SoapException. Am I missing something? I read a lot of threads about using FaultContracts with WCF, and what I did seems right. I had a look at the generated WSDL and XSD and they look fine. Here's a snippet for this method:

        <wsdl:operation name="ThrowException">
          <wsdl:input wsaw:Action="http://tempuri.org/IAnyJobService/ThrowException"
                      message="tns:IAnyJobService_ThrowException_InputMessage" />
          <wsdl:output wsaw:Action="http://tempuri.org/IAnyJobService/ThrowExceptionResponse"
                       message="tns:IAnyJobService_ThrowException_OutputMessage" />
          <wsdl:fault wsaw:Action="http://tempuri.org/IAnyJobService/ThrowExceptionAnyJobServiceFaultExceptionFault"
                      name="AnyJobServiceFaultExceptionFault"
                      message="tns:IAnyJobService_ThrowException_AnyJobServiceFaultExceptionFault_FaultMessage" />
        </wsdl:operation>

        <wsdl:operation name="ThrowException">
          <soap:operation soapAction="http://tempuri.org/IAnyJobService/ThrowException" style="document" />
          <wsdl:input>
            <soap:body use="literal" />
          </wsdl:input>
          <wsdl:output>
            <soap:body use="literal" />
          </wsdl:output>
          <wsdl:fault name="AnyJobServiceFaultExceptionFault">
            <soap:fault use="literal" name="AnyJobServiceFaultExceptionFault" namespace="" />
          </wsdl:fault>
        </wsdl:operation>

    Any help? Thanks in advance. Regards, Alessandro

  • jQueryMobile: how to work with slider events?

    - by balexandre
    I'm testing the slider events in jQuery Mobile and I must be missing something. The page code is:

        <div data-role="fieldcontain">
            <label for="slider">Input slider:</label>
            <input type="range" name="slider" id="slider" value="0" min="0" max="100" />
        </div>

    If I do:

        $("#slider").data("events");

    I get blur, focus, keyup, remove. What I want is to get the value once the user releases the slider handle, but hooking the keyup event:

        $("#slider").bind("keyup", function() { alert('here'); });

    does absolutely nothing :( I must say I wrongly assumed that jQuery Mobile used jQuery UI controls, as that was my first thought; now, working deep in the events, I can see this is only the case in terms of CSS design. What can I do?

    The jQuery Mobile slider source code can be found on Git if it helps anyone, and a test page can be found at JSBin. As I understand it, #slider is the textbox with the value, so I would need to hook into the slider handle, as the generated code for this slider is:

        <div data-role="fieldcontain" class="ui-field-contain ui-body ui-br">
            <label for="slider" class="ui-input-text ui-slider" id="slider-label">Input slider:</label>
            <input data-type="range" max="100" min="0" value="0" id="slider" name="slider"
                   class="ui-input-text ui-body-null ui-corner-all ui-shadow-inset ui-body-c ui-slider-input" />
            <div role="application" class="ui-slider ui-btn-down-c ui-btn-corner-all">
                <a class="ui-slider-handle ui-btn ui-btn-corner-all ui-shadow ui-btn-up-c" href="#"
                   data-theme="c" role="slider" aria-valuemin="0" aria-valuemax="100"
                   aria-valuenow="54" aria-valuetext="54" title="54"
                   aria-labelledby="slider-label" style="left: 54%;">
                    <span class="ui-btn-inner ui-btn-corner-all">
                        <span class="ui-btn-text"></span>
                    </span>
                </a>
            </div>
        </div>

    Checking the events on the handle anchor, I get only the click event:

        $("#slider").next().find("a").data("events");

  • Adding UIViewController.view to another view causes orientation problems

    - by Bob Vork
    Short version: I'm alloc/init/retaining a new UIViewController in one UIViewController's viewDidLoad method and adding the new view to self.view. This usually works, but it seems to mess up orientation-change handling in my iPad app.

    Longer version: I'm building a fairly complex iPad application involving a lot of views and view controllers. After running into difficulties adjusting to the device orientation, I made a simple Xcode project to figure out what the problem is.

    Firstly, I have read the Apple docs on this subject (a small document called "Why won't my UIViewController rotate with the device?"), and while I do believe it has something to do with one of the reasons listed there, I'm not really sure how to fix it.

    In my test project I have an appDelegate, a rootViewController, and a UISplitViewController with two custom view controllers. I use a button on the rootViewController to switch to the splitViewController, and from there I can use a button to switch back to the rootViewController. So far everything is great - all views adjust to the device orientation. However, in the right view controller of the splitViewController, I use the viewDidLoad method to initialize some other view controllers and add their views to its own view:

        self.newViewController = [[UIViewController new] autorelease];
        [newViewController.view setBackgroundColor:[UIColor yellowColor]];
        [self.view addSubview:newViewController.view];

    This is where things go wrong. Somehow, after adding this view, adjusting to device orientation is messy. On startup everything is fine; after I switch to the splitViewController everything is still fine; but as soon as I switch back to the rootViewController it's all over. I have tried (almost) everything regarding retaining and releasing the view controller, but nothing seems to fix it. As you can see from the code above, I have declared newViewController as a property, but the same happens if I don't.

    Shouldn't I be adding a view controller's view to my own view at all? That would really mess up my project, as I have a lot of view controllers doing all sorts of things. Any help on this would be greatly appreciated...

  • Doubt about adopting CI (Hudson) into an existing automated Build Process (phing, svn)

    - by maraspin
    Our current build process: We're a small team of developers (2 to 4 people, depending on the project) who currently use Phing to deploy code to a staging environment before going live. We keep our code in an SVN repo, where trunk holds current active development; at certain times we make branches that we test and then (if successful) tag and export to the staging environment. If everything goes well there too, we finally deploy them to the production servers. Actions are highly automated, but always triggered by human intervention.

    The doubt: We'd now like to introduce continuous integration (with Hudson) into the process; unfortunately we have a few doubts about activity syncing, since we're afraid CI could somewhat interfere with our build process and cause certain problems. Considering that an automated CI cycle executes its actions at a certain frequency, we in fact only see two possible cases for "integration", each with its own problems:

    Case A: each CI cycle produces a new branch with its own name; we use that name to manually (through Phing, as happens now) export the code from SVN to the staging environment. The problem I see here is that, unless specific countermeasures are taken, the number of branches we have can grow out of control (let's suppose we commit often, so that we have a fresh new build/branch every N minutes).

    Case B: each CI cycle creates a new branch named 'current', for instance, which is tagged with a unique name only when we manually decide to export it to staging; the current branch, in any case, is deleted as soon as the next CI cycle starts up. The problem we see here is that a new cycle could kick in while someone is tagging/exporting the 'current' branch to staging, thus creating an inconsistent build (but maybe here I'm just too pessimistic, since I confess I don't know whether SVN offers some built-in protection against this).

    With all this being said, I was wondering if anyone with similar experience could be so kind as to give us some hints on the subject, since none of the approaches depicted above looks completely satisfying to us. Is there something important we just completely left out of the overall picture? Thanks for your attention and, in advance, for your help!

  • Tridion Installation

    - by Kevin Brydon
    I am currently upgrading an installation of Tridion from 5.3 to 2011, starting almost from scratch (aside from migrating the database) on brand new virtual servers, and I just want to ask for some advice - a sanity check - on my current server setup. All servers are running Windows Server 2008, and the pages on our website are all classic ASP.

    Database: a SQL Server cluster. The 5.3 database has been migrated using the DatabaseManager. This is pretty standard and works well (in test, anyway).

    Content Manager: a single server to run the Content Manager and the Publisher. There are around 10 people using it at any one time, so it is not under a particularly heavy load.

    Content Data Store: a filesystem located somewhere on the network, with one directory for live and one for staging.

    Content Delivery: two servers (cd1 and cd2), each with the Content Delivery server roles installed. cd1 writes to a filesystem content data store for the live website; cd2 writes to the content data store for the staging website.

    Presentation: two public-facing web servers (web1 and web2) serving both the live and staging websites. The web servers read directly from the content data store, as it is a filesystem. Each of the web servers also has the Content Delivery Server installed so that I can use dynamic linking (and other features?).

    I've so far set up everything but the web servers. Any thoughts?

    Edit: Thanks to Ram S, who linked me to a decent walkthrough (upvoted). I suppose I should have posed some questions, as I didn't really ask one. I guess I'm a little confused about the Content Delivery aspect. I have Content Delivery split into two separate parts: cd1 and cd2 do the work of shifting information from the Content Manager to the staging/live web directories, while web1 and web2 do the work of serving the web pages to the outside world and interact with the content data store (filesystem). Is this a correct setup? I need some parts of Content Delivery on my web servers, right? Theoretically I could get rid of the cd1 and cd2 servers and use web1 and web2 to do the deployment, but I suspect this would put the web servers under unnecessary strain should there ever be a big publish. I've been reading the Content Delivery section of the 2011 Installation Manual, and I'm finding it quite hard to get my head around!

  • How to write a buffer-overflow exploit with GCC on Windows XP, x86?

    - by Mask
        #include <stdio.h>

        void function(int a, int b, int c) {
            char buffer1[5];
            char buffer2[10];
            int *ret;

            ret = buffer1 + 12;
            (*ret) += 8; // why is it 8??
        }

        void main() {
            int x;

            x = 0;
            function(1, 2, 3);
            x = 1;
            printf("%d\n", x);
        }

    The demo above is from http://insecure.org/stf/smashstack.html, but it's not working here - the program still prints 1, so the x = 1 assignment was not skipped:

        D:\test>gcc -Wall -Wextra hw.cpp && a.exe
        hw.cpp: In function `void function(int, int, int)':
        hw.cpp:6: warning: unused variable 'buffer2'
        hw.cpp: At global scope:
        hw.cpp:4: warning: unused parameter 'a'
        hw.cpp:4: warning: unused parameter 'b'
        hw.cpp:4: warning: unused parameter 'c'
        1

    And I don't understand why it's 8, though the author reasons: "A little math tells us the distance is 8 bytes."

    My gdb dump, as called:

        Dump of assembler code for function main:
        0x004012ee <main+0>:    push   %ebp
        0x004012ef <main+1>:    mov    %esp,%ebp
        0x004012f1 <main+3>:    sub    $0x18,%esp
        0x004012f4 <main+6>:    and    $0xfffffff0,%esp
        0x004012f7 <main+9>:    mov    $0x0,%eax
        0x004012fc <main+14>:   add    $0xf,%eax
        0x004012ff <main+17>:   add    $0xf,%eax
        0x00401302 <main+20>:   shr    $0x4,%eax
        0x00401305 <main+23>:   shl    $0x4,%eax
        0x00401308 <main+26>:   mov    %eax,0xfffffff8(%ebp)
        0x0040130b <main+29>:   mov    0xfffffff8(%ebp),%eax
        0x0040130e <main+32>:   call   0x401b00 <_alloca>
        0x00401313 <main+37>:   call   0x4017b0 <__main>
        0x00401318 <main+42>:   movl   $0x0,0xfffffffc(%ebp)
        0x0040131f <main+49>:   movl   $0x3,0x8(%esp)
        0x00401327 <main+57>:   movl   $0x2,0x4(%esp)
        0x0040132f <main+65>:   movl   $0x1,(%esp)
        0x00401336 <main+72>:   call   0x4012d0 <function>
        0x0040133b <main+77>:   movl   $0x1,0xfffffffc(%ebp)
        0x00401342 <main+84>:   mov    0xfffffffc(%ebp),%eax
        0x00401345 <main+87>:   mov    %eax,0x4(%esp)
        0x00401349 <main+91>:   movl   $0x403000,(%esp)
        0x00401350 <main+98>:   call   0x401b60 <printf>
        0x00401355 <main+103>:  leave
        0x00401356 <main+104>:  ret
        0x00401357 <main+105>:  nop
        0x00401358 <main+106>:  add    %al,(%eax)
        0x0040135a <main+108>:  add    %al,(%eax)
        0x0040135c <main+110>:  add    %al,(%eax)
        0x0040135e <main+112>:  add    %al,(%eax)
        End of assembler dump.

        Dump of assembler code for function function:
        0x004012d0 <function+0>:    push   %ebp
        0x004012d1 <function+1>:    mov    %esp,%ebp
        0x004012d3 <function+3>:    sub    $0x38,%esp
        0x004012d6 <function+6>:    lea    0xffffffe8(%ebp),%eax
        0x004012d9 <function+9>:    add    $0xc,%eax
        0x004012dc <function+12>:   mov    %eax,0xffffffd4(%ebp)
        0x004012df <function+15>:   mov    0xffffffd4(%ebp),%edx
        0x004012e2 <function+18>:   mov    0xffffffd4(%ebp),%eax
        0x004012e5 <function+21>:   movzbl (%eax),%eax
        0x004012e8 <function+24>:   add    $0x5,%al
        0x004012ea <function+26>:   mov    %al,(%edx)
        0x004012ec <function+28>:   leave
        0x004012ed <function+29>:   ret

    In my case the distance should be 0x0040133b - 0x00401336 = 5, right? But it seems it's not working. And why does function need 56 bytes for its local variables? (sub $0x38,%esp)

  • java.lang.Error: "Not enough storage is available to process this command" when generating images

    - by jhericks
    I am running a web application on BEA WebLogic 9.2. Until recently, we were using JDK 1.5.0_04, with JAI 1.1.2_01 and Image I/O 1.1. In some circumstances (we never figured out exactly why), when we were processing large images (but not that large - a few MB), the JVM would crash without any error message or stack trace or anything. This didn't happen much in production, but enough to be a nuisance, and eventually we were able to reproduce it. We decided to switch to JRockit90 1.5.0_04, and we were no longer able to reproduce the problem in our test environment, so we thought we had it licked.

    Now, however, after the application server has been up for a while, we start getting the error message "Not enough storage is available to process this command" during image operations. For example:

        java.lang.Error: Error starting thread: Not enough storage is available to process this command.
            at java.lang.Thread.start()V(Unknown Source)
            at sun.awt.image.ImageFetcher$1.run(ImageFetcher.java:279)
            at sun.awt.image.ImageFetcher.createFetchers(ImageFetcher.java:272)
            at sun.awt.image.ImageFetcher.add(ImageFetcher.java:55)
            at sun.awt.image.InputStreamImageSource.startProduction(InputStreamImageSource.java:149)
            at sun.awt.image.InputStreamImageSource.addConsumer(InputStreamImageSource.java:106)
            at sun.awt.image.InputStreamImageSource.startProduction(InputStreamImageSource.java:144)
            at sun.awt.image.ImageRepresentation.startProduction(ImageRepresentation.java:647)
            at sun.awt.image.ImageRepresentation.prepare(ImageRepresentation.java:684)
            at sun.awt.SunToolkit.prepareImage(SunToolkit.java:734)
            at java.awt.Component.prepareImage(Component.java:3073)
            at java.awt.ImageMediaEntry.startLoad(MediaTracker.java:906)
            at java.awt.MediaEntry.getStatus(MediaTracker.java:851)
            at java.awt.ImageMediaEntry.getStatus(MediaTracker.java:902)
            at java.awt.MediaTracker.statusAll(MediaTracker.java:454)
            at java.awt.MediaTracker.waitForAll(MediaTracker.java:405)
            at java.awt.MediaTracker.waitForAll(MediaTracker.java:375)
            at SfxNET.System.Drawing.ImageLoader.loadImage(Ljava.awt.Image;)Ljava.awt.image.BufferedImage;(Unknown Source)
            at SfxNET.System.Drawing.ImageLoader.loadImage(Ljava.net.URL;)Ljava.awt.image.BufferedImage;(Unknown Source)
            at Resources.Tools.Commands.W$zw(Ljava.lang.ClassLoader;)V(Unknown Source)
            at Resources.Tools.Commands.getContents()[[Ljava.lang.Object;(Unknown Source)
            at SfxNET.sfxUtils.SfxResourceBundle.handleGetObject(Ljava.lang.String;)Ljava.lang.Object;(Unknown Source)
            at java.util.ResourceBundle.getObject(ResourceBundle.java:320)
            at SoftwareFX.internal.ChartFX.wxvw.yxWW(Ljava.lang.String;Z)Ljava.lang.Object;(Unknown Source)
            at SoftwareFX.internal.ChartFX.wxvw.vxWW(Ljava.lang.String;)Ljava.lang.Object;(Unknown Source)
            at SoftwareFX.internal.ChartFX.CommandBar.YWww(LSoftwareFX.internal.ChartFX.wxvw;IIII)V(Unknown Source)
            at SoftwareFX.internal.ChartFX.Internet.Server.xxvw.YzzW(LSoftwareFX.internal.ChartFX.Internet.Server.ChartCore;Z)LSoftwareFX.internal.ChartFX.CommandBar;(Unknown Source)
            at SoftwareFX.internal.ChartFX.Internet.Server.xxvw.XzzW(LSoftwareFX.internal.ChartFX.Internet.Server.ChartCore;)V(Unknown Source)
            at SoftwareFX.internal.ChartFX.Internet.Server.ChartCore.OnDeserialization(Ljava.lang.Object;)V(Unknown Source)
            at SoftwareFX.internal.ChartFX.Internet.Server.ChartCore.Zvvz(LSoftwareFX.internal.ChartFX.Base.wzzy;)V(Unknown Source)

    Has anyone seen something like this before? Any clue what might be happening?

  • Sending double quote character to CreateProcess?

    - by karikari
    I want to send the double quote character to my CreateProcess function - specifically, I want the child process to receive all of these characters: "%h". What is the correct way?

        CreateProcess(L"C:\\identify -format ",
                      L"\"%h\" trustedsnapshot.png",
                      0, 0, TRUE, NORMAL_PRIORITY_CLASS | CREATE_NO_WINDOW,
                      0, 0, &sInfo, &pInfo);

    Here is the full code:

        int ExecuteExternalFile()
        {
            SECURITY_ATTRIBUTES secattr;
            ZeroMemory(&secattr, sizeof(secattr));
            secattr.nLength = sizeof(secattr);
            secattr.bInheritHandle = TRUE;

            HANDLE rPipe, wPipe;
            // Create pipes to write and read data
            CreatePipe(&rPipe, &wPipe, &secattr, 0);

            STARTUPINFO sInfo;
            ZeroMemory(&sInfo, sizeof(sInfo));
            PROCESS_INFORMATION pInfo;
            ZeroMemory(&pInfo, sizeof(pInfo));
            sInfo.cb = sizeof(sInfo);
            sInfo.dwFlags = STARTF_USESTDHANDLES;
            sInfo.hStdInput = NULL;
            sInfo.hStdOutput = wPipe;
            sInfo.hStdError = wPipe;

            CreateProcess(L"C:\\identify",
                          L" -format \"%h\" trustedsnapshot.png",
                          0, 0, TRUE, NORMAL_PRIORITY_CLASS | CREATE_NO_WINDOW,
                          0, 0, &sInfo, &pInfo);
            CloseHandle(wPipe);

            char buf[100];
            DWORD reDword;
            CString m_csOutput, csTemp;
            BOOL res;
            do
            {
                res = ::ReadFile(rPipe, buf, 100, &reDword, 0);
                csTemp = buf;
                m_csOutput += csTemp.Left(reDword);
            } while (res);
            // return m_csOutput;

            float fvar;
            // fvar = atof((const char *)(LPCTSTR)(m_csOutput)); // orig
            // fvar = atof((LPCTSTR)m_csOutput);
            fvar = _tstof(m_csOutput);

            const size_t len = 256;
            wchar_t buffer[len] = {};
            _snwprintf(buffer, len - 1, L"%f", fvar); // %f, not %d: fvar is a float
            MessageBox(NULL, buffer, L"test print createprocess value", MB_OK);

            return fvar;
        }

    I need this function to return the integer value from the CreateProcess output.
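    As a cross-check sketched in Python (an aside, not from the thread): subprocess.list2cmdline builds command lines under the same quoting convention the Microsoft C runtime uses to split a CreateProcess command line, so it is a quick way to see what a given argument list must look like as one string - and, in particular, that with no shell involved %h usually needs no surrounding quotes at all:

        import subprocess

        # identify should receive the single argument %h; CreateProcess bypasses
        # the shell, so no protective quotes are needed:
        print(subprocess.list2cmdline(["-format", "%h", "trustedsnapshot.png"]))
        # -format %h trustedsnapshot.png

        # if the child really must see literal quote characters, they have to be
        # backslash-escaped in the command line itself:
        print(subprocess.list2cmdline(["-format", '"%h"', "trustedsnapshot.png"]))
        # -format \"%h\" trustedsnapshot.png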

  • Set default form textfield value (WebBrowser control/DOM JavaScript)

    - by Khou
    Hi, I would like my application to load a web page and set the form text field values to predefined values. Requirements:

    - The application is a Windows Forms app; it uses the WebBrowser control to load the web page.
    - Text field values are defined within the application.
    - When a text field on the web page matches one of the application's predefined elements, the predefined fixed value is set and cannot be changed by the end user.

    Example: if my application defines the element "FirstName" as equal to the value "John", the text field for element "FirstName" will always equal "John", and this value cannot be changed by the end user.

    Below is HTML/JavaScript code that performs this functionality; now, how do I implement this in a Windows Forms app, without having to modify the loaded web page's source code (if possible)?

    HTML:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
        <html>
        <head>
            <title>page title</title>
            <script type="text/javascript" src="demo1.js"></script>
        </head>
        <body onload="def(document.someform, 'name', 'my default name value');">
            <h2 style="color: #8e9182">test form title</h2>
            <form name="someform" id="someform_frm" action="#">
                <table cellspacing="1">
                    <tr><td><label for="name">NameX: </label></td>
                        <td><input type="text" size="30" maxlength="155" name="name"
                                   onchange="def(document.someform, 'name', 'my default name value');"></td></tr>
                    <tr><td><label for="name2">NameY: </label></td>
                        <td><input type="text" size="30" maxlength="155" name="name2"></td></tr>
                    <tr><td colspan="2"><input type="button" name="submit" value="Submit"
                                               onclick="showFormData(this.form);"></td></tr>
                </table>
            </form>
        </body>
        </html>

    JavaScript:

        function def(oForm, element_name, def_txt) {
            oForm.elements[element_name].value = def_txt;
        }

  • Is MVVM pointless?

    - by joebeazelman
    Is orthodox MVVM implementation pointless? I am creating a new application and I considered Windows Forms and WPF. I chose WPF because it's future-proof and offers lots of flexibility: there is less code, and it's easier to make significant changes to your UI using XAML. Since the choice of WPF was obvious, I figured I might as well go all the way and use MVVM as my application architecture, since it offers blendability, separation of concerns, and unit testability. Theoretically, it seems beautiful - the holy grail of UI programming.

    This brief adventure, however, has turned into a real headache. As expected in practice, I'm finding that I've traded one problem for another. I tend to be an obsessive programmer, in that I want to do things the right way so that I can get the right results and possibly become a better programmer. The MVVM pattern just flunked my test on productivity and has turned into a big yucky hack!

    The clear case in point is adding support for a modal dialog box. The correct way is to put up a dialog box and tie it to a view model. Getting this to work is difficult. In order to benefit from the MVVM pattern, you have to distribute code in several places throughout the layers of your application. You also have to use esoteric programming constructs like templates and lambda expressions - stuff that makes you stare at the screen scratching your head. This makes maintenance and debugging a nightmare waiting to happen, as I recently discovered: I had an About box working fine until I got an exception the second time I invoked it, saying that the dialog box couldn't be shown again once it was closed. I had to add an event handler for the close functionality to the dialog window, another one in the IDialogView implementation of it, and finally another in the IDialogViewModel. I thought MVVM would save us from such extravagant hackery!

    There are several folks out there with competing solutions to this problem, and they are all hacks that don't provide a clean, easily reusable, elegant solution. Most of the MVVM toolkits gloss over dialogs, and when they do address them, they are just alert boxes that don't require custom interfaces or view models.

    I'm planning on giving up on the MVVM pattern - at least its orthodox implementation. What do you think? Has it been worth the trouble for you, if you had any? Am I just an incompetent programmer, or is MVVM not what it's hyped up to be?

  • TVirtualStringTree - resetting non-visual nodes and memory consumption

    - by Remy Lebeau - TeamB
    I have an app that loads records from a binary log file and displays them in a virtual TListView. There are potentially millions of records in a file, and the display can be filtered by the user, so I do not load all of the records into memory at one time, and the ListView item indexes do not have a 1-to-1 relation to the file record offsets (list item 1 may be file record 100, for instance). I use the ListView's OnDataHint event to load records for just the items the ListView is actually interested in. As the user scrolls around, the range specified by OnDataHint changes, allowing me to free records that are not in the new range and allocate new records as needed. This works fine, speed is tolerable, and the memory footprint is very low.

    I am currently evaluating TVirtualStringTree as a replacement for the TListView, mainly because I want to add the ability to expand/collapse records that span multiple lines (I can fudge it with the TListView by incrementing/decrementing the item count dynamically, but this is not as straightforward as using a real tree). For the most part, I have been able to port the TListView logic and have everything work as I need.

    I notice that TVirtualStringTree's virtual paradigm is vastly different, though. It does not have the same kind of OnDataHint functionality that TListView does (I can use the OnScroll event to fake it, which allows my memory buffer logic to continue working), and I can use the OnInitializeNode event to associate nodes with allocated records. However, once a tree node is initialized, it remains initialized for the lifetime of the tree. That is not good for me: as the user scrolls around and I remove records from memory, I need to reset those non-visible nodes without removing them from the tree completely or losing their expand/collapse states. When the user scrolls them back into view, I can re-allocate the records and re-initialize the nodes. Basically, I want to make TVirtualStringTree act as much like TListView as possible, as far as its virtualization is concerned.

    I have seen that TVirtualStringTree has a ResetNode() method, but I encounter various errors whenever I try to use it - I must be using it wrong. I also thought of just storing a data pointer inside each node to my record buffers, allocating and freeing memory and updating those pointers accordingly, but the end effect does not work so well either.

    Worse, my largest test log file has ~5 million records in it. If I initialize the TVirtualStringTree with that many nodes at one time (when the log display is unfiltered), the tree's internal overhead for its nodes takes up a whopping 260 MB of memory (without any records being allocated yet), whereas with the TListView, loading the same log file and all the memory logic behind it, I can get away with using just a few MB. Any ideas?

  • The Skyline Problem.

    - by zeroDivisible
    I just came across this little problem on UVA's Online Judge and thought it may be a good candidate for a little code golf.

    The problem: You are to design a program to assist an architect in drawing the skyline of a city, given the locations of the buildings in the city. To make the problem tractable, all buildings are rectangular in shape and they share a common bottom (the city they are built in is very flat). The city is also viewed as two-dimensional. A building is specified by an ordered triple (Li, Hi, Ri), where Li and Ri are the left and right coordinates of building i and Hi is the height of the building. In the problem's diagram, buildings are shown on the left with the triples (1,11,5), (2,6,7), (3,13,9), (12,7,16), (14,3,25), (19,18,22), (23,13,29), (24,4,28), and the skyline, shown on the right, is represented by the sequence: 1, 11, 3, 13, 9, 0, 12, 7, 16, 3, 19, 18, 22, 3, 23, 13, 29, 0.

    The output should consist of the vector that describes the skyline, as shown in the example above. In the skyline vector (v1, v2, v3, ... vn), the vi such that i is an even number represent a horizontal line (height), and the vi such that i is an odd number represent a vertical line (x-coordinate). The skyline vector should represent the "path" taken, for example, by a bug starting at the minimum x-coordinate and traveling horizontally and vertically over all the lines that define the skyline. Thus the last entry in the skyline vector will be a 0. The coordinates must be separated by a blank space.

    If I do not count the declaration of the provided (test) buildings, and count all spaces and tab characters, my solution in Python is 223 characters long. Here is the condensed version:

        B = [[1,11,5],[2,6,7],[3,13,9],[12,7,16],[14,3,25],[19,18,22],[23,13,29],[24,4,28]]

        # Solution.
        R = range
        v = [0 for e in R(max([y[2] for y in B]) + 1)]
        for b in B:
            for x in R(b[0], b[2]):
                if b[1] > v[x]:
                    v[x] = b[1]
        p = 1
        k = 0
        for x in R(len(v)):
            V = v[x]
            if p and V == 0:
                continue
            elif V != k:
                p = 0
                print "%s %s" % (str(x), str(V)),
                k = V

    I think I didn't make any mistakes, but if so - feel free to criticize me.

    EDIT: I don't have much reputation, so I will pay only 100 for a bounty - I am curious whether anyone could try to solve this in less than, let's say, 80 characters. The solution posted by cobbal is 101 characters long and currently it is the best one.

    ANOTHER EDIT: I thought 80 characters was a sick limit for this kind of problem. cobbal, with his 46-character solution, totally amazed me - though I must admit that I spent some time reading his explanation before I partially understood what he had written.
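    For readers less interested in the golf than in the algorithm, here is a non-golfed reference of the standard sweep: sort start/end events and keep the active heights in a heap with lazy deletion. This is my own illustration (the function name and event encoding are mine, not from the thread), assuming every building has Li < Ri:

        import heapq
        from itertools import groupby

        def skyline(buildings):
            # (x, -h, r) marks a building start, (r, 0, 0) marks an end;
            # sorting puts starts before ends at the same x
            events = sorted([(l, -h, r) for l, h, r in buildings] +
                            [(r, 0, 0) for _, _, r in buildings])
            result = []                     # flat vector: x, h, x, h, ..., ending in 0
            heap = [(0, float('inf'))]      # (-height, end_x); ground level never expires
            for x, evs in groupby(events, key=lambda e: e[0]):
                for _, neg_h, r in evs:
                    if neg_h:
                        heapq.heappush(heap, (neg_h, r))
                # lazily drop buildings that have ended at or before x
                while heap[0][1] <= x:
                    heapq.heappop(heap)
                h = -heap[0][0]
                if not result or result[-1] != h:
                    result += [x, h]
            return result

        print(skyline([(1,11,5), (2,6,7), (3,13,9), (12,7,16), (14,3,25),
                       (19,18,22), (23,13,29), (24,4,28)]))
        # [1, 11, 3, 13, 9, 0, 12, 7, 16, 3, 19, 18, 22, 3, 23, 13, 29, 0]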

  • NameClaimType in ClaimsIdentity from SAML

    - by object88
    I am attempting to understand the world of WIF in the context of a WCF Data Services / REST / OData server. I have a hacked-up version of SelfSTS that runs inside a unit test project. When the unit tests start, it kicks off a WCF service, which generates my SAML token. This is the SAML token being generated:

        <saml:Assertion MajorVersion="1" MinorVersion="1" ... >
          <saml:Conditions>...</saml:Conditions>
          <saml:AttributeStatement>
            <saml:Subject>
              <saml:NameIdentifier Format="EMAIL">4bd406bf-0cf0-4dc4-8e49-57336a479ad2</saml:NameIdentifier>
              <saml:SubjectConfirmation>...</saml:SubjectConfirmation>
            </saml:Subject>
            <saml:Attribute AttributeName="emailaddress"
                            AttributeNamespace="http://schemas.xmlsoap.org/ws/2005/05/identity/claims">
              <saml:AttributeValue>[email protected]</saml:AttributeValue>
            </saml:Attribute>
            <saml:Attribute AttributeName="name"
                            AttributeNamespace="http://schemas.xmlsoap.org/ws/2005/05/identity/claims">
              <saml:AttributeValue>bob</saml:AttributeValue>
            </saml:Attribute>
          </saml:AttributeStatement>
          <ds:Signature>...</ds:Signature>
        </saml:Assertion>

    (I know the Format of my NameIdentifier isn't really EMAIL; this is something I haven't gotten around to cleaning up yet.)

    Inside my actual server, I put some code borrowed from Pablo Cibraro (Cibrax). This code seems to run fine, although I confess that I don't understand what's happening. I note that later in my code, when I need to check my identity, Thread.CurrentPrincipal.Identity is an instance of Microsoft.IdentityModel.Claims.ClaimsIdentity, which has a claim for each of the attributes, plus a nameidentifier claim with the value from the NameIdentifier element in saml:Subject. It also has a property NameClaimType, which points to "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name".

    Wouldn't it make more sense if NameClaimType mapped to nameidentifier? How do I make that happen? Or am I expecting the wrong thing of the name claim? Thanks!

  • How to use the SQL wildcard % with QuerySet extra(select=...)?

    - by tylias
    I'm trying to add weights to the search terms I'm using to filter a queryset, and using the '%' wildcard is causing me some problems. I'm using the extra() modifier to add a weight parameter to the queryset, which I will use to inform a sort ordering (see http://docs.djangoproject.com/en/1.1/ref/models/querysets/#extra-select-none-where-none-params-none-tables-none-order-by-none-select-params-none).

    Here's the gist of the code:

        def viewname(request):
            ...
            exact_matchstrings = []  # a list; the original "" couldn't take .append()
            exact_matchstrings.append("(accountprofile.first_name LIKE '" + term + "')")
            exact_matchstrings.append("(accountprofile.first_name LIKE '" + term + '\%' + "')")
            extraquerystring = " + ".join(exact_matchstrings)
            return_queryset = return_queryset.extra(
                select={'match_weight': extraquerystring},
            )

    The effect I'm going for is that if the search term matches exactly, the weight associated with the record is 2, but if the term merely starts with the search term and isn't an exact match, the weight is 1. (For example, if term = 'Jon', an entry with first_name = 'Jon' gets a weight of 2, but an entry with first_name = 'Jonathan' gets a weight of 1.)

    I can test the statement in SQL and it seems to work well enough. If I make this SQL query from the mysql shell, no problem:

        select (first_name like "Carl") + (first_name like "Car%") from accountprofile;

    But trying to run it via the extra() modifier in my view code and evaluating the resulting queryset gives me the following error:

        Traceback (most recent call last):
          File "<console>", line 1, in <module>
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 68, in __repr__
            data = list(self[:REPR_OUTPUT_SIZE + 1])
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 83, in __len__
            self._result_cache.extend(list(self._iter))
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 238, in iterator
            for row in self.query.results_iter():
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 287, in results_iter
            for rows in self.execute_sql(MULTI):
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 2369, in execute_sql
            cursor.execute(sql, params)
          File "/usr/local/lib/python2.6/dist-packages/django/db/backends/util.py", line 22, in execute
            sql = self.db.ops.last_executed_query(self.cursor, sql, params)
          File "/usr/local/lib/python2.6/dist-packages/django/db/backends/__init__.py", line 217, in last_executed_query
            return smart_unicode(sql) % u_params
        ValueError: unsupported format character ''' (0x27) at index 309

    I've tried both escaping and not escaping the % wildcard, but that doesn't solve the problem - it doesn't seem to affect it at all, really. Any ideas?
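    One avenue that may help, sketched here as an assumption rather than a verified fix: extra() also accepts select_params, so the term can be bound as a query parameter instead of being concatenated into the SQL. Then the only % characters in the SQL string are the %s placeholders themselves, which is exactly what the sql % u_params step in last_executed_query expects:

        # hypothetical rewrite of the weight expression using select_params;
        # 'term' and 'return_queryset' are the names from the question
        weight_sql = ("(accountprofile.first_name LIKE %s) + "
                      "(accountprofile.first_name LIKE %s)")
        return_queryset = return_queryset.extra(
            select={'match_weight': weight_sql},
            select_params=(term, term + '%'),  # the wildcard lives in the value, not the SQL
        )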

  • ListView item won't extend width to fill_parent

    - by slybloty
    I have a custom ViewGroup that inflates a ListView from an XML layout. The list item layout is inflated from another XML file. All of the views are set to fill_parent. The ListView fills its parent, but the ListView items don't.

    I've tried putting the ListView in a LinearLayout and assigning weight to it. I tried RelativeLayout as well. Also, I've built the ListView programmatically, without using the XML layout, and even changed the LayoutParams before adding the view to the ViewGroup. I've also taken these posts into consideration: "Width of clickable area in ListView w/ onListItemClick", "In Android, how can I set a ListView item's height and width?", and "Android Listview width prob". Any ideas as to why the items don't extend to fill the width? And how to extend them?

    MyViewGroup class:

        public class MyViewGroup extends ViewGroup {

            public MyViewGroup(Context context, AttributeSet attrs) {
                super(context, attrs);
                generateMyViewGroup();
            }

            private void generateMyViewGroup() {
                ListView main = (ListView) View.inflate(getContext(), R.layout.layout_main, null);
                main.setAdapter(new MyAdapter(getContext()));
                this.addView(main);
            }

            @Override
            protected void onLayout(boolean changed, int l, int t, int r, int b) {
                this.getChildAt(0).layout(l, t, r, b);
            }
        }

    ListView XML layout:

        <?xml version="1.0" encoding="utf-8"?>
        <ListView xmlns:android="http://schemas.android.com/apk/res/android"
            android:id="@android:id/list"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:layout_marginRight="3dp"
            android:background="#77000000"
            android:cacheColorHint="#00000000"
            android:divider="#00000000"
            android:dividerHeight="0dp"
            android:drawSelectorOnTop="false"
            android:scrollbars="vertical" >
        </ListView>

    ListView item layout:

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:id="@+id/layout_main_category"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:background="@color/mainBackground"
            android:gravity="fill_horizontal|center_vertical"
            android:orientation="vertical" >

            <TextView
                android:id="@+id/main_category"
                android:layout_width="fill_parent"
                android:layout_height="wrap_content"
                android:layout_marginLeft="8dp"
                android:layout_marginRight="8dp"
                android:paddingBottom="7dp"
                android:paddingLeft="20dp"
                android:paddingRight="5dp"
                android:paddingTop="20dp"
                android:text="test"
                android:textColor="@color/mainCategory"
                android:textSize="15sp"
                android:textStyle="bold" />

        </LinearLayout>

  • VBScript Out of String space

    - by MalsiaPro
    I have the following code to capture information for the files on a specified drive. I ran the script against a 600 GB hard drive on one of our servers, and after a while I get this error:

        Out of String space: 'Join'
        Line 34, Char 2

    script.vbs:

        Option Explicit

        Dim objFS, objFld
        Dim objArgs
        Dim strFolder, strDestFile, blnRecursiveSearch
        ''Dim strLines
        Dim strCsv
        ''Dim i
        '' i = 0

        ' 'Get the commandline parameters
        ' Set objArgs = WScript.Arguments
        ' strFolder = objArgs(0)
        ' strDestFile = objArgs(1)
        ' blnRecursiveSearch = objArgs(2)

        '########################################
        'SPECIFY THE DRIVE YOU WANT TO SCAN BELOW
        '########################################
        strFolder = "C:\"
        strDestFile = "C:\InformationOutput.csv"
        blnRecursiveSearch = True

        'Create the FileSystemObject
        Set objFS = CreateObject("Scripting.FileSystemObject")

        'Get the directory you are working in
        Set objFld = objFS.GetFolder(strFolder)

        'Open the csv file
        Set strCsv = objFS.CreateTextFile(strDestFile, True)

        '' 'Write the csv file
        '' Set strCsv = objFS.CreateTextFile(strDestFile, True)
        strCsv.WriteLine "File Path,File Size,Date Created,Date Last Modified,Date Last Accessed"
        '' strCsv.Write Join(strLines, vbCrLf)

        'Now get the file details
        GetFileDetails objFld, blnRecursiveSearch

        '' 'Close and cleanup objects
        '' strCsv.Close
        '' 'Write the csv file
        '' Set strCsv = objFS.CreateTextFile(strDestFile, True)
        '' For i = 0 to UBound(strLines)
        ''     strCsv.WriteLine strLines(i)
        '' Next

        'Close and cleanup objects
        strCsv.Close
        Set strCsv = Nothing
        Set objFld = Nothing
        Set strFolder = Nothing
        Set objArgs = Nothing

        '---------------------------SCAN SPECIFIED LOCATION-------------------------------
        Private Sub GetFileDetails(fold, blnRecursive)
            Dim fld, fil
            Dim strLine(4)
            On Error Resume Next
            If InStr(fold.Path, "System Volume Information") < 1 Then
                If blnRecursive Then
                    'Work through all the folders and subfolders
                    For Each fld In fold.SubFolders
                        GetFileDetails fld, True
                        If Err.Number <> 0 Then
                            LogError Err.Description & vbCrLf & "Folder - " & fold.Path
                            Err.Clear
                        End If
                    Next
                End If
                'Now work on the files
                For Each fil In fold.Files
                    strLine(0) = fil.Path
                    strLine(1) = fil.Size
                    strLine(2) = fil.DateCreated
                    strLine(3) = fil.DateLastModified
                    strLine(4) = fil.DateLastAccessed
                    strCsv.WriteLine Join(strLine, ",")
                    If Err.Number <> 0 Then
                        LogError Err.Description & vbCrLf & "Folder - " & fold.Path & vbCrLf & "File - " & fil.Name
                        Err.Clear
                    End If
                Next
            End If
        End Sub

        Private Sub LogError(strError)
            Dim strErr
            'Write the error log
            Set strErr = objFS.CreateTextFile("C:\test\err.log", False)
            strErr.WriteLine strError
            strErr.Close
            Set strErr = Nothing
        End Sub

    RunMe.cmd:

        wscript.exe "C:\temp\script\script.vbs"

    How can I avoid getting this error? The server drives are quite large, and I would imagine that the CSV file would be at least 40 MB.

    Edit by Guffa: I commented out some lines in the code, using double ticks ('') so you can see where.
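    As a point of comparison (my own sketch, in Python rather than VBScript, and untested against a 600 GB volume): if the failing Join comes from accumulating all rows before writing - the commented-out strLines/Join path - then the usual way around it is a fully streaming scan, where each row goes straight to disk as it is produced and no whole-file string ever exists:

        import csv, os, time

        def dump_file_info(root, dest):
            # stream rows straight to disk; nothing accumulates in memory
            with open(dest, "w", newline="") as out:
                w = csv.writer(out)
                w.writerow(["File Path", "File Size", "Date Created",
                            "Date Last Modified", "Date Last Accessed"])
                for dirpath, dirnames, filenames in os.walk(root):
                    if "System Volume Information" in dirpath:
                        dirnames[:] = []  # skip the protected tree, as the VBScript does
                        continue
                    for name in filenames:
                        path = os.path.join(dirpath, name)
                        try:
                            st = os.stat(path)
                        except OSError:
                            continue  # unreadable file; the VBScript logs and moves on
                        w.writerow([path, st.st_size, time.ctime(st.st_ctime),
                                    time.ctime(st.st_mtime), time.ctime(st.st_atime)])

        dump_file_info("C:\\", "C:\\InformationOutput.csv")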

  • Uploading a file using post() method of QNetworkAccessManager

    - by user304361
    I'm having some trouble with a Qt application - specifically with the QNetworkAccessManager class. I'm attempting to perform a simple HTTP upload of a binary file using the post() method of QNetworkAccessManager. The documentation states that I can give a pointer to a QIODevice to post(), and that the class will transmit the data found in the QIODevice. This suggests to me that I ought to be able to give post() a pointer to a QFile. For example:

        QFile compressedFile("temp");
        compressedFile.open(QIODevice::ReadOnly);
        netManager.post(QNetworkRequest(QUrl("http://mywebsite.com/upload")), &compressedFile);

    What seems to happen on the Windows system where I'm developing this is that my Qt application pushes the data from the QFile, but then doesn't complete the request; it seems to sit there waiting for more data to show up from the file. The post request isn't "closed" until I manually kill the application, at which point the whole file shows up at my server end.

    From some debugging and research, I think this is happening because the read() operation of QFile doesn't return -1 when you reach the end of the file. I think that QNetworkAccessManager is trying to read from the QIODevice until it gets a -1 from read(), at which point it assumes there is no more data and closes the request. If it keeps getting a return code of zero from read(), QNetworkAccessManager assumes that there might be more data coming, and so it keeps waiting for that hypothetical data. I've confirmed with some test code that the read() operation of QFile just returns zero after you've read to the end of the file. This seems to be incompatible with the way the post() method of QNetworkAccessManager expects a QIODevice to behave.

    My questions are:

    1. Is this some sort of limitation with the way QFile works under Windows?
    2. Is there some other way I should be using either QFile or QNetworkAccessManager to push a file via post()?
    3. Is this not going to work at all, so that I'll have to find some other way to upload my file?

    Any suggestions or hints would be appreciated. Thanks, Don

  • CSS - margin and float property

    - by David Casillas
    1. We have a div with static positioning. Inside it we have a paragraph with a margin. The height of the div will be the paragraph without its margin.
    2. We have a div with float:left. Inside it we have a paragraph with a margin. The height of the div will be the paragraph plus its margin.

    What is the explanation for this? Here are the HTML and CSS; a link to the test site is http://prueba.davidcasillas.es/

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="es">
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
            <title>Untitled Document</title>
            <link href="index.css" rel="stylesheet" type="text/css" />
        </head>
        <body>
            <div id="nivel1">
                <div id="nivel21">
                    <p>Este es el primer parrafo</p>
                </div>
                <div id="nivel22">
                    <p>Este es el primer parrafo</p>
                </div>
            </div>
        </body>
        </html>

    And the CSS:

        body {
            margin: 0;
            padding: 0;
        }
        #nivel1 {
            border: solid;
            border-color: #333;
            margin: 2em;
            background-color: #0F3;
        }
        #nivel21 {
            margin: 2em;
            float: left;
            background-color: #C00;
        }
        #nivel22 {
            margin: 2em;
            background-color: #FC0;
        }

  • For a .NET WinForms DataGridView I would like a ComboBox column to have a different set of values for each row

    - by Seth Spearman
    Hello, I have a DataGridView that I am binding to a POCO. The data binding is working fine. However, I have added a ComboBox column that I want to be different for each row. Specifically, I have a grid of purchased items, some of which have sizes (like Adult XL, Adult L) and others of which are not sized (like Car Magnet).

    So essentially, what I want to change is the data source for a ComboBox column in the data grid. Can that be done? What event can I hook into that would allow me to change properties of certain columns for each row? An acceptable alternative is to change a property when the user clicks or tabs into the row - what event is that? Seth

    EDIT: I need more help with this question. With Tridus's help I am SO close, but I need a bit more information. First, per the question: is the CellFormatting event really the best/only event for changing the data source for a ComboBox column? I ask because I am doing something rather resource/data intensive, not merely formatting the cell. Second, the CellFormatting event is being raised just by having the mouse hover over the cell. I tried to set the FormattingApplied property inside my if block and then check for it in the if test, but that returns a weird error message. My ideal situation is that I would change the data source for the ComboBox once for each row and then be done with it. Finally, in order to set the data source of the ComboBox column, I have to be able to cast the cell inside my if block to a DataGridViewComboBoxColumn type so that I can fill it with rows or set the data source or something.

    Here is the code I have right now:

        Private Sub ProductsDataGrid_CellFormatting(ByVal sender As System.Object, _
                ByVal e As System.Windows.Forms.DataGridViewCellFormattingEventArgs) _
                Handles ProductsDataGrid.CellFormatting
            If e.ColumnIndex = ProductsDataGrid.Columns("SizeDGColumn").Index Then ' AndAlso Not e.FormattingApplied Then
                Dim product As LeagueOrderProductInfo = _
                    DirectCast(ProductsDataGrid.Rows(e.RowIndex).DataBoundItem, LeagueOrderProductInfo)
                Dim sizes As LeagueOrderProductSizeList = product.ProductSizes
                sizes.RemoveSizeFromList(_parentOrderDetail.SizeID)
                'WHAT DO I DO HERE TO FILL THE COMBOBOX COLUMN WITH THE sizes COLLECTION?
            End If
        End Sub

    Please help - I am completely stuck, and this task item should have taken an hour; I am 4+ hours in now. BTW, I am also open to resolving this by taking a completely different direction with it (as long as I can be done quickly). Seth
