Search Results

Search found 11318 results on 453 pages for 'josh close'.


  • Blob in Java/Hibernate/sql-server 2005

    - by Ramy
    Hi, I'm trying to insert an HTML blob into our SQL Server 2005 database. I've been using the data type [text] for the field the blob will eventually live in, and I've also put a @Lob annotation on the field in the domain model. The problem comes in when the HTML blob I'm attempting to store is larger than 65536 characters. It seems that is the character limit for a text data type when using the @Lob annotation. Ideally I'd like to keep the whole blob intact rather than chunk it up into multiple rows in the database. I appreciate any help or insight that might be provided. Thanks! _Ramy

    Allow me to clarify the annotation:

        @Lob
        @Column(length = Integer.MAX_VALUE) //per an answer on stackoverflow
        private String htmlBlob;

    Database side (SQL Server 2005):

        CREATE TABLE dbo.IndustrySectorTearSheetBlob(
            ...
            htmlBlob text NULL
            ...
        )

    Still seeing truncation after 65536 characters...

    EDIT: I've printed out the contents of all possible strings (only 10 right now) that would be inserted into the database. Each string seems to contain all characters, judging by the fact that the closing html tag is present at the end of the string....

    Read the article

  • How can I do batch image processing with ImageJ in Java or clojure?

    - by Robert McIntyre
    I want to use ImageJ to do some processing of several thousand images. Is there a way to take any general ImageJ plugin and apply it to hundreds of images automatically?

    For example, say I want to take my thousand images and apply a polar transformation to each --- a polar transformation plugin for ImageJ can be found here: http://rsbweb.nih.gov/ij/plugins/polar-transformer.html Great! Let's use it.

    From [http://albert.rierol.net/imagej_programming_tutorials.html#How%20to%20automate%20an%20ImageJ%20dialog] I find that I can apply a plugin using the following:

        (defn x-polar [imageP]
          (let [thread (Thread/currentThread)
                options ""]
            (.setName thread "Run$_polar-transform")
            (Macro/setOptions thread options)
            (IJ/runPlugIn imageP "Polar_Transformer" "")))

    This is good because it suppresses the dialog which would otherwise pop up for every image. But running this always brings up a window containing the transformed image, when what I want is to simply return the transformed image. The stupidest way to do what I want is to just close the window that comes up and return the image which it was displaying. Does what I want, but is an ugly hack:

        (defn x-polar [imageP]
          (let [thread (Thread/currentThread)
                options ""]
            (.setName thread "Run$_polar-transform")
            (Macro/setOptions thread options)
            (IJ/runPlugIn imageP "Polar_Transformer" "")
            (let [return-image (IJ/getImage)]
              (.hide return-image)
              return-image)))

    I'm obviously missing something about how to use ImageJ plugins in a programming context. Does anyone know the right way to do this? Thanks, --Robert McIntyre

    Read the article

  • Problems running Ruby on Rails apps on shared hosted server

    - by Evgeny
    I have problems installing any Ruby on Rails app on my shared hosted server. Mongrel shows HTML as plain text for all pages. The problem occurs for any app, even if I create a test empty app and add a scaffolded view without changing anything. It appears that Mongrel crashes when trying to put cookies into the response header. The HTTP header looks incomplete; the Content-Type and other parameters are missing:

        curl 127.0.0.1:12002/users -I
        HTTP/1.1 200 OK
        Connection: close
        Date: Wed, 26 May 2010 09:46:50 GMT
        Content-Length: 0

    Here is the output from mongrel.log:

        Error calling Dispatcher.dispatch #<NoMethodError: You have a nil object when you didn't expect it!
        You might have expected an instance of ActiveRecord::Base.
        The error occurred while evaluating nil.[]>
        /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/cgi.rb:108:in `send_cookies'
        /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/cgi.rb:136:in `out'
        /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/http_response.rb:65:in `start'

    Versions: ruby 1.8.7, rails 2.3.8, mongrel 1.1.5.

    Here is the link to the test page. Has anyone seen anything like this?

    Read the article

  • Stream Reuse in C#

    - by MikeD
    I've been playing around with what I thought was a simple idea. I want to be able to read in a file from somewhere (website, filesystem, FTP), perform some operations on it (compress, encrypt, etc.) and then save it somewhere (somewhere may be a filesystem, FTP, or whatever). It's a basic pipeline design.

    What I would like to do is to read in the file and put it onto a MemoryStream, then perform the operations on the data in the MemoryStream, and then save that data in the MemoryStream somewhere. I was thinking I could use the same Stream to do this, but I run into a couple of problems:

    1. Every time I use a StreamWriter or StreamReader I need to close it, and that closes the stream so that I cannot use it anymore. It seems like there must be some way to get around that.
    2. Some of these files may be big, and so I may run out of memory if I try to read the whole thing in at once.

    I was hoping to be able to spin up each of the steps as separate threads and have the compression step begin as soon as there is data on the stream, and then as soon as the compression has some compressed data available on the stream I could start saving it (for example). Is anything like this easily possible with the C# Streams? Anyone have thoughts as to how to accomplish this best? Thanks, Mike
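
    A minimal C# sketch of the pipeline idea, assuming the built-in GZipStream as the "compress" stage and FileStreams at both ends (the paths and method name are placeholders, not part of the question): chaining streams and copying through a small buffer avoids holding the whole file in a MemoryStream, and each stage sees bytes as soon as they are read.

        using System;
        using System.IO;
        using System.IO.Compression;

        static void CompressFile(string sourcePath, string destinationPath)
        {
            byte[] buffer = new byte[64 * 1024];            // stream in small chunks
            using (Stream source = File.OpenRead(sourcePath))
            using (Stream destination = File.Create(destinationPath))
            using (Stream gzip = new GZipStream(destination, CompressionMode.Compress))
            {
                int read;
                while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                {
                    gzip.Write(buffer, 0, read);            // compressed bytes flow straight to the file
                }
            }                                               // disposing gzip flushes and closes the chain
        }

    The same shape should work when the source is a response stream from WebClient or FtpWebRequest, and wrapping a further stream (for example a CryptoStream) around the GZipStream would add an encryption stage without touching the copy loop.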

    Read the article

  • Maximum page fetch with maximum bandwidth

    - by Ehsan
    Hi. I want to create an application like a spider. I've implemented page fetching as in the following code in a multi-threaded application, but there are two problems:

    1. I want to use my maximum bandwidth to send/receive requests. How should I configure my request to do so (like Download Accelerator and similar applications)? I heard a normal application will use 66% of the available bandwidth.
    2. I don't know what exactly HttpWebRequest.KeepAlive does, but as its name implies I think I can create a connection to a website and, without closing the connection, send another request to that web site using the existing connection. Does it boost performance, or am I wrong?

        public PageFetchResult Fetch()
        {
            PageFetchResult fetchResult = new PageFetchResult();
            try
            {
                HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(URLAddress);
                HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
                Uri requestedURI = new Uri(URLAddress);
                Uri responseURI = resp.ResponseUri;
                string resultHTML = "";
                byte[] reqHTML = ResponseAsBytes(resp);
                if (!string.IsNullOrEmpty(FetchingEncoding))
                    resultHTML = Encoding.GetEncoding(FetchingEncoding).GetString(reqHTML);
                else if (!string.IsNullOrEmpty(resp.CharacterSet))
                    resultHTML = Encoding.GetEncoding(resp.CharacterSet).GetString(reqHTML);
                req.Abort();
                resp.Close();
                fetchResult.IsOK = true;
                fetchResult.ResultHTML = resultHTML;
            }
            catch (Exception ex)
            {
                fetchResult.IsOK = false;
                fetchResult.ErrorMessage = ex.Message;
            }
            return fetchResult;
        }
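
    On the KeepAlive question, a hedged sketch of the two settings usually involved (the values shown are arbitrary examples, not tuned recommendations): HttpWebRequest.KeepAlive reuses the TCP connection for later requests to the same host, while ServicePointManager.DefaultConnectionLimit (2 per host by default in classic .NET) caps how many parallel connections a crawler gets to one host.

        // Hedged sketch: connection reuse and parallelism knobs for HttpWebRequest.
        System.Net.ServicePointManager.DefaultConnectionLimit = 10;   // allow more parallel connections per host

        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(URLAddress);
        req.KeepAlive = true;   // reuse the underlying TCP connection for the next
                                // request to this host instead of reconnecting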

    Read the article

  • 'echo', or drop out of PHP, write HTML, then start PHP code again?

    - by thecoshman
    For the most part, when I want to display some HTML code to be actually rendered I would use a 'close PHP' tag, write the HTML, then open the PHP again, e.g.

        <?php
        // some php code
        ?>
        <p>HTML that I want displayed</p>
        <?php
        // more php code
        ?>

    But I have seen lots of people who would just use echo instead, so they would have done the above something like

        <?php
        // some php code
        echo("<p>HTML that I want displayed</p>");
        // more php code
        ?>

    Is there any performance hit for dropping out and back in like that? I would assume not, as the PHP engine would have to process the entire file either way. What about when you use the echo function in the way that does not look like a function, e.g.

        echo "<p>HTML that I want displayed</p>"

    I would hope that this is purely a matter of taste, but I would like to know if I was missing out on something. I personally find the first way preferable (dropping out of PHP then back in) as it helps draw a clear distinction between PHP and HTML and also lets you make use of code highlighting and hinting for your HTML, which is always handy.

    Read the article

  • Is it possible that a single-threaded program is executed simultaneously on more than one CPU core?

    - by Wolfgang Plaschg
    When I run a single-threaded program that I have written on my quad-core Intel, I can see in the Windows Task Manager that actually all four cores of my CPU are more or less active. One core is more active than the other three, but there is also activity on those. There's no other program (besides the OS kernel, of course) running that would be plausible for that activity. And when I close my program, all activity on all cores drops down to nearly zero. All that is left is a little "noise" on the cores, so I'm pretty sure all the visible activity comes directly or indirectly (like invoking system routines) from my program.

    Is it possible that the OS or the cores themselves try to balance some code or execution on all four cores, even if it's not a multithreaded program? Do you have any links that document this technique?

    Some info on the program: it's a console app written in Qt; the Task Manager states that only one thread is running. Maybe Qt uses threads, but I don't use signals or slots, nor any GUI. Link to Task Manager screenshot: http://img97.imageshack.us/img97/6403/taskmanager.png

    This question is language agnostic and not tied to Qt/C++; I just want to know if Windows or Intel balance single-threaded code across all cores. If they do, how does this technique work? All I can think of is that kernel routines like reading from disk etc. are scheduled on all cores, but this won't improve performance significantly since the code still has to run synchronously with the kernel API calls.

    EDIT: Do you know any tools to do a better analysis of single- and/or multi-threaded programs than the poor Windows Task Manager?
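
    One crude way to check whether the activity is just the scheduler migrating a single thread between cores (rather than real parallel execution) is to pin the process to one core and compare the run time; a small C# sketch of that idea, where the busy-work loop is only a stand-in for the real program:

        using System;
        using System.Diagnostics;

        class AffinityTest
        {
            static void Main()
            {
                // Restrict this process to CPU 0; if wall-clock time stays the same,
                // the multi-core activity was only the OS moving one thread around.
                Process.GetCurrentProcess().ProcessorAffinity = (IntPtr)1;

                Stopwatch sw = Stopwatch.StartNew();
                double sum = 0;
                for (int i = 0; i < 100000000; i++)
                    sum += Math.Sqrt(i);   // CPU-bound stand-in workload
                Console.WriteLine("{0} in {1} ms", sum, sw.ElapsedMilliseconds);
            }
        }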

    Read the article

  • How to get an "AuthSub" token in C# for Google Apps Contacts?

    - by Pari
    Hi, I found this code on the net:

        HttpWebRequest update = (HttpWebRequest)WebRequest.Create(**editUrl**); // editUrl is a string containing the contact's edit URL
        update.Method = "PUT";
        update.ContentType = "application/atom+xml";
        update.Headers.Add(HttpRequestHeader.Authorization, "GoogleLogin auth=" + **AuthToken**);
        update.Headers.Add(HttpRequestHeader.IfMatch, **etag**); // etag is a string containing the <entry> element's gd:etag attribute value
        update.Headers.Add("GData-Version", "3.0");
        Stream streamRequest = update.GetRequestStream();
        StreamWriter streamWriter = new StreamWriter(streamRequest, Encoding.UTF8);
        streamWriter.Write(entry); // entry is the string representation of the atom entry to update
        streamWriter.Close();
        WebResponse response = update.GetResponse();

    But here I am not getting what to put in "editUrl", "AuthToken" and "etag".

    a) I studied about "AuthToken" from this link, but I am not getting how to create it. Can anyone help me out here?
    b) I am also not getting "editUrl" and "etag".

    I am trying to use this method to migrate my contacts to Google Apps. Thanks

    Read the article

  • Download-from-PyPI-and-install script

    - by zubin71
    Hello, I have written a script which fetches a distribution, given the URL. After downloading the distribution, it compares the md5 hashes to verify that the file has been downloaded properly. This is how I do it:

        def download(package_name, url):
            import urllib2
            downloader = urllib2.urlopen(url)
            package = downloader.read()
            package_file_path = os.path.join('/tmp', package_name)
            package_file = open(package_file_path, "w")
            package_file.write(package)
            package_file.close()

    I wonder if there is any better (more pythonic) way to do what I have done using the above code snippet. Also, once the package is downloaded, this is what is done:

        def install_package(package_name):
            if package_name.endswith('.tar'):
                import tarfile
                tarfile.open('/tmp/' + package_name)
                tarfile.extract('/tmp')
            import shlex
            import subprocess
            installation_cmd = 'python %ssetup.py install' %('/tmp/'+package_name)
            subprocess.Popen(shlex.split(installation_cmd)

    As there are a number of imports for the install_package method, I wonder if there is a better way to do this. I'd love to have some constructive criticism and suggestions for improvement. Also, I have only implemented the install_package method for .tar files; would there be a better manner by which I could install .tar.gz and .zip files too, without having to write separate methods for each of these?

    Read the article

  • Problem using UnhandledException in Windows Mobile app

    - by MusiGenesis
    I have a Windows Mobile program that accesses an attached device through a third-party DLL. Each call to the device can take an unknown length of time, so each call includes a timeout property. If the call takes longer than the specified timeout to return, the DLL instead throws an exception which my app catches with no problem.

    The problem that I have is with closing the application. If my application has made a call to the DLL and is waiting for the timeout to occur, and I then close the application before the timeout occurs, my application locks up and requires the PDA to be rebooted. I can ensure that the application waits for the timeout before closing, under normal conditions. However, I am trying to use AppDomain.CurrentDomain.UnhandledException to catch any unhandled exceptions in the program and use the event to wait for this pending timeout to occur so the program can be closed finally.

    My problem is that this event doesn't seem to stick around long enough. If I put a MessageBox.Show("unhandled exception"); line in the event, and then throw a new unhandled exception from my application's main form, I see the message box for a split second but then it disappears without my having clicked the OK button. The documentation I've found on this event suggests that by the time it's called the application is fully committed to closing and the closing can't be stopped, but I didn't think it meant that the event method itself won't finish. What gives (I guess that's the question)?

    Update: In full Windows (Vista) this works as expected, but only if I use the Application.ThreadException event, which doesn't exist in .NET CF 2.0.
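
    For reference, a hedged C# sketch of what the question is attempting (blocking in the handler until the pending device call has completed or timed out); the form and field names are invented for illustration, not taken from the real project:

        using System;
        using System.Threading;
        using System.Windows.Forms;

        static class Program
        {
            // Assumed to be set by the device-call wrapper once its timeout fires.
            static readonly ManualResetEvent deviceCallFinished = new ManualResetEvent(true);

            static void Main()
            {
                AppDomain.CurrentDomain.UnhandledException += OnUnhandledException;
                Application.Run(new MainForm());   // MainForm is a placeholder
            }

            static void OnUnhandledException(object sender, UnhandledExceptionEventArgs e)
            {
                // Try to hold shutdown until the outstanding call into the third-party
                // DLL has finished, so the DLL is not torn down mid-call.
                deviceCallFinished.WaitOne();
            }
        }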

    Read the article

  • NSString's stringByAppendingPathComponent: removes a '/' in http://

    - by Jasarien
    I've been modifying some code to work between Mac OS X and iPhone OS. I came across some code that was using NSURL's URLByAppendingPathComponent: (added in 10.6), which as some may know isn't available in the iPhone SDK. My solution to make this code work between OSes is to use:

        NSString *urlString = [myURL absoluteString];
        urlString = [urlString stringByAppendingPathComponent:@"helloworld"];
        myURL = [NSURL urlWithString:urlString];

    The problem with this is that NSString's stringByAppendingPathComponent: seems to remove one of the /'s from the http:// part of the URL. Is this intended behaviour or a bug?

    Edit: OK, so I was a bit too quick in asking the question above. I re-read the documentation and it does say:

        Note that this method only works with file paths (not, for example, string representations of URLs)

    However, it doesn't give any pointers in the right direction for what to do if you need to append a path component to a URL on the iPhone... I could always just do it manually, adding a / if necessary and the extra string, but I was looking to keep it as close to the original Mac OS X code as possible...

    Read the article

  • cx_Oracle makes subprocess give OSError

    - by Shrikant Sharat
    I am trying to use the cx_Oracle module with Python 2.6.6 on Ubuntu Maverick, with Oracle 11gR2 Enterprise Edition. I am able to connect to my Oracle DB just fine, but once I do that, the subprocess module does not work anymore. Here is an IPython session that reproduces the problem...

        In [1]: import subprocess as sp, cx_Oracle as dbh

        In [2]: sp.call(['whoami'])
        sharat
        Out[2]: 0

        In [3]: con = dbh.connect('system', 'password')

        In [4]: con.close()

        In [5]: sp.call(['whomai'])
        ---------------------------------------------------------------------------
        OSError                                   Traceback (most recent call last)
        /home/sharat/desk/calypso-launcher/<ipython console> in <module>()
        /usr/lib/python2.6/subprocess.pyc in call(*popenargs, **kwargs)
            468     retcode = call(["ls", "-l"])
            469     """
        --> 470     return Popen(*popenargs, **kwargs).wait()
            471
            472
        /usr/lib/python2.6/subprocess.pyc in __init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags)
            621                             p2cread, p2cwrite,
            622                             c2pread, c2pwrite,
        --> 623                             errread, errwrite)
            624
            625         if mswindows:
        /usr/lib/python2.6/subprocess.pyc in _execute_child(self, args, executable, preexec_fn, close_fds, cwd, env, universal_newlines, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite)
           1134
           1135             if data != "":
        -> 1136                 _eintr_retry_call(os.waitpid, self.pid, 0)
           1137                 child_exception = pickle.loads(data)
           1138                 for fd in (p2cwrite, c2pread, errread):
        /usr/lib/python2.6/subprocess.pyc in _eintr_retry_call(func, *args)
            453     while True:
            454         try:
        --> 455             return func(*args)
            456         except OSError, e:
            457             if e.errno == errno.EINTR:
        OSError: [Errno 10] No child processes

    So, the call to sp.call works fine before connecting to Oracle, but breaks after that, even though I have closed the connection to the database. Looking around, I found http://bugs.python.org/issue1731717 as somewhat related to this issue, but I am not dealing with threads here. I don't know if cx_Oracle is. Moreover, the above issue mentions that adding a time.sleep(1) fixes it, but it didn't help me. Any help appreciated. Thanks.

    Read the article

  • Dynamically create a text file from a C# program

    - by techstu
    Can I dynamically create a text file from a C# program, using data from a previously created XML file and text file? I have written half the code but can't go any further. Please help.

        using System;
        using System.IO;
        using System.Xml;

        namespace Task3
        {
            class TextFileReader
            {
                static void Main(string[] args)
                {
                    String strn = " ", strsn = String.Empty;
                    XmlTextReader reader = new XmlTextReader("my.xml");
                    while (reader.Read())
                    {
                        switch (reader.NodeType)
                        {
                            case XmlNodeType.Element: // The node is an element.
                                if (reader.HasAttributes)
                                {
                                    strn = reader.GetAttribute(0);
                                    strsn = reader.GetAttribute(1);
                                    int counter = 0;
                                    string line;
                                    // Read the file and display it line by line.
                                    System.IO.StreamReader file = new System.IO.StreamReader("read_file.txt");
                                    string ch, ch1;
                                    while ((line = file.ReadLine()) != null)
                                    {
                                        if (line.Substring(0, 1).Equals("%"))
                                        {
                                            int a = line.IndexOf('%');
                                            int b = line.LastIndexOf('%');
                                            ch = line.Substring(a + 1, b - 1);
                                            ch1 = line.Substring(a, b + 1);
                                            if (ch == "name")
                                            {
                                                string test = line.Replace(ch1, strn);
                                                Console.WriteLine(test);
                                            }
                                            else if (ch == "sirname")
                                            {
                                                string test = line.Replace(ch1, strsn);
                                                Console.WriteLine(test);
                                            }
                                        }
                                        else
                                        {
                                            Console.WriteLine(line);
                                        }
                                        counter++;
                                    }
                                    file.Close();
                                }
                                break;
                        }
                    }
                    // Suspend the screen.
                    Console.ReadLine();
                }
            }
        }

    The XML file from which I am reading is:

        <?xml version="1.0" encoding="utf-8" ?>
        <Workflow>
          <User UserName="pqr" Sirname="sbd" />
          <User UserName="abc" Sirname="xyz" />
        </Workflow>

    and the text file is:

        hi this is me %sirname% %name%

    but this is not what I want... please help
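
    Since the goal is to produce a text file rather than console output, here is a hedged C# sketch of that last step: the same placeholder substitution written out with StreamWriter. The output file name and the use of string.Replace for both placeholders are assumptions for illustration, not the original assignment's exact requirements.

        // Read the template, substitute the placeholders, write a new text file.
        string[] template = System.IO.File.ReadAllLines("read_file.txt");
        using (System.IO.StreamWriter writer = new System.IO.StreamWriter("output.txt"))
        {
            foreach (string templateLine in template)
            {
                string result = templateLine.Replace("%name%", strn)
                                            .Replace("%sirname%", strsn);
                writer.WriteLine(result);   // each substituted line goes to the file
            }
        }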

    Read the article

  • ASP.net download page

    - by Russel
    Hi. I have a Reports.aspx ASP.NET page that allows users to download Excel report files by clicking on several hyperlinks. When a report hyperlink is clicked, I open a new window using the JavaScript window.open method and navigate off to the download.aspx page. The code-behind for the download page creates an Excel file on the fly using OpenXML (in memory) and sends it back to the browser. Here is some code from the download.aspx page:

        byte[] outputFileBytes = CreateExcelReport().ToArray();
        Response.Clear();
        Response.BufferOutput = true;
        Response.ContentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
        Response.AddHeader("Content-Disposition", string.Format("attachment; filename={0}", "tempReport.xlsx"));
        Response.BinaryWrite(outputFileBytes);
        Response.Flush();
        Response.Close();
        Response.End();

    My problem: some of these reports take some time to generate. I would like to display a loading.gif file on my Reports.aspx page while the download.aspx page is requested. Once the page request is completed, the loading.gif file should be made invisible. Is there a way to achieve this, perhaps some kind of event? I have MooTools at my disposal. Thanks

    PS. I know that generating reports like this is not ideal, but that's a different story altogether...

    Read the article

  • Why does Clojure hang after having performed my calculations?

    - by Thomas
    Hi all, I'm experimenting with filtering through elements in parallel. For each element, I need to perform a distance calculation to see if it is close enough to a target point. Never mind that data structures already exist for doing this; I'm just doing initial experiments for now.

    Anyway, I wanted to run some very basic experiments where I generate random vectors and filter them. Here's my implementation that does all of this:

        (defn pfilter [pred coll]
          (map second
            (filter first
              (pmap (fn [item] [(pred item) item]) coll))))

        (defn random-n-vector [n]
          (take n (repeatedly rand)))

        (defn distance [u v]
          (Math/sqrt (reduce + (map #(Math/pow (- %1 %2) 2) u v))))

        (defn -main [& args]
          (let [[n-str vectors-str threshold-str] args
                n (Integer/parseInt n-str)
                vectors (Integer/parseInt vectors-str)
                threshold (Double/parseDouble threshold-str)
                random-vector (partial random-n-vector n)
                u (random-vector)]
            (time (println n vectors
                    (count (pfilter
                             (fn [v] (< (distance u v) threshold))
                             (take vectors (repeatedly random-vector))))))))

    The code executes and returns what I expect, that is, the parameter n (length of vectors), vectors (the number of vectors) and the number of vectors that are closer than a threshold to the target vector. What I don't understand is why the program hangs for an additional minute before terminating. Here is the output of a run which demonstrates the problem:

        $ time lein run 10 100000 1.0
        [null] 10 100000 12283
        [null] "Elapsed time: 3300.856 msecs"

        real    1m6.336s
        user    0m7.204s
        sys     0m1.495s

    Any comments on how to filter in parallel in general are also more than welcome, as I haven't yet confirmed that pfilter actually works.

    Read the article

  • Word 2007 COM - Can't directly access a page when word is set to invisible

    - by Robbie
    I'm using Word 2007 via COM from PHP 5.2 / Apache 2.0 on a Windows machine. The goal is to programmatically render JPEG thumbnails from each page in a Word document. The following code works correctly if you set $word->Visible to 1:

        try {
            $word = new COM('word.application');
            $word->Visible = 0;
            $word->Documents->Open("C:\\test.doc");
            echo "Number of pages: " . $word->ActiveDocument->ActiveWindow->ActivePane->Pages->Count() . "</br>";
            $i = 1;
            foreach ($word->ActiveDocument->ActiveWindow->ActivePane->Pages as $page) {
                echo "Page number: $i </br>";
                $i++;
            }
            //get the EMF image of the page
            $data = $word->ActiveDocument->ActiveWindow->ActivePane->Pages->Item(3)->EnhMetaFileBits;
            $word->ActiveDocument->Close();
            $word->Quit();
        } catch (Exception $e) {
            echo "Exception: " . $e->getMessage();
        }

    The test document I'm using contains 35 pages. The code will display the correct number of pages, but the foreach loop only loops over 1 page. I can only directly access pages 1 and 2 in the Pages->Item() collection. If I try to access another page I get the exception: "The requested member of the collection does not exist."

    If I set the $word->Visible property to 1 I do get all the pages in the foreach loop and I can access any page directly. Everything works as expected if Word is set to be visible. Even stranger is the fact that if I set Word to be invisible and I don't have the foreach loop, I can only access page 1 instead of pages 1 and 2 as with the foreach loop.

    Any pointers on how I can access all the pages in the document while keeping Word invisible?

    Read the article

  • How to create a console application that does not terminate?

    - by John
    Hello. In C++, a console application can have a message handler in its WinMain procedure, like this:

        int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
        {
            HWND hwnd;
            MSG msg;
        #ifdef _DEBUG
            CreateConsole("Title");
        #endif
            hwnd = CreateDialog(hInstance, MAKEINTRESOURCE(IDD_DIALOG1), NULL, DlgProc);
            PeekMessage(&msg, NULL, 0, 0, PM_NOREMOVE);
            while(msg.message != WM_QUIT)
            {
                if(PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
                {
                    if(IsDialogMessage(hwnd, &msg))
                        continue;
                    TranslateMessage(&msg);
                    DispatchMessage(&msg);
                }
            }
            return 0;
        }

    This makes the process not close until the console window has received the WM_QUIT message. I don't know how to do something similar in Delphi. My need is not for exactly a message handler, but a lightweight "trick" to make the console application work like a GUI application using threads, so that, for example, two Indy TCP servers could be handled without the console application terminating the process.

    My question: how could this be accomplished?

    Read the article

  • Is it possible to store a control (Panel) as an object, serialize it and store it as a file?

    - by ikky
    The topic says it all. I'm using the Compact Framework and C#.

    I'm tiling (order/sequence is important) some images that I download from a URL into a Panel (each image is a PictureBox). This can be a huge process and may take some time, therefore I only want the user to download the images and tile them once. So the next time the user uses the tile application, the Panel that was created the first time is already stored in a file and is loaded from that file.

    So what I want is a method to store a Panel as a file. Is this possible, or do you think I should do it another way? I've tried something like this:

        BinaryWriter panelStorage = new BinaryWriter(new FileStream("imagePanel.panel", FileMode.OpenOrCreate, FileAccess.Write, FileShare.None));
        Byte[] bImageObject = new Byte[20000];
        bImageObject = (byte[])(object)this.imagePanel;
        panelStorage.Write(bMapObject);
        panelStorage.Close();

    But the casting was not very legal :P "InvalidCastException"

    Can anyone help me with this problem? Thank you in advance!
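
    WinForms controls are not serializable, so casting the Panel to byte[] cannot work. A hedged C# sketch of one common workaround is to persist the downloaded image bytes (in tile order) and rebuild the PictureBoxes on the next run; the method and variable names below are invented for illustration:

        using System.Collections.Generic;
        using System.IO;

        // Save each tile's raw image bytes, preserving order, in a simple length-prefixed format.
        static void SaveTiles(string path, List<byte[]> tileImageBytes)
        {
            using (BinaryWriter writer = new BinaryWriter(File.Open(path, FileMode.Create)))
            {
                writer.Write(tileImageBytes.Count);      // number of tiles
                foreach (byte[] tile in tileImageBytes)
                {
                    writer.Write(tile.Length);           // length prefix
                    writer.Write(tile);                  // raw image bytes
                }
            }
        }

    On the next start the file can be read back with a BinaryReader and each byte[] turned into an Image via a MemoryStream, which should be much faster than downloading and tiling again.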

    Read the article

  • python NameError: name '<anything>' is not defined (but it is!)

    - by BenjaminGolder
    Note: Solved. It turned out that I was importing a previous version of the same module.

    It is easy to find similar topics on StackOverflow, where someone ran into a NameError. But most of the questions deal with specific modules and the solution is often to update the module. In my case, I am trying to import a function from a module that I wrote myself. The module is named InfraPy, and it is definitely on sys.path. One particular function (called listToText) in InfraPy returns a NameError, but only when I try to import it into another script. Inside InfraPy, under if __name__=='__main__':, the listToText function works just fine. From InfraPy I can import other functions with no problems. Including from InfraPy import * in my script does not return any errors until I try to use the listToText function.

    How can this occur? How can importing one particular function return a NameError, while importing all the other functions in the same module works fine?

    Using Python 2.6 on Mac OS X 10.6; I also encountered the same error running the script on Windows 7, using IronPython 2.6 for .NET 4.0. Thanks. If there are other details you think would be helpful in solving this, I'd be happy to provide them.

    As requested, here is the function definition inside of InfraPy:

        def listToText(inputList, folder=None, outputName='list.txt'):
            '''
            Creates a text file from a list (with each list item on a separate line).
            May be placed in any given folder, but will otherwise be created in the
            working directory of the python interpreter.
            '''
            fname = outputName
            if folder != None:
                fname = folder+'/'+fname
            f = open(fname, 'w')
            for file in inputList:
                f.write(file+'\n')
            f.close()

    This function is defined above and outside of if __name__=='__main__':. I've tried moving InfraPy around in relation to the script. The most baffling situation is that when InfraPy is in the same folder as the script, and I import using from InfraPy import listToText, I receive this error: NameError: name listToText is not defined. Again, the other functions import fine; they are all defined outside of if __name__=='__main__': in InfraPy.

    Read the article

  • SharePoint Designer is replacing French characters with &#65533;

    - by chris
    First of all, I'm not a web designer, I'm a programmer, so I'm working a bit outside my knowledge area. However, as the person in my office who has some working knowledge of French, I'm stuck with this issue.

    The problem: SharePoint Designer is replacing all French accented characters with the &#65533; (square box or diamond-? �) character. It doesn't appear to matter if I enter the 'é' character as alt-130 (in either design or source view) or as &eacute;. Everything works fine when editing, but when the file is saved and loaded into a browser, it replaces the characters. When reloading into Designer, the file shows the 65533 symbol.

    EDIT: More info. I use &#233; and save, then close SharePoint Designer. Reloading SharePoint Designer will show the é (instead of the code) in source. The next reload will have replaced it with &#65533;.

    Question 1: (more important) HOW DO I STOP THIS!?
    Question 2: (more interesting) Why does this happen?

    The charset is iso-8859-1.

    Read the article

  • Problem: Sorting for GridView/ObjectDataSource changes depending on page

    - by user148298
    I have a GridView tied to an ObjectDataSource using paging. The paging works fine, except that the sort order changes depending on which page of the results is being viewed. This causes items to reappear on subsequent pages, among other issues. I traced the problem to my DAL, which reads a page at a time and then sorts it. Obviously the sorting is going to change as the result set size changes. Is there an improvement to this algorithm? I would like to use a DataReader if possible:

        [System.ComponentModel.DataObjectMethod(System.ComponentModel.DataObjectMethodType.Select)]
        public static WordsCollection LoadForCriteria(string sqlCriteria, int maximumRows, int startRowIndex, string sortExpression)
        {
            //DEFAULT SORT EXPRESSION
            if (string.IsNullOrEmpty(sortExpression))
                sortExpression = "OrderBy";

            //CREATE THE DYNAMIC SQL TO LOAD OBJECT
            StringBuilder selectQuery = new StringBuilder();
            selectQuery.Append("SELECT");
            if (maximumRows > 0)
                selectQuery.Append(" TOP " + (startRowIndex + maximumRows).ToString());
            selectQuery.Append(" " + Words.GetColumnNames(string.Empty));
            selectQuery.Append(" FROM sw_Words");
            string whereClause = string.IsNullOrEmpty(sqlCriteria) ? string.Empty : " WHERE " + sqlCriteria;
            selectQuery.Append(whereClause);
            selectQuery.Append(" ORDER BY " + sortExpression);
            Database database = Token.Instance.Database;
            DbCommand selectCommand = database.GetSqlStringCommand(selectQuery.ToString());

            //EXECUTE THE COMMAND
            WordsCollection results = new WordsCollection();
            int thisIndex = 0;
            int rowCount = 0;
            using (IDataReader dr = database.ExecuteReader(selectCommand))
            {
                while (dr.Read() && ((maximumRows < 1) || (rowCount < maximumRows)))
                {
                    if (thisIndex >= startRowIndex)
                    {
                        Words varWords = new Words();
                        Words.LoadDataReader(varWords, dr);
                        results.Add(varWords);
                        rowCount++;
                    }
                    thisIndex++;
                }
                dr.Close();
            }
            return results;
        }
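
    A hedged sketch of the usual fix on SQL Server 2005: sort and page on the server with ROW_NUMBER(), so the ORDER BY is applied to the full result set before the page is cut and the order no longer shifts between pages. Column and table names follow the code above; parameter handling is simplified and the helper calls are assumed to behave as in the original, so treat this as a shape rather than a drop-in replacement.

        // Build one query that sorts everything, numbers the rows, and returns only the page.
        // The outer SELECT * also returns RowNum; LoadDataReader can ignore the extra column
        // or the outer column list can be spelled out explicitly.
        string pagedSql =
            "SELECT * FROM (" +
            "    SELECT " + Words.GetColumnNames(string.Empty) + "," +
            "           ROW_NUMBER() OVER (ORDER BY " + sortExpression + ") AS RowNum" +
            "    FROM sw_Words" + whereClause +
            ") AS numbered " +
            "WHERE RowNum > " + startRowIndex +
            "  AND RowNum <= " + (startRowIndex + maximumRows) +
            " ORDER BY RowNum";

    With this shape the reader only ever sees the requested page, so the skip-and-count loop in the DAL goes away as well.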

    Read the article

  • Google Code Jam 2010 Large DataSets Take Too Long to Submit

    - by Travis
    Hey guys, I'm participating in the 2010 Code Jam and I solved two of the problems for the small data sets, but I'm not even close to solving the large data sets in the 8-minute time frame. I'm wondering if anyone out there has solved the large data set:

    - What hardware were you running on?
    - What language were you running on?
    - What performance tuning techniques did you do on your code to run as fast as possible?

    I'm writing the solutions in Ruby, which is not my day-to-day language, and executing them on my MacBook Pro. My solutions for problem A and problem C are on GitHub at http://github.com/tjboudreaux/codejam2010. I'd appreciate any suggestions that you may have.

    FWIW, I have a lot of experience in C++ from college, my primary language is PHP, and my "sandbox" language is Ruby. Was I just a bit ambitious by taking a shot at this in Ruby, not knowing where the language struggles for performance, or does anyone see anything that's a red flag as to why I can't complete the large dataset in time to submit?

    Read the article

  • How to update a DB table with a DataSet

    - by Paul
    I am a beginner with ADO.NET, and I am trying to update a table with a DataSet. On the client side I have a DataSet with one table. I send this DataSet to the service side (it is an ASP.NET Web Service). On the service side I try to update the table in the database, but it doesn't work.

        public bool Update(DataSet ds)
        {
            SqlConnection conn = null;
            SqlDataAdapter da = null;
            SqlCommand cmd = null;
            try
            {
                string sql = "UPDATE * FROM Tab1";
                string connStr = WebConfigurationManager.ConnectionStrings["Employees"].ConnectionString;
                conn = new SqlConnection(connStr);
                conn.Open();
                cmd = new SqlCommand(sql, conn);
                da = new SqlDataAdapter(sql, conn);
                da.UpdateCommand = cmd;
                da.Update(ds.Tables[0]);
                return true;
            }
            catch (Exception ex)
            {
                throw ex;
            }
            finally
            {
                if (conn != null)
                    conn.Close();
                if (da != null)
                    da.Dispose();
            }
        }

    Where can the problem be?
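
    "UPDATE * FROM Tab1" is not valid SQL, and the adapter needs a real SELECT plus generated update commands to push DataSet changes back. A hedged C# sketch of the usual pattern (the table must have a primary key; the connection string name follows the question, everything else is illustrative):

        using System.Data;
        using System.Data.SqlClient;
        using System.Web.Configuration;

        public bool Update(DataSet ds)
        {
            string connStr = WebConfigurationManager.ConnectionStrings["Employees"].ConnectionString;
            using (SqlConnection conn = new SqlConnection(connStr))
            using (SqlDataAdapter da = new SqlDataAdapter("SELECT * FROM Tab1", conn))
            {
                // Generates the UPDATE/INSERT/DELETE commands from the SELECT's schema.
                SqlCommandBuilder builder = new SqlCommandBuilder(da);

                // Pushes rows whose RowState is Modified/Added/Deleted back to the database;
                // the adapter opens and closes the connection itself.
                da.Update(ds.Tables[0]);
                return true;
            }
        }

    Note that Update only sends rows whose RowState has changed, so the DataSet arriving from the client must still carry its change tracking (i.e. AcceptChanges must not have been called before the call).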

    Read the article

  • How to return a recordset from a function

    - by Scott
    I'm building a data access layer in Excel VBA and having trouble returning a recordset. The Execute() function in my class is definitely retrieving a row from the database, but doesn't seem to be returning anything.

    The following function is contained in a class called DataAccessLayer. The class contains functions Connect and Disconnect which handle opening and closing the connection.

        Public Function Execute(ByVal sqlQuery as String) As ADODB.recordset
            Set recordset = New ADODB.recordset
            Dim recordsAffected As Long

            ' Make sure we are connected to the database.
            If Connect Then
                Set command = New ADODB.command
                With command
                    .ActiveConnection = connection
                    .CommandText = sqlQuery
                    .CommandType = adCmdText
                End With

                ' These seem to be equivalent.
                'Set recordset = command.Execute(recordsAffected)
                recordset.Open command.Execute(recordsAffected)

                Set Execute = recordset
                recordset.ActiveConnection = Nothing
                recordset.Close
                Set command = Nothing
                Call Disconnect
            End If
            Set recordset = Nothing
        End Function

    Here's a public function that I'm using in cell A1 of my spreadsheet for testing:

        Public Function Scott_Test()
            Dim Database As New DataAccessLayer
            'Dim rs As ADODB.recordset
            'Set rs = CreateObject("ADODB.Recordset")
            Set rs = New ADODB.recordset

            Set rs = Database.Execute("SELECT item_desc_1 FROM imitmidx_sql WHERE item_no = '11001'")
            'rs.Open Database.Execute("SELECT item_desc_1 FROM imitmidx_sql WHERE item_no = '11001'")
            'rs.Open

            ' This never displays.
            MsgBox rs.EOF

            If Not rs.EOF Then
                ' This is displaying #VALUE! in cell A1.
                Scott_Test = rs!item_desc_1
            End If

            rs.ActiveConnection = Nothing
            Set rs = Nothing
        End Function

    What am I doing wrong?

    Read the article

  • Fast JSON serialization (and comparison with Pickle) for cluster computing in Python?

    - by user248237
    I have a set of data points, each described by a dictionary. The processing of each data point is independent, and I submit each one as a separate job to a cluster. Each data point has a unique name, and my cluster submission wrapper simply calls a script that takes a data point's name and a file describing all the data points. That script then accesses the data point from the file and performs the computation.

    Since each job has to load the set of all points only to retrieve the point to be run, I wanted to optimize this step by serializing the file describing the set of points into an easily retrievable format. I tried using jsonpickle, with the following method, to serialize a dictionary describing all the data points to file:

        def json_serialize(obj, filename, use_jsonpickle=True):
            f = open(filename, 'w')
            if use_jsonpickle:
                import jsonpickle
                json_obj = jsonpickle.encode(obj)
                f.write(json_obj)
            else:
                simplejson.dump(obj, f, indent=1)
            f.close()

    The dictionary contains very simple objects (lists, strings, floats, etc.) and has a total of 54,000 keys. The JSON file is ~20 megabytes in size. It takes ~20 seconds to load this file into memory, which seems very slow to me. I switched to using pickle with the same exact object, and found that it generates a file that's about 7.8 megabytes in size and can be loaded in ~1-2 seconds. This is a significant improvement, but it still seems like loading of a small object (less than 100,000 entries) should be faster. Aside from that, pickle is not human readable, which was the big advantage of JSON for me.

    Is there a way to use JSON to get similar or better speedups? If not, do you have other ideas on structuring this? (Is the right solution to simply "slice" the file describing each event into a separate file and pass that on to the script that runs a data point in a cluster job? It seems like that could lead to a proliferation of files.) Thanks.

    Read the article
