Search Results

Search found 15087 results on 604 pages for 'python multithreading'.

Page 224/604 | < Previous Page | 220 221 222 223 224 225 226 227 228 229 230 231  | Next Page >

  • iPhone: how to use performSelector:onThread:withObject:waitUntilDone: method?

    - by Michael Kessler
    Hi all, I am trying to use a separate thread for working with some API. The problem is that I am not able to use the performSelector:onThread:withObject:waitUntilDone: method with a thread that I instantiated for this. My code:

        @interface MyObject : NSObject {
            NSThread *_myThread;
        }
        @property(nonatomic, retain) NSThread *myThread;
        @end

        @implementation MyObject
        @synthesize myThread = _myThread;

        - (NSThread *)myThread {
            if (_myThread == nil) {
                NSThread *myThreadTemp = [[NSThread alloc] init];
                [myThreadTemp start];
                self.myThread = myThreadTemp;
                [myThreadTemp release];
            }
            return _myThread;
        }

        - (id)init {
            if (self = [super init]) {
                [self performSelector:@selector(privateInit:)
                             onThread:[self myThread]
                           withObject:nil
                        waitUntilDone:NO];
            }
            return self;
        }

        - (void)privateInit:(id)object {
            NSLog(@"MyObject - privateInit start");
        }

        - (void)dealloc {
            [_myThread release];
            _myThread = nil;
            [super dealloc];
        }
        @end

    "MyObject - privateInit start" is never printed. What am I missing? I have tried instantiating the thread with a target and selector, and tried waiting for the method to complete (waitUntilDone:YES); nothing helps. UPDATE: I don't need this multithreading to move costly operations off the main thread. In that case I could use performSelectorInBackground, as mentioned in a few answers. The main reason for the separate thread is the need to perform all the actions in the API (TTS by Loquendo) from one single thread, meaning that I have to create an instance of the TTS object and call methods on that object from the same thread every time.
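
    A likely culprit, offered as an observation rather than a verified fix: a thread created with a bare [[NSThread alloc] init] runs no run loop, so performSelector:onThread: has nothing on the target thread to deliver the message to. The "all API calls from one thread" requirement itself is commonly met with a dedicated worker thread draining a queue of calls. A minimal Python sketch of that pattern (TTSEngine is a hypothetical stand-in for the Loquendo object):

        import Queue
        import threading

        class SingleThreadAPI(object):
            """Owns the API object; every call runs on one worker thread."""

            def __init__(self):
                self.calls = Queue.Queue()
                worker = threading.Thread(target=self._run)
                worker.setDaemon(True)
                worker.start()

            def _run(self):
                engine = TTSEngine()         # hypothetical; built on the worker thread
                while True:
                    call = self.calls.get()  # block until a call is queued
                    call(engine)             # always executes on this one thread

            def say(self, text):
                self.calls.put(lambda engine: engine.speak(text))

    Callers on any thread use say(), and the engine only ever sees its owning thread.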

    Read the article

  • Algorithm design: "randomising" a timetable schedule in Python (although open to other languages)

    - by S1syphus
    Before I start, I should add that I am a musician and not a native programmer; this was undertaken to make my life easier. Here is the situation: at work I'm given a new CSV file which contains a list of sound files, their lengths, and the minimum total amount of time each must be played. From this file I create a playlist of exactly 60 minutes. Each sample must be played at least its minimum number of times, with the instances spread out from each other, so that no sound is ever played twice in a row or in close proximity to itself. Secondly, if the minimum instances of every sound have been used and there is still time left within the 60 minutes, the remaining time must be filled with more sounds until 60 minutes is reached, while still adhering to the above. The smallest duration possible is 15 seconds, and all durations are multiples of 15 seconds. Here is what I came up with in Python and the problems I'm having with it; as one user said, it is buggy due to the way it uses the random library. So I'm guessing a total rethink is on the table, and this is where I need your help. What is the best way to solve this? I have had a brief look at things like knapsack and bin-packing algorithms; while both are relevant, neither is appropriate, and they may be a bit beyond me.
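
    One workable shape for the rethink, sketched below under stated assumptions: the input is a dict of {name: (duration_seconds, min_plays)} (a hypothetical layout), the padding overshoots the hour slightly rather than hitting it exactly, and the even-slots-then-odd-slots rearrangement keeps identical sounds apart only while no single sound makes up more than half the playlist.

        import random
        from collections import Counter

        def build_playlist(samples, total_seconds=3600):
            plays = []
            for name, (duration, min_plays) in samples.items():
                plays.extend([name] * min_plays)       # mandatory plays first
            used = sum(samples[name][0] for name in plays)
            names = list(samples)
            while used < total_seconds:                # pad to (just past) the hour
                name = random.choice(names)
                plays.append(name)
                used += samples[name][0]
            # Spread duplicates apart: rank names by play count, then fill
            # the even slots first and the odd slots second.
            ranked = [name for name, count in Counter(plays).most_common()
                           for _ in xrange(count)]
            slots = [None] * len(ranked)
            half = len(slots[::2])
            slots[::2] = ranked[:half]
            slots[1::2] = ranked[half:]
            return slots

        print build_playlist({'rain': (60, 10), 'wind': (45, 8), 'birds': (30, 5)})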

    Read the article

  • How do I create a named temporary file on windows in Python?

    - by Chris B.
    I've got a Python program that needs to create a named temporary file which will be opened and closed a couple of times over the course of the program, and should be deleted when the program exits. Unfortunately, none of the options in tempfile seem to work:

    - TemporaryFile doesn't have a visible name.
    - NamedTemporaryFile creates a file-like object, but I just need a filename. I've tried closing the object it returns (after setting delete = False), but I get stream errors when I try to open the file later.
    - SpooledTemporaryFile doesn't have a visible name.
    - mkstemp returns both the open file object and the name, but doesn't guarantee the file is deleted when the program exits.
    - mktemp returns the filename, but doesn't guarantee the file is deleted when the program exits.

    I've tried using mktemp within a context manager, like so:

        def get_temp_file(suffix):
            class TempFile(object):
                def __init__(self):
                    self.name = tempfile.mktemp(suffix='.test')

                def __enter__(self):
                    return self

                def __exit__(self, ex_type, ex_value, ex_tb):
                    if os.path.exists(self.name):
                        try:
                            os.remove(self.name)
                        except:
                            print sys.exc_info()

            return TempFile()

    ...but that gives me a WindowsError(32, 'The process cannot access the file because it is being used by another process'). The filename is used by a process my program spawns, and even though I ensure that process finishes before I exit, it seems to have a race condition out of my control. What's the best way of dealing with this? (I don't need to worry about security here; this is part of a testing module, so the most someone nefarious could do is cause our unit tests to spuriously fail. The horror!)
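
    One pattern worth trying, sketched under the assumption that only the *name* needs to outlive each open/close cycle: let mkstemp create the file safely, close your own handle at once (Windows refuses to let another process open a file you hold open), and register deletion with atexit. It cannot help if the spawned process still holds the file when cleanup runs, so that process must genuinely be gone first.

        import atexit
        import os
        import tempfile

        def make_named_tempfile(suffix=''):
            fd, path = tempfile.mkstemp(suffix=suffix)
            os.close(fd)                 # release our handle immediately
            def cleanup(path=path):
                try:
                    os.remove(path)      # best effort at interpreter exit
                except OSError:
                    pass
            atexit.register(cleanup)
            return path

        name = make_named_tempfile(suffix='.test')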

    Read the article

  • In Python epoll can I avoid the errno.EWOULDBLOCK, errno.EAGAIN ?

    - by davyzhang
    I wrote an epoll wrapper in Python. It works fine, but recently I found that the performance is not ideal when sending large packets. I looked into the code and found there are actually a LOT of these errors:

        Traceback (most recent call last):
          File "/Users/dawn/Documents/workspace/work/dev/server/sandbox/single_point/tcp_epoll.py", line 231, in send_now
            num_bytes = self.sock.send(self.response)
        error: [Errno 35] Resource temporarily unavailable

    Previously I silenced this error, as the documentation suggests, so my sending function was written this way:

        def send_now(self):
            '''send message at once'''
            st = time.time()
            times = 0
            while self.response != '':
                try:
                    num_bytes = self.sock.send(self.response)
                    l.info('msg wrote %s %d : %r size %r', self.ip, self.port,
                           self.response[:num_bytes], num_bytes)
                    self.response = self.response[num_bytes:]
                except socket.error, e:
                    if e[0] in (errno.EWOULDBLOCK, errno.EAGAIN):
                        # here I printed it, but I silence it in normal days
                        #print 'would block, again %r', tb.format_exc()
                        break
                    else:
                        l.warning('%r %r socket error %r', self.ip, self.port,
                                  tb.format_exc())
                        break  # must break or it causes a dead loop
                except:
                    # other exceptions
                    l.warning('%r %r msg write error %r', self.ip, self.port,
                              tb.format_exc())
                    break
                times += 1
            et = time.time()

    I googled it, and it is said to be caused by the network send buffer temporarily running out. So how can I detect this condition manually and efficiently, instead of letting it reach the exception handler? It costs too much time to raise and handle the exception.
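
    The exception cannot be avoided entirely on a non-blocking socket, but it can be made rare: only call send() when epoll has reported the socket writable, instead of writing until the buffer overflows. A sketch of the idea with illustrative names (conn is whatever object holds self.response; select.epoll is Linux-only, but the same idea applies to kqueue or select elsewhere):

        import errno
        import select
        import socket

        epoll = select.epoll()

        def want_write(conn):
            # ask the kernel to report when the send buffer has room
            epoll.modify(conn.sock.fileno(), select.EPOLLIN | select.EPOLLOUT)

        def on_writable(conn):
            # called from the event loop when EPOLLOUT fires
            while conn.response:
                try:
                    sent = conn.sock.send(conn.response)
                except socket.error, e:
                    if e.errno in (errno.EWOULDBLOCK, errno.EAGAIN):
                        return   # buffer filled again; wait for the next EPOLLOUT
                    raise
                conn.response = conn.response[sent:]
            # nothing left to write: stop asking for write events
            epoll.modify(conn.sock.fileno(), select.EPOLLIN)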

    Read the article

  • C# XML-RPC multithreaded newPost: something is not right

    - by user1047463
    I recently decided to get my feet wet with C# multithreading, and my progress was good until now. My program uses XML-RPC to create new posts on WordPress. When I run my app against a localhost install, everything works fine and I am able to create new posts using the multi-threaded approach. However, when I try the same with WordPress installed on a remote server, the new posts won't appear: when I go to see all posts, the total number of posts remains unchanged. Another thing to note is that the function that creates a new post does return the ID of a supposedly created post, but as I mentioned, the new posts are not visible when I look for them in the admin menu. Since I'm new to multithreading, I was hoping some of you can help me check whether I'm doing anything wrong. Here is the code that creates the threads and calls the create-post function:

        foreach (DataRow row in dt.Rows)
        {
            Thread t = null;
            t = new Thread(createPost);
            lock (t)
            {
                t.Start();
            }
        }

        public void createPost()
        {
            string postid;
            try
            {
                icp = (IcreatePost)XmlRpcProxyGen.Create(typeof(IcreatePost));
                clientProtocol = (XmlRpcClientProtocol)icp;
                clientProtocol.Url = xmlRpcUrl;
                postid = icp.NewPost(1, blogLogin, blogPass, newBlogPost, 1);
                currentThread--;
                threadCountLabel.Invoke(new MethodInvoker(delegate
                {
                    threadCountLabel.Text = currentThread.ToString();
                }));
            }
            catch (Exception ex)
            {
                MessageBox.Show("createPost ERROR -> " + ex.Message);
            }
        }

    Read the article

  • Maintaining a Python web application: heavier vs lighter framework?

    - by Tiberiu Ana
    Five-plus years from now, you are hired to support and extend a data-centric web application written in Python that hasn't been kept up to date. Would you rather it had been written in what was, at the time, the current version of Django/Pylons, using the available standard components, or kept minimal with something like CherryPy/web.py and a few library dependencies?

    Heavy framework. Advantages: a standard approach to application design and structure, as encouraged by the framework; less application code to worry about. Disadvantages: requires learning the framework to understand how things work; broken things in an old version of the framework are difficult to fix; upgrading to a new version is potentially difficult due to changing APIs; finding relevant documentation/help is potentially difficult for the same reason.

    Light framework. Advantages: most application code is directly "visible"; only the needed features are implemented; the architecture should be simpler to understand; there are fewer external dependencies to upgrade, and those that remain should be easier to upgrade. Disadvantages: some reinventing of the wheel; a non-standard design and structure (with the associated unique issues and bugs).

    I will update the list with any helpful answers.

    Read the article

  • Does this use of Monitor.Wait/Pulse have a race condition?

    - by jw
    I have a simple producer/consumer scenario where there is only ever a single item being produced/consumed. Also, the producer waits for the worker thread to finish before continuing. I realize that kind of obviates the whole point of multithreading, but please just assume it really needs to be this way (: This code doesn't compile, but I hope you get the idea:

        // m_data is initially null

        // This could be called by any number of producer threads simultaneously
        void SetData(object foo)
        {
            lock(x)                      // Line A
            {
                assert(m_data == null);
                m_data = foo;
                Monitor.Pulse(x);        // Line B
                while(m_data != null)
                    Monitor.Wait(x);     // Line C
            }
        }

        // This is only ever called by a single worker thread
        void UseData()
        {
            lock(x)                      // Line D
            {
                while(m_data == null)
                    Monitor.Wait(x);     // Line E
                // here, do something with m_data
                m_data = null;
                Monitor.Pulse(x);        // Line F
            }
        }

    Here is the situation I am not sure about. Suppose many threads call SetData() with different inputs. Only one of them will get inside the lock, and the rest will be blocked on Line A. Suppose the one that got inside the lock sets m_data and makes its way to Line C. Question: could the Wait() on Line C allow another thread at Line A to obtain the lock and overwrite m_data before the worker thread even gets to it? Supposing that doesn't happen, and the worker thread processes the original m_data and eventually makes its way to Line F: what happens when that Pulse() goes off? Will only the thread waiting on Line C be able to get the lock, or will it be competing with all the other threads waiting on Line A as well? Essentially, I want to know whether Pulse()/Wait() communicate with each other specially "under the hood" or whether they are on the same level as lock(). The solution to these problems, if they exist, is obvious of course: just surround SetData() with another lock, say lock(y). I'm just curious whether it's even an issue to begin with.
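
    The same handoff can be written with Python's threading.Condition, which has the Monitor semantics the question is really about: wait() releases the lock while sleeping and re-acquires it before returning, and woken waiters compete for the lock with ordinary acquirers, which is exactly why each guard is re-tested in a while loop. A sketch:

        import threading

        cond = threading.Condition()
        data = [None]                      # one-slot buffer, like m_data

        def set_data(value):               # any number of producers
            with cond:
                while data[0] is not None: # slot busy: lost the race, re-check
                    cond.wait()
                data[0] = value
                cond.notify_all()
                while data[0] is not None: # wait until the worker consumed it
                    cond.wait()

        def use_data():                    # the single worker
            with cond:
                while data[0] is None:
                    cond.wait()
                print 'consumed', data[0]
                data[0] = None
                cond.notify_all()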

    Read the article

  • Thread implemented as a Singleton

    - by rocknroll
    Hi all, I have a commercial application made with C, C++/Qt on the Linux platform. The app collects data from different sensors and displays it on a GUI. Each protocol for interfacing with the sensors is implemented using the singleton pattern and threads from Qt's QThread class. All the protocols except one work fine. Each protocol's thread run function has the following structure:

        void <ProtocolClassName>::run()
        {
            while(!mStop)  // check whether the screen has been closed
            {
                mutex.lock();
                while(!waitcondition.wait(&mutex, 5))
                {
                    if(mStop)
                        return;  // note: returns without unlocking the mutex
                }
                // code for receiving and processing incoming data
                mutex.unlock();
            }
        }

    Hierarchy of the GUI: 1. login screen; 2. screen of action. When a user logs in from the login screen, we enter the screen of action, where all data is displayed and all the threads for the different sensors start. In idle time they wait on the mStop variable, and when data arrives they jump to receiving and processing it. Incoming data for the problem protocol is 117 bytes. In the main GUI thread there are timers which, on timeout, grab the running instance of the protocol using the <ProtocolName>::instance() function, check whether the update variable of the singleton class is true, and if so display the data. When the data display is done, they reset the update variable in the singleton class to false. The problematic protocol has an update time of 1 second, which is also the frame rate of the protocol. When I comment out the display function it runs fine, but when display is activated the application hangs consistently after 6-7 hours. I have asked this question on many forums but haven't received any worthwhile suggestions; I hope that here I will get some help. Also, I have read a lot of literature on singletons and multithreading, and found that people always discourage the use of singletons, especially in C++, but in my application I can think of no other design. Thanks in advance, A Hapless Programmer

    Read the article

  • Yet another Python Windows CMD mklink problem ... can't get it to work!

    - by Felix Dombek
    OK, I have just posted another question which outlined my program, but the specific problem was different. Now my program just stops working without any message whatsoever. I'd be grateful if someone could help me here. I want to create symlinks for each file in a directory structure, all in one large flat folder, and have the following code by now:

        # Loop over the directory structure: for all items in the current
        # directory, if the item is a directory, recurse into it;
        # otherwise it's a file, so create a symlink for it.
        def makelinks(folder, targetfolder, cmdprocess=None):
            if not cmdprocess:
                cmdprocess = subprocess.Popen("cmd",
                                              stdin=subprocess.PIPE,
                                              stdout=subprocess.PIPE,
                                              stderr=subprocess.PIPE)
            print(folder)
            for name in os.listdir(folder):
                fullname = os.path.join(folder, name)
                if os.path.isdir(fullname):
                    makelinks(fullname, targetfolder, cmdprocess)
                else:
                    makelink(fullname, targetfolder, cmdprocess)

        # For a given file, create one symlink in the target folder.
        def makelink(fullname, targetfolder, cmdprocess):
            linkname = os.path.join(targetfolder,
                                    re.sub(r"[\/\\\:\*\?\"\<\>\|]", "-", fullname))
            if not os.path.exists(linkname):
                try:
                    os.remove(linkname)
                    print("Invalid symlink removed:", linkname)
                except:
                    pass
            if not os.path.exists(linkname):
                cmdprocess.stdin.write("mklink " + linkname + " " + fullname + "\r\n")

    So this is a top-down recursion where first the folder name is printed, then the subdirectories are processed. If I run this over some folder, the whole thing just stops after 10 or so symbolic links. Here is the output:

        D:\Musik\neu
        D:\Musik\neu\# Electronic
        D:\Musik\neu\# Electronic\# tag & reencode
        D:\Musik\neu\# Electronic\# tag & reencode\ChillOutMix
        D:\Musik\neu\# Electronic\# tag & reencode\Unknown D&B
        D:\Musik\neu\# Electronic\# tag & reencode\Unknown D&B 2

    The program still seems to run, but no new output is generated. It created 9 symlinks for some files in the # tag & reencode folder and for the first three files in the ChillOutMix folder. The cmd.exe window is still open and empty, and its title bar shows that it is currently processing the mklink command for the third file in ChillOutMix. I tried inserting a time.sleep(2) after each cmdprocess.stdin.write in case Python is just too fast for the cmd process, but it doesn't help. Does anyone know what the problem might be?
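
    That symptom (a hang after roughly ten links, cmd.exe stuck mid-mklink) looks like the classic full-pipe deadlock: the script opens cmd with stdout=PIPE and stderr=PIPE but never reads either pipe, so once mklink has printed a few kilobytes of "symbolic link created" messages the pipe buffer fills, cmd blocks on its write, and both processes wait on each other forever. A hedged way out, sketched below: run one short-lived cmd per link so no output can accumulate (or alternatively keep the long-lived cmd but read its output).

        import subprocess

        def makelink(linkname, fullname):
            # /c runs the single built-in command and exits;
            # check_call raises CalledProcessError if mklink fails
            subprocess.check_call(['cmd', '/c', 'mklink', linkname, fullname])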

    Read the article

  • How is it that json serialization is so much faster than yaml serialization in python?

    - by guidoism
    I have code that relies heavily on YAML for cross-language serialization, and while working on speeding some stuff up I noticed that YAML was insanely slow compared to other serialization methods (e.g., pickle, json). So what really blows my mind is that json is so much faster than yaml when the output is nearly identical.

        >>> import yaml, cjson; d={'foo': {'bar': 1}}
        >>> yaml.dump(d, Dumper=yaml.SafeDumper)
        'foo: {bar: 1}\n'
        >>> cjson.encode(d)
        '{"foo": {"bar": 1}}'
        >>> import yaml, cjson;
        >>> timeit("yaml.dump(d, Dumper=yaml.SafeDumper)", setup="import yaml; d={'foo': {'bar': 1}}", number=10000)
        44.506911039352417
        >>> timeit("yaml.dump(d, Dumper=yaml.CSafeDumper)", setup="import yaml; d={'foo': {'bar': 1}}", number=10000)
        16.852826118469238
        >>> timeit("cjson.encode(d)", setup="import cjson; d={'foo': {'bar': 1}}", number=10000)
        0.073784112930297852

    PyYaml's CSafeDumper and cjson are both written in C, so this is not a C-vs-Python speed issue. I've even added some random data to see whether cjson is doing any caching, but it's still way faster than PyYaml. I realize that YAML is a superset of JSON, but how could the YAML serializer be two orders of magnitude slower with such simple input?

    Read the article

  • Atomic Instructions and Variable Update visibility

    - by dsimcha
    On most common platforms (the most important being x86; I understand that some platforms have extremely difficult memory models that provide almost no guarantees useful for multithreading, but I don't care about rare counter-examples), is the following code safe?

        Thread 1:
            someVariable = doStuff();
            atomicSet(stuffDoneFlag, 1);

        Thread 2:
            while(!atomicRead(stuffDoneFlag)) {}  // Wait for stuffDoneFlag to be set.
            doMoreStuff(someVariable);

    Assuming standard, reasonable implementations of atomic ops: is Thread 1's assignment to someVariable guaranteed to complete before atomicSet() is called? Is Thread 2 guaranteed to see the assignment to someVariable before calling doMoreStuff(), provided it reads stuffDoneFlag atomically? Edits: the implementation of atomic ops I'm using contains the x86 LOCK instruction in each operation, if that helps. Assume stuffDoneFlag is properly cleared somehow; how isn't important. This is a very simplified example. I created it this way so that you wouldn't have to understand the whole context of the problem to answer it. I know it's not efficient.

    Read the article

  • How to implement a python REPL that nicely handles asynchronous output?

    - by andy
    I have a Python-based app that can accept a few commands in a simple read-eval-print loop. I'm using raw_input('> ') to get the input. On Unix-based systems I also import readline to make things behave a little better. All this is working fine. The problem is that there are asynchronous events coming in, and I'd like to print output as soon as they happen. Unfortunately, this makes things look ugly: the '> ' prompt doesn't show up again after the output, and if the user is halfway through typing something, it chops their text in half. It should probably redraw the user's text-in-progress after printing something. This seems like it must be a solved problem. What's the proper way to do this? Also note that some of my users are Windows-based. TIA. Edit: the accepted answer works under Unixy platforms (when the readline module is available), but if anyone knows how to make this work under Windows, it would be much appreciated!
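
    On the Unix/readline side, the usual trick is a print helper that erases the in-progress input line, writes the asynchronous message, and then redraws the prompt plus whatever the user had typed (readline keeps that text in get_line_buffer()). A sketch, assuming an ANSI-capable terminal; it will not help on a plain Windows console:

        import sys
        import readline

        PROMPT = '> '

        def async_print(message):
            pending = readline.get_line_buffer()
            sys.stdout.write('\r\x1b[K')        # column 0, then erase the line
            sys.stdout.write(message + '\n')
            sys.stdout.write(PROMPT + pending)  # redraw prompt and typed text
            sys.stdout.flush()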

    Read the article

  • Convert array to CSV/TSV-formatted string in Python.

    - by dreeves
    Python provides csv.DictWriter for outputting CSV to a file. What is the simplest way to output CSV to a string or to stdout? For example, given a 2D array like this:

        [["a b c", "1,2,3"], ["i \"comma-heart\" you", "i \",heart\" u, too"]]

    return the following string:

        "a b c,\"1,2,3\"\n\"i \"\"comma-heart\"\" you\",\"i \"\",heart\"\" u, too\""

    which when printed would look like this:

        a b c,"1,2,3"
        "i ""comma-heart"" you","i "",heart"" u, too"

    (I'm taking csv.DictWriter's word for it that this is in fact the canonical way to output that array as CSV. Excel does parse it correctly that way, though Mathematica does not; from a quick look at the Wikipedia page on CSV, it seems Mathematica is wrong.) One way would be to write to a temp file with csv.DictWriter and read it back with csv.DictReader. What's a better way?

    TSV instead of CSV: it also occurs to me that I'm not wedded to CSV. TSV would make a lot of the headaches with delimiters and quotes go away: just replace tabs with spaces in the entries of the 2D array, then intersperse tabs and newlines and you're done. Let's include solutions for both TSV and CSV in the answers to make this as useful as possible for future searchers.
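
    A sketch of both: the csv module writes to any file-like object, so a StringIO (or sys.stdout) yields the string directly with no temp file, and the TSV variant is just nested join calls.

        import csv
        import StringIO

        rows = [["a b c", "1,2,3"],
                ['i "comma-heart" you', 'i ",heart" u, too']]

        # CSV to a string
        buf = StringIO.StringIO()
        csv.writer(buf).writerows(rows)
        print buf.getvalue()

        # TSV, after squashing any tabs inside the entries
        print '\n'.join('\t'.join(cell.replace('\t', ' ') for cell in row)
                        for row in rows)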

    Read the article

  • Elegant way to take basename of directory in Python?

    - by user248237
    I have several scripts that take a directory name as input, and my program creates files in those directories. Sometimes I want to take the basename of a directory given to the program and use it to name various files in that directory. For example:

        # directory name given by user via the command line
        output_dir = "..."  # obtained by OptParser, for example
        my_filename = output_dir + '/' + os.path.basename(output_dir) + '.my_program_output'
        # write stuff to my_filename

    The problem is that if the user gives a directory name with a trailing slash, then os.path.basename will return the empty string, which is not what I want. What is the most elegant way to deal with these trailing-slash issues in Python? I know I can manually check for a slash at the end of output_dir and remove it if it's there, but it seems like there should be a better way. Is there? Also, is it OK to manually add '/' characters, e.g. output_dir + '/' + os.path.basename(output_dir), or is there a more generic way to build up paths? Thanks.
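
    A sketch of the usual idiom: os.path.normpath strips any trailing slash, so basename no longer comes back empty, and os.path.join removes the need to glue '/' by hand.

        import os

        output_dir = '/results/run42/'   # hypothetical input with a trailing slash
        base = os.path.basename(os.path.normpath(output_dir))          # 'run42'
        my_filename = os.path.join(output_dir, base + '.my_program_output')
        print my_filename                # /results/run42/run42.my_program_output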

    Read the article

  • What is the best way to translate this recursive python method into Java?

    - by Simucal
    In another question I was provided with a great answer involving generating certain sets for the Chinese Postman Problem. The answer provided was:

        def get_pairs(s):
            if not s:
                yield []
            else:
                i = min(s)
                for j in s - set([i]):
                    for r in get_pairs(s - set([i, j])):
                        yield [(i, j)] + r

        for x in get_pairs(set([1, 2, 3, 4, 5, 6])):
            print x

    This will output the desired result of:

        [(1, 2), (3, 4), (5, 6)]
        [(1, 2), (3, 5), (4, 6)]
        [(1, 2), (3, 6), (4, 5)]
        [(1, 3), (2, 4), (5, 6)]
        [(1, 3), (2, 5), (4, 6)]
        [(1, 3), (2, 6), (4, 5)]
        [(1, 4), (2, 3), (5, 6)]
        [(1, 4), (2, 5), (3, 6)]
        [(1, 4), (2, 6), (3, 5)]
        [(1, 5), (2, 3), (4, 6)]
        [(1, 5), (2, 4), (3, 6)]
        [(1, 5), (2, 6), (3, 4)]
        [(1, 6), (2, 3), (4, 5)]
        [(1, 6), (2, 4), (3, 5)]
        [(1, 6), (2, 5), (3, 4)]

    This really shows off the expressiveness of Python, because this is almost exactly how I would write the pseudo-code for the algorithm. I especially like the usage of yield and the way that sets are treated as first-class citizens. However, therein lies my problem. What would be the best way to:

    1. Duplicate the functionality of the yield construct in Java? Would it be best to maintain a list and append my partial results to it? How would you handle the yield keyword?
    2. Handle dealing with the sets? I know I could use one of the Java collections that implements the Set interface and then use things like removeAll() to give me a set difference. Is this what you would do in that case?

    Ultimately, I'm looking to reduce this method to as concise and straightforward a form as possible in Java. I'm thinking the return type of the Java version will likely be a list of int arrays or something similar. How would you handle the situations above when converting this method to Java?

    Read the article

  • Where is my python script spending time? Is there "missing time" in my cprofile / pstats trace?

    - by fmark
    I am attempting to profile a long-running Python script. The script does some spatial analysis on a raster GIS data set using the gdal module. The script currently uses three files: the main script, find_pixel_pairs.py, which loops over the raster pixels; a simple cache in lrucache.py; and some miscellaneous classes in utils.py. I have profiled the code on a moderately sized dataset. pstats returns:

        p.sort_stats('cumulative').print_stats(20)
        Thu May  6 19:16:50 2010    phes.profile

        355483738 function calls in 11644.421 CPU seconds

        Ordered by: cumulative time
        List reduced from 86 to 20 due to restriction <20>

        ncalls     tottime   percall   cumtime    percall    filename:lineno(function)
        1          0.008     0.008     11644.421  11644.421  <string>:1(<module>)
        1          11064.926 11064.926 11644.413  11644.413  find_pixel_pairs.py:49(phes)
        340135349  544.143   0.000     572.481    0.000      utils.py:173(extent_iterator)
        8831020    18.492    0.000     18.492     0.000      {range}
        231922     3.414     0.000     8.128      0.000      utils.py:152(get_block_in_bands)
        142739     1.303     0.000     4.173      0.000      utils.py:97(search_extent_rect)
        745181     1.936     0.000     2.500      0.000      find_pixel_pairs.py:40(is_no_data)
        285478     1.801     0.000     2.271      0.000      utils.py:98(intify)
        231922     1.198     0.000     2.013      0.000      utils.py:116(block_to_pixel_extent)
        695766     1.990     0.000     1.990      0.000      lrucache.py:42(get)
        1213166    1.265     0.000     1.265      0.000      {min}
        1031737    1.034     0.000     1.034      0.000      {isinstance}
        142740     0.563     0.000     0.909      0.000      utils.py:122(find_block_extent)
        463844     0.611     0.000     0.611      0.000      utils.py:112(block_to_pixel_coord)
        745274     0.565     0.000     0.565      0.000      {method 'append' of 'list' objects}
        285478     0.346     0.000     0.346      0.000      {max}
        285480     0.346     0.000     0.346      0.000      utils.py:109(pixel_coord_to_block_coord)
        324        0.002     0.000     0.188      0.001      utils.py:27(__init__)
        324        0.016     0.000     0.186      0.001      gdal.py:848(ReadAsArray)
        1          0.000     0.000     0.160      0.160      utils.py:50(__init__)

    The top two entries contain the main loop, i.e. the entire analysis. The remaining calls sum to less than 625 of the 11,644 seconds. Where are the remaining 11,000 seconds spent? Is it all within the main loop of find_pixel_pairs.py? If so, can I find out which lines of code are taking most of the time?
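
    One reading of the table, offered tentatively: the seconds are not missing at all. phes() shows a tottime of about 11,065, which is time spent in that function's own bytecode rather than in its callees, and cProfile only attributes time at function granularity. Seeing which lines are responsible takes a line profiler, e.g. the third-party line_profiler package:

        # kernprof injects a @profile builtin at run time and reports the
        # time spent on each source line; run with:
        #   kernprof.py -l -v find_pixel_pairs.py

        @profile
        def phes(dataset):      # illustrative signature
            # ... existing body ...
            pass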

    Read the article

  • multi-threaded proxy checker having problems

    - by Paul
    Hello everyone, I am trying to create a proxy checker. This is my first attempt at multithreading and it's not going so well: the threads seem to wait for one to complete before the next one starts.

        Imports System.Net
        Imports System.IO
        Imports System.Threading

        Public Class Form1
            Public sFileName As String
            Public srFileReader As System.IO.StreamReader
            Public sInputLine As String

            Public Class WebCall
                Public proxy As String
                Public htmlout As String

                Public Sub New(ByVal proxy As String)
                    Me.proxy = proxy
                End Sub

                Public Event ThreadComplete(ByVal htmlout As String)

                Public Sub send()
                    Dim myWebRequest As HttpWebRequest = CType(WebRequest.Create("http://www.myserver.com/ip.php"), HttpWebRequest)
                    myWebRequest.Proxy = New WebProxy(proxy, False)
                    Try
                        Dim myWebResponse As HttpWebResponse = CType(myWebRequest.GetResponse(), HttpWebResponse)
                        Dim loResponseStream As StreamReader = New StreamReader(myWebResponse.GetResponseStream())
                        htmlout = loResponseStream.ReadToEnd()
                        Debug.WriteLine("Finished - " & htmlout)
                        RaiseEvent ThreadComplete(htmlout)
                    Catch ex As WebException
                        If (ex.Status = WebExceptionStatus.ConnectFailure) Then
                        End If
                        Debug.WriteLine("Failed - " & proxy)
                    End Try
                End Sub
            End Class

            Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
                Dim proxy As String
                Dim webArray As New ArrayList()
                Dim n As Integer
                For n = 0 To 2
                    proxy = srFileReader.ReadLine()
                    webArray.Add(New WebCall(proxy))
                Next

                Dim w As WebCall
                For Each w In webArray
                    Threading.ThreadPool.QueueUserWorkItem(New WaitCallback(AddressOf w.send), w)
                Next w
            End Sub

            Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
                srFileReader = System.IO.File.OpenText("proxies.txt")
            End Sub
        End Class
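
    For comparison, the same fan-out written in Python with a thread pool, as a sketch; it reuses the ip.php URL from the code above, and each worker reports as it finishes rather than in submission order.

        import urllib2
        from multiprocessing.pool import ThreadPool

        def check(proxy):
            opener = urllib2.build_opener(urllib2.ProxyHandler({'http': proxy}))
            try:
                body = opener.open('http://www.myserver.com/ip.php', timeout=10).read()
                return proxy, 'Finished - ' + body
            except Exception, e:
                return proxy, 'Failed - %s' % e

        proxies = [line.strip() for line in open('proxies.txt') if line.strip()]
        pool = ThreadPool(10)              # ten checks in flight at once
        for proxy, status in pool.imap_unordered(check, proxies):
            print proxy, status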

    Read the article

  • Return a list of imported Python modules used in a script?

    - by Jono Bacon
    Hi all, I am writing a program that categorizes a list of Python files by which modules they import. As such, I need to scan a collection of .py files and return a list of which modules they import. As an example, if one of the files has the lines:

        import os
        import sys, gtk

    I would like it to return:

        ["os", "sys", "gtk"]

    I played with modulefinder and wrote:

        from modulefinder import ModuleFinder

        finder = ModuleFinder()
        finder.run_script('testscript.py')
        print 'Loaded modules:'
        for name, mod in finder.modules.iteritems():
            print '%s ' % name,

    but this returns more than just the modules used in the script. As an example, for a script which merely has:

        import os
        print os.getenv('USERNAME')

    the modules returned from the ModuleFinder script are:

        tokenize heapq __future__ copy_reg sre_compile _collections cStringIO
        _sre functools random cPickle __builtin__ subprocess cmd gc __main__
        operator array select _heapq _threading_local abc _bisect posixpath
        _random os2emxpath tempfile errno pprint binascii token sre_constants
        re _abcoll collections ntpath threading opcode _struct _warnings math
        shlex fcntl genericpath stat string warnings UserDict inspect repr
        struct sys pwd imp getopt readline copy bdb types strop _functools
        keyword thread StringIO bisect pickle signal traceback difflib marshal
        linecache itertools dummy_thread posix doctest unittest time sre_parse
        os pdb dis

    ...whereas I just want it to return 'os', as that was the module used in the script. Can anyone help me achieve this?
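
    modulefinder follows the entire import graph by design, which is why a bare import os drags in dozens of modules. To report only what a file itself names, parse the source without executing it, for instance with the ast module (Python 2.6+); a sketch:

        import ast

        def imported_modules(path):
            tree = ast.parse(open(path).read(), filename=path)
            found = []
            for node in ast.walk(tree):
                if isinstance(node, ast.Import):
                    found.extend(alias.name for alias in node.names)
                elif isinstance(node, ast.ImportFrom) and node.module:
                    found.append(node.module)
            return found

        print imported_modules('testscript.py')   # e.g. ['os', 'sys', 'gtk']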

    Read the article

  • C# Multithreaded Domain Design

    - by Thijs Cramer
    Let's say I have a domain model that I'm trying to make compatible with multithreading. The prototype domain is a game domain that consists of Space, SpaceObject, and Location objects. A SpaceObject has the Move method, and Asteroid and Ship extend this object with properties specific to each (Ship has a name and Asteroid has a color). Let's say I want to make the Move method for each object run in a separate thread. That would be stupid, because with 10,000 objects I would have 10,000 threads. What would be the best way to divide the workload between cores/threads? I'm trying to learn the basics of concurrency, and I'm building a small game to prototype a lot of the concepts. What I've already done is build the domain, plus a threading model with a timer that fires events at set intervals. When the event occurs, I want to update my entire model with the new locations of every SpaceObject, but I don't know how and when to launch new threads with workloads when the event occurs. Some people at work told me that you can't update your core domain from multiple threads, because you would have to synchronize everything. But in that case I couldn't run my game on a dual quad-core server, because it would only use one CPU for the hardest tasks. Does anyone know what to do here?
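
    The usual answer, sketched here in Python rather than C#: give each worker a slice of the collection instead of each object a thread. This is only safe while Move touches nothing shared (collision checks or the Space grid still need synchronization, which is what your colleagues were warning about), and in CPython the GIL means a thread pool mostly illustrates the pattern rather than saturating every core.

        from multiprocessing.pool import ThreadPool

        def move_chunk(space_objects):
            for obj in space_objects:   # one worker advances a whole slice
                obj.move()

        def tick(space_objects, workers=4):
            size = max(1, len(space_objects) // workers)
            chunks = [space_objects[i:i + size]
                      for i in xrange(0, len(space_objects), size)]
            pool = ThreadPool(workers)
            pool.map(move_chunk, chunks)   # blocks until every chunk is done
            pool.close()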

    Read the article

  • Setting an Excel Range with an Array using Python and comtypes?

    - by technomalogical
    Using comtypes to drive Excel from Python, it seems some magic behind the scenes is failing to convert tuples and lists to VARIANT types:

        # Range("C14:D21") has values.
        # Setting the Value on the Range with a VARIANT should work, but a
        # list or tuple is not getting converted properly, it seems.
        >>> from comtypes.client import CreateObject
        >>> xl = CreateObject("Excel.application")
        >>> xl.Workbooks.Open(r'C:\temp\my_file.xlsx')
        >>> xl.Visible = True
        >>> vals = tuple([(x, y) for x, y in zip('abcdefgh', xrange(8))])
        # creates:
        # (('a', 0), ('b', 1), ('c', 2), ('d', 3), ('e', 4), ('f', 5), ('g', 6), ('h', 7))
        >>> sheet = xl.Workbooks[1].Sheets["Sheet1"]
        >>> sheet.Range["C14", "D21"].Value()
        (('foo', 1), ('foo', 2), ('foo', 3), ('foo', 4), ('foo', 6), ('foo', 6), ('foo', 7), ('foo', 8))
        >>> sheet.Range["C14", "D21"].Value[()] = vals
        # no error, but this blanks out the cells in the range

    According to the comtypes docs:

        When you pass simple sequences (lists or tuples) as VARIANT parameters,
        the COM server will receive a VARIANT containing a SAFEARRAY of
        VARIANTs with the typecode VT_ARRAY | VT_VARIANT.

    This seems to be in line with what MSDN says about passing an array to a Range's Value. I also found this page showing something similar in C#. Can anybody tell me what I'm doing wrong?

    EDIT: I've come up with a simpler example that performs the same way (in that it does not work):

        >>> from comtypes.client import CreateObject
        >>> xl = CreateObject("Excel.application")
        >>> xl.Workbooks.Add()
        >>> sheet = xl.Workbooks[1].Sheets["Sheet1"]
        # at this point, I manually typed values into the range A1:B3
        >>> sheet.Range("A1", "B3").Value()
        ((u'AAA', 1.0), (u'BBB', 2.0), (u'CCC', 3.0))
        # Using a generator expression, per @Mike's comment...
        >>> sheet.Range("A1", "B3").Value[()] = [(x, y) for x, y in zip('xyz', xrange(3))]
        # ...however, this still blanks out my range :(

    Read the article

  • How do I create a python module from a fortran program with f2py?

    - by Lars Hellemo
    I am trying to read some SMPS files with Python, and found a Fortran implementation, so I thought I would give f2py a shot. The problem is that I have no experience with Fortran. I have successfully installed gfortran and f2py on my Linux box and ran the example on the f2py page, but I am having trouble compiling and running the larger program. There are two files, one with a file-reader wrapper and one with all the logic. They seem to call each other, but when I compile and link, or when I try f2py, I get errors saying they somehow can't find each other:

        f95 -c FILEWR~1.F
        f95 -c SMPSREAD.F90
        f95 -o smpsread SMPSREAD.o FILEWR~1.o
        FILEWR~1.o: In function `file_wrapper_':
        FILEWR~1.F:(.text+0x3d): undefined reference to `chopen_'
        /usr/lib/gcc/i486-linux-gnu/4.4.1/libgfortranbegin.a(fmain.o): In function `main':
        (.text+0x27): undefined reference to `MAIN__'
        collect2: ld returned 1 exit status

    I also tried changing the name to FILE_WRAPPER.F, but that did not help. With f2py I found out I had to include a comment to get it to accept free format, so I saved this as a new file and tried:

        f2py -c -m smpsread smpsread.f90

    I get a lot of output and warnings, but the error seems to be this one:

        getctype: No C-type found in "{'typespec': 'type', 'attrspec': ['allocatable'], 'typename': 'node', 'dimension': [':']}", assuming void.

    The Fortran 90 SMPS reader can be found here. Any help or suggestions appreciated.

    Read the article

  • Mutex example / tutorial?

    - by Nav
    I've noticed that asking questions for the sake of creating a reference list etc. is encouraged on SO. This is one such question, so that anyone Googling for a mutex tutorial will find a good one here. I'm new to multithreading and was trying to understand how mutexes work. I did a lot of Googling, and this is the only decent tutorial I found, but it still left some doubts about how they work, because I created my own program and the locking didn't work. One absolutely non-intuitive piece of mutex syntax is pthread_mutex_lock(&mutex1);, where it looks like the mutex itself is being locked, when what I really want to lock is some other variable. Does this syntax mean that locking a mutex locks a region of code until the mutex is unlocked? Then how do threads know that the region is locked? And isn't such a phenomenon supposed to be called a critical section? In short, could you please help with the simplest possible mutex example program and the simplest possible explanation of the logic of how it works? I'm sure this will help plenty of other newbies.
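
    Since the question asks for the simplest possible program, here is one sketched in Python; the pthread logic is the same. Nothing is physically locked by pthread_mutex_lock(&mutex1): the mutex is a token, and the shared data is protected only because every thread that touches it agrees to acquire that same mutex first. The code between acquire and release is exactly the critical section, and a thread that skips the acquire can still corrupt the data.

        import threading

        counter = 0                      # the shared data we actually care about
        lock = threading.Lock()          # the mutex; "locking" it locks nothing else

        def add_many(n):
            global counter
            for _ in xrange(n):
                with lock:               # acquire ... release = the critical section
                    counter += 1         # only one thread at a time runs this line

        threads = [threading.Thread(target=add_many, args=(100000,))
                   for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print counter                    # 400000; without the lock, usually less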

    Read the article

  • Python/Sqlite program, write as browser app or desktop app?

    - by ChrisC
    I am in the planning stages of rewriting an Access db I wrote several years ago into a full-fledged program. I have very slight experience coding, but not enough to call myself a programmer by far. I'll definitely be learning as I go, so I'd like to keep everything as simple as possible. I've decided on Python and SQLite for my program, but I need help with my next decision. Here is my situation:

    1. It'll be a desktop program, run locally on each machine, all Windows.
    2. I would really like a nice-looking GUI with colors, nice screens, menus, lists, etc.
    3. I'm thinking about using a browser interface because (a) from what I've read, browser apps can look really great, and (b) I understand there are lots of free drag-and-drop tools to assist in setting up the GUI and GUI code, which helps my "keep it simple" goal.
    4. I want the program to be totally portable, so it runs completely from one single folder on a user's PC, with no installation needed for it to run. (If I did it as a browser app, isn't there the possibility that a user's browser settings could affect or break the app? How likely is this?)

    For my situation, should I make it a desktop app or a browser app?

    Read the article

  • Best way to convert a Unicode URL to ASCII (UTF-8 percent-escaped) in Python?

    - by benhoyt
    I'm wondering what the best way is, or whether there's a simple way with the standard library, to convert a URL with Unicode chars in the domain name and path to the equivalent ASCII URL, encoded with the domain as IDNA and the path %-encoded, as per RFC 3986. I get the URL from the user in UTF-8. So if they've typed in http://➡.ws/♥ I get 'http://\xe2\x9e\xa1.ws/\xe2\x99\xa5' in Python. And what I want out is the ASCII version: 'http://xn--hgi.ws/%E2%99%A5'. What I do at the moment is split the URL into parts via a regex, then manually IDNA-encode the domain, and separately encode the path and query string with different urllib.quote() calls:

        # url is UTF-8 here, eg: url = u'http://➡.ws/♥'.encode('utf-8')
        match = re.match(r'([a-z]{3,5})://(.+\.[a-z0-9]{1,6})'
                         r'(:\d{1,5})?(/.*?)(\?.*)?$', url, flags=re.I)
        if not match:
            raise BadURLException(url)
        protocol, domain, port, path, query = match.groups()
        try:
            domain = unicode(domain, 'utf-8')
        except UnicodeDecodeError:
            return ''  # bad UTF-8 chars in domain
        domain = domain.encode('idna')
        if port is None:
            port = ''
        path = urllib.quote(path)
        if query is None:
            query = ''
        else:
            query = urllib.quote(query, safe='=&?/')
        url = protocol + '://' + domain + port + path + query
        # url is ASCII here, eg: url = 'http://xn--hgi.ws/%E2%99%A5'

    Is this correct? Any better suggestions? Is there a simple standard-library function to do this?
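
    A sketch of the same conversion without the hand-rolled regex, leaning on urlparse.urlsplit from the standard library (Python 2 spelling); note it drops any user:password@ part and elides error handling:

        import urllib
        import urlparse

        def asciify_url(url):
            parts = urlparse.urlsplit(url)
            netloc = unicode(parts.hostname, 'utf-8').encode('idna')
            if parts.port:
                netloc += ':%d' % parts.port
            return urlparse.urlunsplit((parts.scheme, netloc,
                                        urllib.quote(parts.path),
                                        urllib.quote(parts.query, safe='=&?/'),
                                        ''))

        print asciify_url('http://\xe2\x9e\xa1.ws/\xe2\x99\xa5')
        # http://xn--hgi.ws/%E2%99%A5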

    Read the article
