Search Results

Search found 200 results on 8 pages for 'pythonic'.

Page 2 of 8

  • Pythonic mapping of an array (Beginner)

    - by scott_karana
    Hey StackOverflow, I've got a question related to a beginner Python snippet I've written to introduce myself to the language. It's an admittedly trivial early effort, but I'm still wondering how I could have written it more elegantly. The program outputs NATO phonetic readable versions of an argument, such as "H2O" - "Hotel 2 Oscar", or (lacking an argument) just outputs the whole alphabet. I mainly use it for calling in MAC addresses and IQNs, but it's useful for other phone support too. Here's the body of the relevant portion of the program: #!/usr/bin/env python import sys nato = { "a": 'Alfa', "b": 'Bravo', "c": 'Charlie', "d": 'Delta', "e": 'Echo', "f": 'Foxtrot', "g": 'Golf', "h": 'Hotel', "i": 'India', "j": 'Juliet', "k": 'Kilo', "l": 'Lima', "m": 'Mike', "n": 'November', "o": 'Oscar', "p": 'Papa', "q": 'Quebec', "r": 'Romeo', "s": 'Sierra', "t": 'Tango', "u": 'Uniform', "v": 'Victor', "w": 'Whiskey', "x": 'Xray', "y": 'Yankee', "z": 'Zulu', } if len(sys.argv) < 2: for n in nato.keys(): print nato[n] else: # if sys.argv[1] == "-i" # TODO for char in sys.argv[1].lower(): if char in nato: print nato[char], else: print char, As I mentioned, I just want to see suggestions for a more elegant way to code this. My first guess was to use a list comprehension along the lines of [nato[x] for x in sys.argv[1].lower() if x in nato], but that doesn't allow me to output any non-alphabetic characters. My next guess was to use map, but I couldn't format any lambdas that didn't suffer from the same corner case. Any suggestions? Maybe something with first-class functions? Messing with Array's guts? This seems like it could almost be a Code Golf question, but I feel like I'm just overthinking :)
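
    One way this could be tightened (a sketch, not from the original thread): dict.get() falls back to the character itself, so digits and punctuation pass through without a special case.

        #!/usr/bin/env python
        # Sketch only: assumes the same `nato` dict defined above (Python 2 print syntax).
        import sys

        if len(sys.argv) < 2:
            print '\n'.join(nato[k] for k in sorted(nato))
        else:
            print ' '.join(nato.get(c, c) for c in sys.argv[1].lower())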

    Read the article

  • Is this the 'Pythonic' way of doing things?

    - by Sergio Tapia
    This is my first effort on solving the exercise. I gotta say, I'm kind of liking Python. :D # D. verbing # Given a string, if its length is at least 3, # add 'ing' to its end. # Unless it already ends in 'ing', in which case # add 'ly' instead. # If the string length is less than 3, leave it unchanged. # Return the resulting string. def verbing(s): if len(s) >= 3: if s[-3:] == "ing": s += "ly" else: s += "ing" return s else: return s # +++your code here+++ return What do you think I could improve on here?
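
    A slightly flatter variant (just a sketch): an early return removes one level of nesting, and str.endswith() reads more naturally than slicing.

        def verbing(s):
            # Sketch: same behaviour as the version above.
            if len(s) < 3:
                return s
            return s + ('ly' if s.endswith('ing') else 'ing')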

    Read the article

  • Concise way to getattr() and use it if not None in Python

    - by MTsoul
    I am finding myself doing the following a bit too often: attr = getattr(obj, 'attr', None) if attr is not None: attr() # Do something, either attr(), or func(attr), or whatever else: # Do something else Is there a more pythonic way of writing that? Is this better? (At least not in performance, IMO.) try: obj.attr() # or whatever except AttributeError: # Do something else
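
    One detail worth a sketch here: if the try/except route is used, guarding only the lookup keeps AttributeErrors raised inside attr() itself from being silently swallowed.

        try:
            attr = obj.attr          # only the attribute lookup is guarded
        except AttributeError:
            pass                     # do something else
        else:
            attr()                   # an AttributeError raised inside attr() still propagates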

    Read the article

  • finding max in python as per some custom criterion

    - by MK
    Hi, I can do max(s) to find the max of a sequence. But suppose I want to compute max according to my own function , something like so - currmax = 0 def mymax(s) : for i in s : #assume arity() attribute is present currmax = i.arity() if i.arity() > currmax else currmax Is there a clean pythonic way of doing this? Thanks!
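
    For reference, max() already takes a key function (Python 2.5+), so the whole loop can collapse (a sketch assuming every item in s has an arity() method):

        def mymax(s):
            # Sketch: returns the largest arity; max(s, key=lambda i: i.arity()) would return the item instead.
            return max(item.arity() for item in s)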

    Read the article

  • How can this code be made more Pythonic?

    - by usethedeathstar
    This next part of code does exactly what I want it to do. dem_rows and dem_cols contain float values for a number of things I can identify in an image, but I need to get the nearest pixel for each of them, and then to make sure I only get the unique points, and no duplicates. The problem is that this code is ugly and, as far as I get it, as unpythonic as it gets. If there were a pure-numpy solution (without for-loops), that would be even better. # next part is to make sure that we get the rounding done correctly, and then to get the integer part out of it # without the annoying floatingpoint-error, and without duplicates fielddic={} for i in range(len(dem_rows)): # here comes the ugly part: abusing the fact that I overwrite dictionary keys if I get duplicates fielddic[int(round(dem_rows[i]) + 0.1), int(round(dem_cols[i]) + 0.1)] = None # also very ugly: to make two arrays of integers out of the first and second part of the keys field_rows = numpy.zeros((len(fielddic.keys())), int) field_cols = numpy.zeros((len(fielddic.keys())), int) for i, (r, c) in enumerate(fielddic.keys()): field_rows[i] = r field_cols[i] = c
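
    A possible pure-numpy route (a sketch; the axis argument to numpy.unique needs a reasonably recent numpy, so treat that as an assumption). Note that numpy.rint rounds halves to even, which differs slightly from the round()+0.1 trick above.

        import numpy

        # Sketch: dem_rows and dem_cols are the float arrays described above.
        coords = numpy.column_stack((numpy.rint(dem_rows), numpy.rint(dem_cols))).astype(int)
        coords = numpy.unique(coords, axis=0)   # drop duplicate (row, col) pairs; needs numpy >= 1.13
        field_rows, field_cols = coords[:, 0], coords[:, 1]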

    Read the article

  • Concatenation of many lists in Python

    - by Space_C0wb0y
    Suppose I have a function like this: def getNeighbors(vertex) which returns a list of vertices that are neighbors of the given vertex. Now I want to create a list with all the neighbors of the neighbors. I do that like this: listOfNeighborsNeighbors = [] for neighborVertex in getNeighbors(vertex): listOfNeighborsNeighbors.append(getNeighbors(neighborVertex)) Is there a more pythonic way to do that?
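
    Two common shapes for this (sketches, reusing the question's getNeighbors and vertex names): a nested list comprehension, or itertools.chain.from_iterable to flatten the per-neighbor lists.

        from itertools import chain

        # Sketch: one flat list of the neighbors of all neighbors of `vertex`.
        listOfNeighborsNeighbors = [nn for nb in getNeighbors(vertex) for nn in getNeighbors(nb)]
        # equivalently:
        listOfNeighborsNeighbors = list(chain.from_iterable(getNeighbors(nb) for nb in getNeighbors(vertex)))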

    Read the article

  • The Zen of Python distils the guiding principles for Python into 20 aphorisms but lists only 19. What's the twentieth?

    - by Jeff Walden
    From PEP 20, The Zen of Python: Long time Pythoneer Tim Peters succinctly channels the BDFL's guiding principles for Python's design into 20 aphorisms, only 19 of which have been written down. What is this twentieth aphorism? Does it exist, or is the reference merely a rhetorical device to make the reader think? (One potential answer that occurs to me is that "You aren't going to need it" is the remaining aphorism. If that were the case, it would both exist and act to make the reader think, and it would be characteristically playful, thus fitting the list all the better. But web searches suggest this to be an extreme programming mantra, not intrinsically Pythonic wisdom, so I'm stumped.)

    Read the article

  • How to improve my Python regex syntax?

    - by FarmBoy
    I am very new to Python, and fairly new to regex. (I have no Perl experience.) I am able to use regular expressions in a way that works, but I'm not sure that my code is particularly Pythonic or concise. For example, if I wanted to read in a text file and print out text that appears directly between the words 'foo' and 'bar' in each line (presuming this occurred one or zero times a line) I would write the following: fileList = open(inFile, 'r') pattern = re.compile(r'(foo)(.*)(bar)') for line in fileList: result = pattern.search(line) if (result != None): print result.groups()[1] Is there a better way? The if is necessary to avoid calling groups() on None. But I suspect there is a more concise way to obtain the matching string when there is one, without throwing errors when there isn't. I'm not hoping for Perl-like unreadability. I just want to accomplish this common task in the commonest and simplest way.
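
    A compact form (a sketch in the same Python 2 style): keep the compiled pattern, group only the part that matters, and test the match object directly so nothing is ever called on None.

        import re

        pattern = re.compile(r'foo(.*)bar')
        with open(inFile) as f:          # inFile as in the question
            for line in f:
                match = pattern.search(line)
                if match:
                    print match.group(1)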

    Read the article

  • Fast iterating over first n items of an iterable in python

    - by martinthenext
    Hello! I'm looking for a pythonic way of iterating over the first n items of a list, and it's quite important to do this as fast as possible. This is how I do it now: count = 0 for item in iterable: do_something(item) count += 1 if count >= n: break Doesn't seem neat to me. Another way of doing this is: for item in itertools.islice(iterable, n): do_something(item) This looks good; the question is whether it's fast enough to use with some generator(s). For example: pair_generator = lambda iterable: itertools.izip(*[iter(iterable)]*2) for item in itertools.islice(pair_generator(iterable), n): do_something(item) Will it run fast enough as compared to the first method? Is there some easier way to do it?

    Read the article

  • Python: does it make sense to refactor this check into its own method?

    - by Jeff Fry
    I'm still learning Python. I just wrote this method to determine if a player has won a game of tic-tac-toe yet, given a board state like [['o','x','x'],['x','o','-'],['x','o','o']]: def hasWon(board): players = ['x', 'o'] for player in players: for row in board: if row.count(player) == 3: return player top, mid, low = board for i in range(3): if [ top[i],mid[i],low[i] ].count(player) == 3: return player if [top[0],mid[1],low[2]].count(player) == 3: return player if [top[2],mid[1],low[0]].count(player) == 3: return player return None It occurred to me that I check lists of 3 chars several times and could refactor the checking to its own method like so: def check(list, player): if list.count(player) == 3: return player ...but then realized that all that really does is change lines like: if [ top[i],mid[i],low[i] ].count(player) == 3: return player to: if check( [top[i],mid[i],low[i]], player ): return player ...which frankly doesn't seem like much of an improvement. Do you see a better way to refactor this? Or in general a more Pythonic option? I'd love to hear it!
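
    One way to shrink the repetition (a sketch, not from the thread): build all eight winning lines once (rows, columns, and the two diagonals) and scan them in a single loop.

        def hasWon(board):
            # Sketch: board is a 3x3 list of lists, as in the question.
            lines = [list(row) for row in board]                  # three rows
            lines += [list(col) for col in zip(*board)]           # three columns
            lines += [[board[i][i] for i in range(3)],            # two diagonals
                      [board[i][2 - i] for i in range(3)]]
            for player in ('x', 'o'):
                if any(line.count(player) == 3 for line in lines):
                    return player
            return None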

    Read the article

  • How can this verbose, unpythonic routine be improved?

    - by fmark
    Is there a more pythonic way of doing this? I am trying to find the eight neighbours of an integer coordinate lying within an extent. I am interested in reducing its verbosity without sacrificing execution speed. def fringe8((px, py), (x1, y1, x2, y2)): f = [(px - 1, py - 1), (px - 1, py), (px - 1, py + 1), (px, py - 1), (px, py + 1), (px + 1, py - 1), (px + 1, py), (px + 1, py + 1)] f_inrange = [] for fx, fy in f: if fx < x1: continue if fx >= x2: continue if fy < y1: continue if fy >= y2: continue f_inrange.append((fx, fy)) return f_inrange
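
    A more compact shape (a sketch; it keeps the question's Python 2 tuple-unpacking signature): generate the eight offsets with a comprehension and fold the bounds checks into chained comparisons.

        def fringe8((px, py), (x1, y1, x2, y2)):
            # Sketch: same semantics as above, as a single list comprehension.
            return [(px + dx, py + dy)
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx or dy) and x1 <= px + dx < x2 and y1 <= py + dy < y2]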

    Read the article

  • Fast iterating over first n items of an iterable (not a list) in python

    - by martinthenext
    Hello! I'm looking for a pythonic way of iterating over the first n items of an iterable (update: not a list in the general case, as for lists things are trivial), and it's quite important to do this as fast as possible. This is how I do it now: count = 0 for item in iterable: do_something(item) count += 1 if count >= n: break Doesn't seem neat to me. Another way of doing this is: for item in itertools.islice(iterable, n): do_something(item) This looks good; the question is whether it's fast enough to use with some generator(s). For example: pair_generator = lambda iterable: itertools.izip(*[iter(iterable)]*2) for item in itertools.islice(pair_generator(iterable), n): do_something(item) Will it run fast enough as compared to the first method? Is there some easier way to do it?

    Read the article

  • Working with multiple input and output files in Python

    - by Morlock
    I need to open multiple files (2 input and 2 output files), do complex manipulations on the lines from the input files and then append results at the end of the 2 output files. I am currently using the following approach: in_1 = open(input_1) in_2 = open(input_2) out_1 = open(output_1, "w") out_2 = open(output_2, "w") # Read one line from each 'in_' file # Do many operations on the DNA sequences included in the input files # Append one line to each 'out_' file in_1.close() in_2.close() out_1.close() out_2.close() The files are huge (each potentially approaching 1 GB), which is why I am reading through these input files one line at a time. I am guessing that this is not a very Pythonic way to do things. :) Would using the following form be good? with open("file1") as f1: with open("file2") as f2: # etc. If yes, could I do it while avoiding the highly indented code that would result? Thanks for the insights!
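
    On Python 2.6 the nesting can be flattened with contextlib.nested (a sketch; from Python 2.7 / 3.1 onwards a single with statement accepts several context managers directly, e.g. with open(a) as f1, open(b) as f2:).

        from contextlib import nested

        # Sketch (Python 2.6): one with-block opens all four files and closes them on exit.
        with nested(open(input_1), open(input_2),
                    open(output_1, "w"), open(output_2, "w")) as (in_1, in_2, out_1, out_2):
            pass  # read from in_1 / in_2 one line at a time, append to out_1 / out_2 as before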

    Read the article

  • Can I avoid explicit 'self'?

    - by bguiz
    I am fairly new to Python and trying to pick it up, so please forgive if this question has a very obvious answer! I have been learning Python by following some pygame tutorials. Therein I found extensive use of the keyword self, and coming from a primarily Java background, I find that I keep forgetting to type self. For example, instead of self.rect.centerx I would type rect.centerx, because, to me, rect is already a member variable of the class. The Java parallel I can think of for this situation is having to prefix all references to member variables with this. Am I stuck prefixing all member variables with self, or is there a way to declare them that would allow me to avoid having to do so? Even if what I am suggesting isn't "pythonic", I'd still like to know if it is possible. I have taken a look at these related SO questions, but they don't quite answer what I am after: Python - why use “self” in a class? Why do you need explicitly have the “self” argument into a Python method?
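
    There is no way to drop the explicit self from attribute access, but a local alias inside a method does cut the repetition (a sketch; the Player class and rect attribute here are hypothetical, echoing the pygame example):

        class Player(object):
            def __init__(self, rect):
                self.rect = rect        # hypothetical attribute, for illustration only

            def update(self):
                rect = self.rect        # local alias: reads and in-place mutation work
                rect.centerx += 10      # same object that self.rect refers to
                # rebinding, however, still needs the full form: self.rect = ...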

    Read the article

  • Joining the previous and next sentence using python

    - by JudoWill
    I'm trying to join a set of sentences contained in a list. I have a function which determines whether a sentence is worth saving. However, in order to keep the context of the sentence I need to also keep the sentence before and after it. In the edge cases, where it's either the first or last sentence, I'll just keep the sentence and its only neighbor. An example is best: ex_paragraph = ['The quick brown fox jumps over the fence.', 'Where there is another red fox.', 'They run off together.', 'They live happily ever after.'] t1 = lambda x: x.startswith('Where') t2 = lambda x: x.startswith('The ') The result for t1 should be: ['The quick brown fox jumps over the fence. Where there is another red fox. They run off together.'] The result for t2 should be: ['The quick brown fox jumps over the fence. Where there is another red fox.'] My solution is: def YieldContext(sent_list, cont_fun): def JoinSent(sent_list, ind): if ind == 0: return sent_list[ind]+sent_list[ind+1] elif ind == len(sent_list)-1: return sent_list[ind-1]+sent_list[ind] else: return ' '.join(sent_list[ind-1:ind+1]) for sent, sentnum in izip(sent_list, count(0)): if cont_fun(sent): yield JoinSent(sent_list, sentnum) Does anyone know a "cleaner" or more pythonic way to do something like this? The if-elif-else seems a little forced. Thanks, Will PS. I'm obviously doing this with a more complicated "context-function" but this is just for a simple example.
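
    A flatter sketch: clamp the slice bounds with max() so the edge cases collapse into one expression (assuming, as the examples suggest, that the neighbour on each side should be included whenever it exists).

        def yield_context(sent_list, cont_fun):
            # Sketch: max() clamps the left edge; slicing already clamps the right edge.
            for i, sent in enumerate(sent_list):
                if cont_fun(sent):
                    yield ' '.join(sent_list[max(i - 1, 0):i + 2])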

    Read the article

  • Creating a simple command line interface (CLI) using a python server (TCP sock) and few scripts

    - by VN44CA
    I have a Linux box and I want to be able to telnet into it (port 77557) and run a few required commands without having access to the whole Linux box. So, I have a server listening on that port, which echoes the entered command on the screen (for now): telnet 192.168.1.100 77557 Trying 192.168.1.100... Connected to 192.168.1.100. Escape character is '^]'. hello You typed: "hello" NOW: I want to create a lot of commands that each take some args and have error codes. Has anyone done this before? It would be great if I could have the server, upon initialization, go through each directory and execute its __init__.py file, and in turn the __init__.py file of each command would call into a main template lib API (e.g. RegisterMe()) and register itself with the server as a function callback. At least this is how I would do it in C/C++, but I want the best Pythonic way of doing this. /cmd/ /cmd/myreboot/ /cmd/myreboot/__init__.py /cmd/mylist/ /cmd/mylist/__init__.py ... etc IN: /cmd/myreboot/__init__.py: from myMainCommand import RegisterMe RegisterMe(name="reboot", args=Arglist, usage="Use this to reboot the box", desc="blabla") So, repeating this creates a list of commands, and when you enter a command in the telnet session, the server goes through the list, matches the command, passes the args to that command, and the command does the job and prints the success or failure to stdout. Thx
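
    One pythonic shape for the registration idea (a sketch with hypothetical names): keep a module-level dict of commands and let each command package register its callback, either by calling RegisterMe directly or by using it as a decorator.

        # myMainCommand.py -- sketch only
        COMMANDS = {}

        def RegisterMe(name, args=None, usage='', desc=''):
            """Record a command callback in the registry; usable as a decorator."""
            def wrap(func):
                COMMANDS[name] = {'func': func, 'args': args or [], 'usage': usage, 'desc': desc}
                return func
            return wrap

        # cmd/myreboot/__init__.py -- sketch
        @RegisterMe(name='reboot', usage='Use this to reboot the box', desc='blabla')
        def reboot(argv):
            return 0    # error code

        # server side -- look the entered command up and dispatch it
        def dispatch(line):
            parts = line.split()
            cmd = COMMANDS.get(parts[0]) if parts else None
            if cmd is None:
                return 'unknown command: %r' % line
            return cmd['func'](parts[1:])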

    Read the article

  • Connecting an overloaded PyQT signal using new-style syntax

    - by Claudio
    I am designing a custom widget which is basically a QGroupBox holding a configurable number of QCheckBox buttons, where each one of them should control a particular bit in a bitmask represented by a QBitArray. In order to do that, I added the QCheckBox instances to a QButtonGroup, with each button given an integer ID: def populate(self, num_bits, parent = None): """ Adds check boxes to the GroupBox according to the bitmask size """ self.bitArray.resize(num_bits) layout = QHBoxLayout() for i in range(num_bits): cb = QCheckBox() cb.setText(QString.number(i)) self.buttonGroup.addButton(cb, i) layout.addWidget(cb) self.setLayout(layout) Then, each time a user would click on a checkbox contained in self.buttonGroup, I'd like self.bitArray to be notified so I can set/unset the corresponding bit in the array. For that I intended to connect QButtonGroup's buttonClicked(int) signal to QBitArray's toggleBit(int) method and, to be as pythonic as possible, I wanted to use new-style signals syntax, so I tried this: self.buttonGroup.buttonClicked.connect(self.bitArray.toggleBit) The problem is that buttonClicked is an overloaded signal, so there is also the buttonClicked(QAbstractButton*) signature. In fact, when the program is executing I get this error when I click a check box: The debugged program raised the exception unhandled TypeError "QBitArray.toggleBit(int): argument 1 has unexpected type 'QCheckBox'" which clearly shows the toggleBit method received the buttonClicked(QAbstractButton*) signal instead of the buttonClicked(int) one. So, the question is, how can we specify, using new-style syntax, that self.buttonGroup emits the buttonClicked(int) signal instead of the default overload - buttonClicked(QAbstractButton*)?
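
    With new-style signals, an overload is selected by indexing the signal with its argument types, so the int flavour can be requested explicitly (a sketch against this particular widget; the indexing syntax itself is standard PyQt):

        # Sketch: pick the buttonClicked(int) overload rather than the QAbstractButton* default.
        self.buttonGroup.buttonClicked[int].connect(self.bitArray.toggleBit)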

    Read the article

  • How to write this snippet in Python?

    - by morpheous
    I am learning Python (I have a C/C++ background). I need to write something practical in Python though, whilst learning. I have the following pseudocode (my first attempt at writing a Python script, since reading about Python yesterday). Hopefully, the snippet details the logic of what I want to do. BTW I am using python 2.6 on Ubuntu Karmic. Assume the script is invoked as: script_name.py directory_path import csv, sys, os, glob # Can I declare that the function accepts a dictionary as first arg? def getItemValue(item, key, defval) return !item.haskey(key) ? defval : item[key] dirname = sys.argv[1] # declare some default values here weight, is_male, default_city_id = 100, true, 1 # fetch some data from a database table into a nested dictionary, indexed by a string curr_dict = load_dict_from_db('foo') #iterate through all the files matching *.csv in the specified folder for infile in glob.glob( os.path.join(dirname, '*.csv') ): #get the file name (without the '.csv' extension) code = infile[0:-4] # open file, and iterate through the rows of the current file (a CSV file) f = open(infile, 'rt') try: reader = csv.reader(f) for row in reader: #lookup the id for the code in the dictionary id = curr_dict[code]['id'] name = row['name'] address1 = row['address1'] address2 = row['address2'] city_id = getItemValue(row, 'city_id', default_city_id) # insert row to database table finally: f.close() I have the following questions: Is the code written in a Pythonic enough way (is there a better way of implementing it)? Given a table with a schema like shown below, how may I write a Python function that fetches data from the table and returns is in a dictionary indexed by string (name). How can I insert the row data into the table (actually I would like to use a transaction if possible, and commit just before the file is closed) Table schema: create table demo (id int, name varchar(32), weight float, city_id int); BTW, my backend database is postgreSQL
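
    A few concrete sketches for the pieces asked about (the names reuse the question's own; the database part assumes a DB-API cursor such as psycopg2's, so treat it as an assumption): dict.get() replaces the C-style ternary, a loader indexes rows by name, and csv.DictReader yields rows keyed by the header so row['name'] works.

        import csv

        def get_item_value(item, key, defval):
            # dict.get() already does the has_key()/ternary dance; no type declaration is needed
            return item.get(key, defval)

        def load_dict_from_db(cursor):
            # Sketch: assumes a DB-API cursor and the `demo` table shown in the question.
            cursor.execute("SELECT id, name, weight, city_id FROM demo")
            return dict((name, {'id': rid, 'weight': weight, 'city_id': city_id})
                        for rid, name, weight, city_id in cursor.fetchall())

        # csv.DictReader yields each row as a dict keyed by the CSV header line:
        with open(infile, 'rb') as f:
            for row in csv.DictReader(f):
                city_id = get_item_value(row, 'city_id', default_city_id)
        # committing the connection just before the file closes would give a per-file transaction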

    Read the article

  • hosting simple python scripts in a container to handle concurrency, configuration, caching, etc.

    - by Justin Grant
    My first real-world Python project is to write a simple framework (or re-use/adapt an existing one) which can wrap small python scripts (which are used to gather custom data for a monitoring tool) with a "container" to handle boilerplate tasks like: fetching a script's configuration from a file (and keeping that info up to date if the file changes and handling decryption of sensitive config data) running multiple instances of the same script in different threads instead of spinning up a new process for each one exposing an API for caching expensive data and storing persistent state from one script invocation to the next Today, script authors must handle the issues above, which usually means that most script authors don't handle them correctly, causing bugs and performance problems. In addition to avoiding bugs, we want a solution which lowers the bar to create and maintain scripts, especially given that many script authors may not be trained programmers. Below are examples of the API I've been thinking of, and which I'm looking to get your feedback about. A scripter would need to build a single method which takes (as input) the configuration that the script needs to do its job, and either returns a python object or calls a method to stream back data in chunks. Optionally, a scripter could supply methods to handle startup and/or shutdown tasks. HTTP-fetching script example (in pseudocode, omitting the actual data-fetching details to focus on the container's API): def run (config, context, cache) : results = http_library_call (config.url, config.http_method, config.username, config.password, ...) return { html : results.html, status_code : results.status, headers : results.response_headers } def init(config, context, cache) : config.max_threads = 20 # up to 20 URLs at one time (per process) config.max_processes = 3 # launch up to 3 concurrent processes config.keepalive = 1200 # keep process alive for 10 mins without another call config.process_recycle.requests = 1000 # restart the process every 1000 requests (to avoid leaks) config.kill_timeout = 600 # kill the process if any call lasts longer than 10 minutes Database-data fetching script example might look like this (in pseudocode): def run (config, context, cache) : expensive = context.cache["something_expensive"] for record in db_library_call (expensive, context.checkpoint, config.connection_string) : context.log (record, "logDate") # log all properties, optionally specify name of timestamp property last_date = record["logDate"] context.checkpoint = last_date # persistent checkpoint, used next time through def init(config, context, cache) : cache["something_expensive"] = get_expensive_thing() def shutdown(config, context, cache) : expensive = cache["something_expensive"] expensive.release_me() Is this API appropriately "pythonic", or are there things I should do to make this more natural to the Python scripter? (I'm more familiar with building C++/C#/Java APIs so I suspect I'm missing useful Python idioms.) Specific questions: is it natural to pass a "config" object into a method and ask the callee to set various configuration options? Or is there another preferred way to do this? when a callee needs to stream data back to its caller, is a method like context.log() (see above) appropriate, or should I be using yield instead? (yield seems natural, but I worry it'd be over the head of most scripters) My approach requires scripts to define functions with predefined names (e.g. "run", "init", "shutdown"). Is this a good way to do it?
If not, what other mechanism would be more natural? I'm passing the same config, context, cache parameters into every method. Would it be better to use a single "context" parameter instead? Would it be better to use global variables instead? Finally, are there existing libraries you'd recommend to make this kind of simple "script-running container" easier to write?
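
    On the streaming question specifically, a generator-based run() is easy for a container to support and keeps the scripter's side small (a sketch with hypothetical names; db_library_call is the question's own placeholder):

        import inspect

        # Scripter side -- sketch: yield records instead of calling context.log()
        def run(config, context, cache):
            expensive = cache["something_expensive"]
            for record in db_library_call(expensive, context.checkpoint, config.connection_string):
                yield record                      # the container decides how to log or ship it
                context.checkpoint = record["logDate"]

        # Container side -- works whether run() returns a value or yields records
        def invoke(script_module, config, context, cache):
            result = script_module.run(config, context, cache)
            if inspect.isgenerator(result):
                for record in result:
                    context.log(record)           # hypothetical container logging hook
            else:
                context.log(result)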

    Read the article

  • Python performance: iteration and operations on nested lists

    - by J.J.
    Problem Hey folks. I'm looking for some advice on python performance. Some background on my problem: Given: A mesh of nodes of size (x,y) each with a value (0...255) starting at 0 A list of N input coordinates each at a specified location within the range (0...x, 0...y) Increment the value of the node at the input coordinate and the node's neighbors within range Z up to a maximum of 255. Neighbors beyond the mesh edge are ignored. (No wrapping) BASE CASE: A mesh of size 1024x1024 nodes, with 400 input coordinates and a range Z of 75 nodes. Processing should be O(x*y*Z*N). I expect x, y and Z to remain roughly around the values in the base case, but the number of input coordinates N could increase up to 100,000. My goal is to minimize processing time. Current results I have 2 current implementations: f1, f2 Running speed on my 2.26 GHz Intel Core 2 Duo with Python 2.6.1: f1: 2.9s f2: 1.8s f1 is the initial naive implementation: three nested for loops. f2 replaces the inner for loop with a list comprehension. Code is included below for your perusal. Question How can I further reduce the processing time? I'd prefer sub-1.0s for the test parameters. Please, keep the recommendations to native Python. I know I can move to a third-party package such as numpy, but I'm trying to avoid any third party packages. Also, I've generated random input coordinates, and simplified the definition of the node value updates to keep our discussion simple. The specifics have to change slightly and are outside the scope of my question. thanks much! f1 is the initial naive implementation: three nested for loops. 2.9s def f1(x,y,n,z): rows = [] for i in range(x): rows.append([0 for i in xrange(y)]) for i in range(n): inputX, inputY = (int(x*random.random()), int(y*random.random())) topleft = (inputX - z, inputY - z) for i in xrange(max(0, topleft[0]), min(topleft[0]+(z*2), x)): for j in xrange(max(0, topleft[1]), min(topleft[1]+(z*2), y)): if rows[i][j] <= 255: rows[i][j] += 1 f2 replaces the inner for loop with a list comprehension. 1.8s def f2(x,y,n,z): rows = [] for i in range(x): rows.append([0 for i in xrange(y)]) for i in range(n): inputX, inputY = (int(x*random.random()), int(y*random.random())) topleft = (inputX - z, inputY - z) for i in xrange(max(0, topleft[0]), min(topleft[0]+(z*2), x)): l = max(0, topleft[1]) r = min(topleft[1]+(z*2), y) rows[i][l:r] = [j+1 for j in rows[i][l:r] if j < 255]

    Read the article
