Search Results

Search found 13676 results on 548 pages for 'python dude'.

Page 389/548 | < Previous Page | 385 386 387 388 389 390 391 392 393 394 395 396  | Next Page >

  • [genshi] Print string as HTML

    - by infinito
    Hello, I would like to know whether there is any way to convert a plain unicode string to HTML in Genshi so that, for example, newlines are rendered as <br/>. I want this to render some text entered in a textarea. Thanks in advance!
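
    One possible approach (a minimal sketch of my own, not from the original post) is to escape the text and join its lines with a literal Markup fragment, so Genshi keeps the <br/> as markup instead of re-escaping it:

        from genshi.core import Markup

        def text_to_html(text):
            # Escape each line of the plain string, then join the escaped lines
            # with a literal <br/> so Genshi renders it as a line break.
            lines = [Markup.escape(line) for line in text.splitlines()]
            return Markup('<br/>').join(lines)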

  • How to create a non-persistent Elixir/SQLAlchemy object?

    - by siebert
    Hi, because of legacy data that is not available in the database but only in some external files, I want to create a SQLAlchemy object that contains data read from those files but is not written to the database when I execute session.flush(). My code looks like this:

        try:
            return session.query(Phone).populate_existing().filter(Phone.mac == ident).one()
        except:
            return self.createMockPhoneFromLicenseFile(ident)

        def createMockPhoneFromLicenseFile(self, ident):
            # Some code to read the necessary data from the file deleted...
            phone = Phone()
            phone.mac = foo
            phone.data = bar
            phone.state = "Read from legacy file"
            phone.purchaseOrderPosition = self.getLegacyOrder(ident)
            # SQLAlchemy magic doesn't seem to work here, probably because we don't
            # insert the created phone object into the database. So we set the id
            # fields manually.
            phone.order_id = phone.purchaseOrderPosition.order_id
            phone.order_position_id = phone.purchaseOrderPosition.order_position_id
            return phone

    Everything works fine except that on a session.flush() executed later in the application, SQLAlchemy tries to write the created Phone object to the database (which fortunately doesn't succeed, because phone.state is longer than the data type allows), and that breaks the function which issues the flush. Is there any way to prevent SQLAlchemy from trying to write such an object? Ciao, Steffen
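
    One way to keep such an object out of the unit of work (a sketch of a possible approach, not taken from the original post) is to detach it from the session before the flush with Session.expunge():

        def createMockPhoneFromLicenseFile(self, ident):
            phone = Phone()
            # ... populate the attributes from the legacy file as above ...
            # If the object ended up in the session (for example via the cascade on
            # the purchaseOrderPosition relation), detach it so flush() ignores it.
            if phone in session:
                session.expunge(phone)
            return phone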

  • Online job-searching is tedious. Help me automate it.

    - by ehsanul
    Many job sites have broken searches that don't let you narrow down jobs by experience level, and even when they do, the results are usually wrong. This forces you to wade through hundreds of postings you can't apply for before finding a relevant one, which is quite tedious. Since I'd rather focus on writing cover letters, I want to write a program that looks through a large number of postings and saves the URLs of just those jobs that don't require years of experience. I don't need help writing the scraper that fetches the HTML bodies of possibly relevant job posts. The issue is accurately detecting the level of experience a job requires. This should not be too difficult, because job posts are usually very explicit about it ("must have 5 years experience in..."), but there may be some issues with overly simple solutions.

    In my case, I'm looking for entry-level positions. Postings often don't say "entry-level", but when they do, the job should probably be saved. Next, I can safely exclude a job that says it requires "5 years" of experience in whatever, so a regex like /\d\syears/ seems reasonable for exclusion. But then I realized that some jobs say they'll take 0-2 years of experience, which matches the exclusion regex but is clearly a job I want to look at. Hmm, I can handle that with another regex. But some say "less than 2 years" or "fewer than 2 years". I can handle that too, but it makes me wonder what other patterns I'm not thinking of, and whether I'm excluding many jobs. That's what brings me here: to find a better way to do this than regexes, if there is one. I'd like to minimize the false negative rate and save all the jobs that seem like they might not require many years of experience. Does excluding anything that matches /[3-9]\syears|1\d\syears/ seem reasonable? Or is there a better way? Training a Bayesian filter, maybe?
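
    For illustration, a minimal sketch of the regex-based exclusion described above (my own code, not from the post):

        import re

        # Exclude postings that demand 3 or more years of experience; phrases like
        # "0-2 years" or "less than 2 years" fall through and are kept for review.
        EXCLUDE = re.compile(r'(?<![-\d])\b(?:[3-9]|1\d)\+?\s+years?\b', re.IGNORECASE)

        def looks_entry_level(posting_text):
            text = posting_text.lower()
            if 'entry-level' in text or 'entry level' in text:
                return True
            return not EXCLUDE.search(posting_text)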

  • How to set Content-Type automatically when I download the data that I uploaded

    - by zjm1126
    This is my code:

        import os
        from google.appengine.ext import webapp
        from google.appengine.ext.webapp import template
        from google.appengine.ext.webapp.util import run_wsgi_app
        from google.appengine.ext import db
        #from login import htmlPrefix,get_current_user

        class MyModel(db.Model):
            blob = db.BlobProperty()

        class BaseRequestHandler(webapp.RequestHandler):
            def render_template(self, filename, template_args=None):
                if not template_args:
                    template_args = {}
                path = os.path.join(os.path.dirname(__file__), 'templates', filename)
                self.response.out.write(template.render(path, template_args))

        class upload(BaseRequestHandler):
            def get(self):
                self.render_template('index.html')
            def post(self):
                file = self.request.get('file')
                obj = MyModel()
                obj.blob = db.Blob(file.encode('utf8'))
                obj.put()
                self.response.out.write('upload ok')

        class download(BaseRequestHandler):
            def get(self):
                #id = self.request.get('id')
                o = MyModel.all().get()
                #self.response.out.write(''.join('%s: %s <br/>' % (a, getattr(o, a)) for a in dir(o)))
                self.response.out.write(o)

        application = webapp.WSGIApplication(
            [
                ('/?', upload),
                ('/download', download),
            ],
            debug=True
        )

        def main():
            run_wsgi_app(application)

        if __name__ == "__main__":
            main()

    My index.html is:

        <form action="/" method="post">
            <input type="file" name="file" />
            <input type="submit" />
        </form>

    and it shows:

        <__main__.MyModel object at 0x02506830>

    but I don't want to see this; I want to download the file. How should I change my code? Thanks.

    Updated, it is OK now:

        class upload(BaseRequestHandler):
            def get(self):
                self.render_template('index.html')
            def post(self):
                file = self.request.get('file')
                obj = MyModel()
                obj.blob = db.Blob(file)
                obj.put()
                self.response.out.write('upload ok')

        class download(BaseRequestHandler):
            def get(self):
                #id = self.request.get('id')
                o = MyModel.all().order('-').get()
                #self.response.out.write(''.join('%s: %s <br/>' % (a, getattr(o, a)) for a in dir(o)))
                self.response.headers['Content-Type'] = "image/png"
                self.response.out.write(o.blob)

    And the new question is: if I upload a 'png' file it works, but when I upload a rar file I get an error. So how do I set the Content-Type automatically, and what is the Content-Type of a 'rar' file? Thanks.
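
    One common approach (a hedged sketch, not from the post; the filename property is an assumed extra field set at upload time) is to store the original filename alongside the blob and guess the type with the mimetypes module, falling back to application/octet-stream when nothing better is known:

        import mimetypes
        from google.appengine.ext import db  # as in the code above

        class MyModel(db.Model):
            blob = db.BlobProperty()
            filename = db.StringProperty()  # assumed: saved from the upload form

        class download(BaseRequestHandler):
            def get(self):
                o = MyModel.all().get()
                # guess_type returns (content_type, encoding) based on the extension
                content_type, _ = mimetypes.guess_type(o.filename or '')
                self.response.headers['Content-Type'] = content_type or 'application/octet-stream'
                self.response.out.write(o.blob)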

  • How to disable translations during unit tests in django?

    - by Denilson Sá
    I'm using Django's internationalization tools to translate some strings in my application. The code looks like this:

        from django.utils.translation import ugettext as _

        def my_view(request):
            output = _("Welcome to my site.")
            return HttpResponse(output)

    Then I'm writing unit tests using the Django test client. These tests make a request to the view and compare the returned contents. How can I disable the translations while running the unit tests? I'm aiming for something like this:

        class FoobarTestCase(unittest.TestCase):
            def setUp(self):
                # Do something here to disable the string translation. But what?
                # I've already tried this, but it didn't work:
                django.utils.translation.deactivate_all()

            def testFoobar(self):
                c = Client()
                response = c.get("/foobar")
                # I want to compare to the original string without translations.
                self.assertEquals(response.content.strip(), "Welcome to my site.")
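
    A sketch of one possible way to pin the language during tests (my own guess at an approach, not from the post; it assumes the untranslated source strings are the 'en-us' ones and that per-request language selection falls back to LANGUAGE_CODE):

        import unittest

        from django.conf import settings
        from django.utils import translation

        class FoobarTestCase(unittest.TestCase):
            def setUp(self):
                # Force the source language both for this thread and for the
                # setting that per-request language selection falls back to.
                self._old_language_code = settings.LANGUAGE_CODE
                settings.LANGUAGE_CODE = 'en-us'
                translation.activate('en-us')

            def tearDown(self):
                settings.LANGUAGE_CODE = self._old_language_code
                translation.deactivate()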

  • Empty dict-like collection problem in SQLAlchemy

    - by maksymko
    I have a mapping in SQLAlchemy that looks like this:

        t_property_value = sa.Table('property_value', MetaData, autoload=True, autoload_with=engine)
        orm.mapper(PropertyValue, t_property_value)

        t_estate = sa.Table('estate', MetaData, autoload=True, autoload_with=engine)
        orm.mapper(Estate, t_estate, properties=dict(
            property_hash=orm.relation(
                PropertyValue,
                collection_class=column_mapped_collection(t_property_value.c.property_id))
        ))

    Now, everything seems to be fine when I load an Estate object that has some relations to PropertyValue objects. However, when it does not, the property_hash attribute is None instead of being something dict-like, so I cannot add new relations like this:

        estate.property_hash[prop_id] = PropertyValue(...)

    because I get the "'NoneType' object does not support item assignment" error. So, is there any way to force SQLAlchemy to create a proper empty collection?

  • Paramiko ssh output stops at --more--

    - by Anesh
    The output stops printing at --More--. Any idea how to get the rest of the output?

        >>> import paramiko
        >>> ssh = paramiko.SSHClient()
        >>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        >>> conn = ssh.connect("ipaddress", username="user", password="pass")
        >>> channel = ssh.invoke_shell()
        >>> channel.send("en\n")
        3
        >>> channel.send("password\n")
        9
        >>> channel.send("show security local-user-list\n")
        30
        >>> results = ''
        >>> channel.send("\n")
        1
        >>> results += channel.recv(5000)
        >>> print results
        bluecoat>en
        Password:
        bluecoat#show security local-user-list
        Default List: local_user_database
        Append users loaded from file to default list: false
        local_user_database
          Lockout parameters:
            Max failed attempts: 60
            Lockout duration: 3600
            Reset interval: 7200
          Users:
          Groups:
        admin_local
          Lockout parameters:
            Max failed attempts: 60
            Lockout duration: 3600
            Reset interval: 7200
          Users:
          <username> Hashed Password: Enabled: true
          Groups:
          <username> Hashed Password: Enabled: true
        --More--

    As you can see above, the output stops printing at --More--. Any idea how to get the output to print to the end?
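
    One workaround (a rough sketch of a common pattern, not from the original post) is to keep reading and send a space whenever the pager prompt appears, until the device prompt comes back:

        import time

        def read_all(channel, prompt='#', timeout=10):
            # Collect output, paging past --More-- prompts as they appear.
            output = ''
            deadline = time.time() + timeout
            while time.time() < deadline:
                if channel.recv_ready():
                    chunk = channel.recv(5000)
                    output += chunk
                    if '--More--' in chunk:
                        channel.send(' ')        # advance the pager one page
                    elif output.rstrip().endswith(prompt):
                        break                    # back at the device prompt
                else:
                    time.sleep(0.1)
            return output.replace('--More--', '')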

  • Qt: How to autoexpand parents of a new QTreeView item when using a QSortFilterProxyModel

    - by taynaron
    I'm making an app wherein the user can add new data to a QTreeModel at any time. The parent under which it gets placed is automatically expanded to show the new item:

        self.tree = DiceModel(headers)
        self.treeView.setModel(self.tree)

        # addRoll makes a node, adds it, and returns the (parent) node to be expanded
        expand_node = self.tree.addRoll()
        self.treeView.expand(expand_node)

    This works as desired. If I add a QSortFilterProxyModel to the mix:

        self.tree = DiceModel(headers)
        self.sort = DiceSort(self.tree)
        self.treeView.setModel(self.sort)

        # addRoll makes a node, adds it, and returns the (parent) node to be expanded
        expand_node = self.tree.addRoll()
        self.treeView.expand(expand_node)

    the parent no longer expands. Any idea what I am doing wrong?
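
    A likely fix, sketched by me under the assumption that DiceSort is the QSortFilterProxyModel: the view now works with proxy indexes, so a source-model index has to be mapped through the proxy before being passed to the view:

        expand_node = self.tree.addRoll()               # index in the source model
        proxy_index = self.sort.mapFromSource(expand_node)
        self.treeView.expand(proxy_index)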

  • Is there a performance gain from defining routes in app.yaml versus one large mapping in a WSGIApplication?

    - by jgeewax
    Scenario 1

    This involves using one "gateway" route in app.yaml and then choosing the RequestHandler in the WSGIApplication.

    app.yaml:

        - url: /.*
          script: main.py

    main.py:

        from google.appengine.ext import webapp

        class Page1(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 1")

        class Page2(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 2")

        application = webapp.WSGIApplication([
            ('/page1/', Page1),
            ('/page2/', Page2),
        ], debug=True)

        def main():
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()

    Scenario 2

    This involves defining two routes in app.yaml and then two separate scripts for each (page1.py and page2.py).

    app.yaml:

        - url: /page1/
          script: page1.py
        - url: /page2/
          script: page2.py

    page1.py:

        from google.appengine.ext import webapp

        class Page1(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 1")

        application = webapp.WSGIApplication([
            ('/page1/', Page1),
        ], debug=True)

        def main():
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()

    page2.py:

        from google.appengine.ext import webapp

        class Page2(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 2")

        application = webapp.WSGIApplication([
            ('/page2/', Page2),
        ], debug=True)

        def main():
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()

    Question

    What are the benefits and drawbacks of each pattern? Is one much faster than the other?

  • How can I configure different worker pools using celery?

    - by Chris R
    I need to deploy a queued execution service with (generally) the following three classes of worker:

    1. A periodic, low-priority job class that takes a long time and can be processed serially; these jobs should use at most 0-2 workers in the system.
    2. A periodic, deadline-sensitive job class that takes a short to medium amount of time (say, topping out at 5 minutes).
    3. An ad-hoc job class that is higher priority than #1 but can interleave with #2. Any workers from class #2 that are inactive when this type of job comes in should handle it, without ever starving the pool of workers for #2.

    All three job classes run the same task; the only difference between them is how they're requested. They take the same input and generate the same output, but each one has different performance guarantees. How can I implement this using celery?
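
    A sketch of one way to arrange this with separate queues and dedicated worker pools (the queue and task names are my own placeholders, not from the post):

        # celeryconfig.py -- route the same underlying task to different queues
        # depending on how it was requested.
        CELERY_ROUTES = {
            'tasks.process_slow':     {'queue': 'slow'},
            'tasks.process_deadline': {'queue': 'deadline'},
            'tasks.process_adhoc':    {'queue': 'deadline'},  # share workers with class #2
        }

        # Start a small dedicated pool for the slow jobs and a larger shared pool
        # for the deadline-sensitive and ad-hoc jobs:
        #
        #   celeryd -Q slow -c 2
        #   celeryd -Q deadline -c 8

    Alternatively, since all three classes run the same task, the caller can pick the queue at request time with apply_async(..., queue='slow').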

  • Most secure way to generate a random session ID for a cookie?

    - by ensnare
    I'm writing my own sessions controller that issues a unique id to a user once logged in, and then verifies and authenticates that unique id at every page load. What is the most secure way to generate such an id? Should the unique id be completely random? Is there any downside to including the user id as part of the unique id?
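
    For illustration, a minimal sketch (my own, not from the post) of generating an unpredictable session id from the operating system's cryptographic random source:

        import binascii
        import os

        def new_session_id(nbytes=32):
            # os.urandom draws from the OS CSPRNG; hex-encode it so the
            # value is safe to place in a cookie.
            return binascii.hexlify(os.urandom(nbytes))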

  • list comprehension example

    - by self
    Can we use elif in a list comprehension? For example:

        l = [1, 2, 3, 4, 5]
        for values in l:
            if values == 1:
                print 'yes'
            elif values == 2:
                print 'no'
            else:
                print 'idle'

    Can we use a list comprehension for such a case with two if conditions and one else condition? For example, an answer like:

        ['yes', 'no', 'idle', 'idle', 'idle']

    Until now I have only used if/else in a list comprehension.
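
    For reference, a chained conditional expression gives the elif-like behaviour inside a comprehension (my own sketch):

        l = [1, 2, 3, 4, 5]
        result = ['yes' if v == 1 else 'no' if v == 2 else 'idle' for v in l]
        print result   # ['yes', 'no', 'idle', 'idle', 'idle']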

  • foo and _foo - about variables inside a class

    - by kame
        class ClassName(object):
            """ """
            def __init__(self, foo, bar):
                """ """
                self.foo = foo  # read-write property
                self.bar = bar  # simple attribute

            def _set_foo(self, value):
                self._foo = value

            def _get_foo(self):
                return self._foo

            foo = property(_get_foo, _set_foo)

        a = ClassName(1, 2)
        #a._set_foo(3)
        print a._get_foo()

    When I print a._get_foo(), the function _get_foo returns the variable self._foo. But where does it come from? self._foo and self.foo are different, aren't they?
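
    To illustrate what happens (my own sketch): because foo is a property on the class, the assignment self.foo = foo in __init__ calls _set_foo, which stores the value in the instance attribute _foo; reading self.foo calls _get_foo and reads it back. The instance dictionary only ever contains _foo (and bar):

        a = ClassName(1, 2)
        print a.__dict__        # {'_foo': 1, 'bar': 2} -- no plain 'foo' attribute
        print a.foo             # 1, goes through _get_foo()
        a.foo = 3               # goes through _set_foo(), updates a._foo
        print a._foo            # 3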

  • How to refresh a DrawingArea in PyGTK?

    - by Lialon
    I have an interface created with Glade. It contains a DrawingArea and buttons. I tried to create a thread to refresh my canvas every X seconds. After a few seconds, I get error messages like "X Window Server 0.0" and "Fatal Error IO 11". Here is my code:

        import pygtk
        pygtk.require("2.0")
        import gtk
        import Canvas
        import threading as T
        import time
        import Map

        gtk.gdk.threads_init()

        class Interface(object):

            class ThreadCanvas(T.Thread):
                """Thread to display the map"""
                def __init__(self, interface):
                    T.Thread.__init__(self)
                    self.interface = interface
                    self.started = True
                    self.start()

                def run(self):
                    while self.started:
                        time.sleep(2)
                        self.interface.on_canvas_expose_event()

                def stop(self):
                    self.started = False

            def __init__(self):
                self.interface = gtk.Builder()
                self.interface.add_from_file("interface.glade")
                #Map
                self.map = Map.Map(2, 2)
                #Canvas
                self.canvas = Canvas.MyCanvas(self.interface.get_object("canvas"), self.game)
                self.interface.connect_signals(self)
                #Thread Canvas
                self.render = self.ThreadCanvas(self)

            def on_btnChange_clicked(self, widget):
                #Change map
                self.map.change()

            def on_interface_destroy(self, widget):
                self.render.stop()
                self.render.join()
                self.render._Thread__stop()
                gtk.main_quit()

            def on_canvas_expose_event(self):
                st = time.time()
                self.canvas.update(self.map)
                et = time.time()
                print "Canvas refresh in : %f times" % (et - st)

            def main(self):
                gtk.main()

    How can I fix these errors?
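
    A common way to avoid touching GTK from a background thread (a sketch of one possible approach, not from the post) is to let the GTK main loop schedule the periodic redraw with gobject.timeout_add instead of a raw thread:

        import gobject

        class Interface(object):
            def __init__(self):
                # ... build the UI as above ...
                # Ask the main loop to call self.refresh_canvas every 2000 ms;
                # the callback then runs safely in the GTK thread.
                gobject.timeout_add(2000, self.refresh_canvas)

            def refresh_canvas(self):
                self.canvas.update(self.map)
                return True   # returning True keeps the timeout active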

  • How fast are App Engine db.get(keys) and A.all(keys_only=True).filter('b =', b).fetch(1000)?

    - by Liron Shapira
    A db.get() of 50 keys seems to take 5-6 seconds. Is that normal? What does the time depend on? I also did an A.all(keys_only=True).filter('b =', b).fetch(1000), where A.b is a ReferenceProperty. I did 50 such round trips to the datastore, with different values of b, and the total time was only 3-4 seconds. How is this possible? db.get() is done in parallel, with only one trip to the datastore, and I would think that looking up an entity by key is a faster operation than a fetch.

  • Scraping blog contents

    - by goh
    Hi lads, after obtaining the URLs of various Blogspot, Tumblr, and WordPress pages, I ran into some problems processing the HTML pages. The thing is, I wish to distinguish between the content, title, and date of each blog post. I might be able to get the date through a regex, but there are so many custom scripts in use now that the HTML classes and structure differ from site to site. Does anyone have a solution that may help?

  • Django - partially validating form

    - by aeter
    I'm new to Django and trying to process some forms. I have this model for entering information (creating a new ad) in one template:

        class Ad(models.Model):
            ...
            category = models.CharField("Category", max_length=30, choices=CATEGORIES)
            sub_category = models.CharField("Subcategory", max_length=4, choices=SUBCATEGORIES)
            location = models.CharField("Location", max_length=30, blank=True)
            title = models.CharField("Title", max_length=50)
            ...

    I validate it with is_valid() just fine. For the second validation (another template) I want to validate only against category and sub_category: there I want to use these two fields from the same form for filtering information, and now the is_valid() method does not work correctly, because it validates the entire form while I need to validate only those two fields. I have tried the following:

        ...
        if request.method == 'POST':
            # If a filter for data has been submitted:
            form = AdForm(request.POST)
            try:
                form = form.clean()
                category = form.category
                sub_category = form.sub_category
                latest_ads_list = Ad.objects.filter(category=category)
            except ValidationError:
                latest_ads_list = Ad.objects.all().order_by('pub_date')
        else:
            latest_ads_list = Ad.objects.all().order_by('pub_date')
            form = AdForm()
        ...

    but it doesn't work. How can I validate only the two fields category and sub_category?
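
    One common pattern (a sketch of a possible approach; the form name and field choices are my own assumptions) is to define a separate, smaller form just for filtering, so that is_valid() only checks the two fields that matter:

        from django import forms

        class AdFilterForm(forms.Form):
            # Hypothetical filter-only form; CATEGORIES and SUBCATEGORIES are the
            # same choice lists used on the Ad model.
            category = forms.ChoiceField(choices=CATEGORIES)
            sub_category = forms.ChoiceField(choices=SUBCATEGORIES)

        # In the filtering view:
        form = AdFilterForm(request.POST)
        if form.is_valid():
            latest_ads_list = Ad.objects.filter(
                category=form.cleaned_data['category'],
                sub_category=form.cleaned_data['sub_category'])
        else:
            latest_ads_list = Ad.objects.all().order_by('pub_date')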

  • Get the path to Django itself

    - by andybak
    I've got some code that runs on nearly every admin request but doesn't have access to the request object. I need to find the path to the Django installation. I could do:

        import django
        django_path = django.__file__

    but that seems rather wasteful in the middle of a request. Does putting the import at the start of the module waste memory? I'm fairly sure I'm missing an obvious trick here.
