Search Results

Search found 20677 results on 828 pages for 'python team'.

Page 363/828

  • Best practice when using httplib2.Http() object

    - by tomaz
    I'm writing a pythonic web API wrapper with a class like this:

        import httplib2
        import urllib

        class apiWrapper:
            def __init__(self):
                self.http = httplib2.Http()

            def _http(self, url, method, dict):
                '''I'm using this wrapper around the http object all the time inside the class'''
                params = urllib.urlencode(dict)
                response, content = self.http.request(url, params, method)

    As you can see, I'm using the _http() method to simplify the interaction with the httplib2.Http() object. This method is called quite often inside the class and I'm wondering what's the best way to interact with this object: create the object in __init__ and then reuse it whenever _http() is called (as shown in the code above), or create a httplib2.Http() object inside the method for every call of _http() (as shown in the sample below)?

        import httplib2
        import urllib

        class apiWrapper:
            def __init__(self):
                pass

            def _http(self, url, method, dict):
                '''I'm using this wrapper around the http object all the time inside the class'''
                http = httplib2.Http()
                params = urllib.urlencode(dict)
                response, content = http.request(url, params, method)

    Read the article

  • Underscore characters disappear

    - by pocoa
    I'm using jEdit 4.3 pre 16. As I've mentioned in the title, when I'm typing, sometimes underscore characters disappear. I tried changing fonts, line highlighting, etc., but it didn't work. Is there any solution to this problem?

    Read the article

  • Django BigInteger auto-increment field as primary key?

    - by Alex Letoosh
    Hi all, I'm currently building a project which involves a lot of collective intelligence. Every user visiting the web site gets a unique profile created, and their data is later used to calculate the best matches for themselves and other users. By default, Django creates an INT(11) id field to handle model primary keys. I'm concerned about this overflowing very quickly (i.e. ~2.4b devices visiting the page without a prior cookie set up). How can I change it to be represented as BIGINT in MySQL and long() inside Django itself? I've found I could do the following (http://docs.djangoproject.com/en/dev/ref/models/fields/#bigintegerfield):

        class MyProfile(models.Model):
            id = BigIntegerField(primary_key=True)

    But is there a way to make it auto-increment, like the usual id fields? Additionally, can I make it unsigned so that I get more space to fill? Thanks!
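
    A sketch of one common workaround (hedged: BigAutoField below is a custom class written for this example, and the db_type signature shown assumes Django 1.2+, where it takes a connection argument; recent Django releases ship a built-in models.BigAutoField that makes this unnecessary). Subclassing AutoField keeps the auto-increment behaviour while switching the MySQL column type; adding UNSIGNED to the returned string would widen the range further, at your own risk.

        from django.db import models

        class BigAutoField(models.AutoField):
            # Hypothetical helper: auto-incrementing 64-bit primary key for MySQL.
            def db_type(self, connection):
                return 'bigint AUTO_INCREMENT'  # assumes the MySQL backend

        class MyProfile(models.Model):
            id = BigAutoField(primary_key=True)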

    Read the article

  • File Uploads with Turbogears 2

    - by William Chambers
    I've been trying to work out the 'best practices' way to manage file uploads with TurboGears 2 and have thus far not really found any examples. I've figured out a way to actually upload the file, but I'm not sure how reliable it is. Also, what would be a good way to get the uploaded file's name?

        file = request.POST['file']
        permanent_file = open(os.path.join(asset_dirname, file.filename.lstrip(os.sep)), 'w')
        shutil.copyfileobj(file.file, permanent_file)
        file.file.close()
        this_file = self.request.params["file"].filename
        permanent_file.close()

    So assuming I'm understanding correctly, would something like this avoid the core 'naming' problem?

        id = UUID.
        file = request.POST['file']
        permanent_file = open(os.path.join(asset_dirname, id.lstrip(os.sep)), 'w')
        shutil.copyfileobj(file.file, permanent_file)
        file.file.close()
        this_file = file.filename
        permanent_file.close()

    Read the article

  • Creative ways to punish (or just curb) laziness in coworkers

    - by FerretallicA
    Like the subject suggests, what are some creative ways to curb laziness in co-workers? By laziness I'm talking about things like using variable names like "inttheemplrcd" instead of "intEmployerCode" or not keeping their projects synced with SVN, not just people who use the last of the sugar in the coffee room and don't refill the jar.

    So far the two most effective things I've done both involve the core library my company uses. Since most of our programs are in VB.net the lack of case sensitivity is abused a lot. I've got certain features of the library using Reflection to access data in the client apps, which has a negligible performance hit and introduces case sensitivity in a lot of places where it is used. In instances where we have an agreed standard which is compromised by blatant laziness I take it a step further, like the DatabaseController class which will blatantly reject any DataTable passed to it which isn't named dtSomething (ie- must begin with dt and third letter must be capitalised). It's frustrating to have to resort to things like this but it has also gradually helped drill more attention to detail into their heads.

    Another is adding some code to the library's initialisation function to display a big and potentially embarrassing (only if seen by a client) message advising that the program is running in debug mode. We have had many instances where projects are sent to clients built in debug mode which has a lot of implications for us (especially with regard to error recovery) and doing that has made sure they always build to release before distributing.

    Any other creative (ie- not StyleCop etc) approaches like this?

    Read the article

  • Getting a RichTextCtrl's default font size in wxPython

    - by Sam
    I have a RichTextCtrl, which I've modified to accept HTML input. The HTML parsing code needs to be able to increase and decrease the font size as it gets tags like <font size="-1">, but I can't work out how to get the control's default font size to adjust. I tried the following (where self is my RichTextCtrl):

        fred = wx.richtext.RichTextAttr()
        self.GetStyle(0, fred)
        print fred.GetFontSize()

    However, the final instruction fails, because GetStyle turns fred into a TextAttrEx and so I get AttributeError: 'TextAttrEx' object has no attribute 'GetFontSize'. Am I missing a vastly easier way of getting the default font size?
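
    One possible shortcut, as a sketch and not verified against every wxPython version: a RichTextCtrl is a wx.Window, so its current font is available through GetFont(), and the point size can be read from the returned wx.Font (again, self is the RichTextCtrl).

        # Read the control's current font and its point size.
        default_font = self.GetFont()
        default_size = default_font.GetPointSize()
        print default_size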

    Read the article

  • SQLite / SQLAlchemy: how to enforce Foreign Keys?

    - by Nick Perkins
    The new version of SQLite has the ability to enforce Foreign Key constraints, but for the sake of backwards compatibility, you have to turn it on for each database connection separately!

        sqlite> PRAGMA foreign_keys = ON;

    I am using SQLAlchemy -- how can I make sure this always gets turned on? What I have tried is this:

        engine = sqlalchemy.create_engine('sqlite:///:memory:', echo=True)
        engine.execute('pragma foreign_keys=on')

    ...but it is not working! What am I missing?
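
    A sketch of the usual fix, assuming SQLAlchemy 0.7+ where the event API is available (older releases can attach a PoolListener instead): run the PRAGMA on every raw DBAPI connection the pool creates, so it is re-applied after reconnects too, instead of only on the one connection engine.execute() happened to use.

        from sqlalchemy import create_engine, event

        engine = create_engine('sqlite:///:memory:')

        @event.listens_for(engine, "connect")
        def _set_sqlite_pragma(dbapi_connection, connection_record):
            # Called once for each new DBAPI connection in the pool.
            cursor = dbapi_connection.cursor()
            cursor.execute("PRAGMA foreign_keys=ON")
            cursor.close()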

    Read the article

  • Trace/BPT trap when running feedparser inside a Thread object

    - by simao
    Hello, I am trying to run a Thread to parse a list of links using the Universal Feed Parser, but when I start the thread I get a Trace/BPT trap. Here's the code I am using:

        class parseRssFiles(Thread):
            def __init__(self, rssLinks):
                Thread.__init__(self)
                self.rssLinks = rssLinks

            def run(self):
                self.rssContents = [feedparser.parse(link) for link in rssLinks]

    Is there any other way to do this? Link to the report generated by Mac OS X 10.6.2: http://simaom.com/trace.txt Thanks

    Read the article

  • Dynamically expanding Django forms

    - by RexE
    I would like to create a form where a user can enter an arbitrary number of items in separate textboxes. The user could add (and potentially remove) fields as needed. I found the following different solutions:

        http://www.eggdrop.ch/blog/2007/02/15/django-dynamicforms/
        http://dewful.com/?p=100

    Is there another best practice I might not be aware of?
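
    One option worth considering is Django's built-in formsets, which are designed for a variable number of identical sub-forms. The sketch below assumes a hypothetical ItemForm and view name, and leaves out the add/remove JavaScript (which only has to adjust the management form's TOTAL_FORMS count).

        from django import forms
        from django.forms.formsets import formset_factory

        class ItemForm(forms.Form):
            # Hypothetical single-textbox form, repeated once per item.
            name = forms.CharField(max_length=100)

        ItemFormSet = formset_factory(ItemForm, extra=3)

        def items_view(request):  # hypothetical view
            if request.method == 'POST':
                formset = ItemFormSet(request.POST)
                if formset.is_valid():
                    names = [f.cleaned_data['name'] for f in formset.forms
                             if f.cleaned_data]
                    # ... do something with names ...
            else:
                formset = ItemFormSet()
            # pass `formset` to the template and render each form in it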

    Read the article

  • How to customize pickle for django model objects

    - by muudscope
    I need to pickle a complex object that refers to django model objects. The standard pickling process stores a denormalized object in the pickle. So if the object changes on the database between pickling and unpickling, the model is now out of date. (I know this is true with in-memory objects too, but the pickling is a convenient time to address it.) So what I'd like is a way to not pickle the full django model object. Instead just store its class and id, and re-fetch the contents from the database on load. Can I specify a custom pickle method for this class? I'm happy to write a wrapper class around the django model to handle the lazy fetching from db, if there's a way to do the pickling.
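
    A sketch of the wrapper idea mentioned above (the class name is made up): keep only the model class and primary key in the pickle via __getstate__, and re-fetch lazily on first access after unpickling. Pickling the class object itself is fine, since classes are pickled by reference.

        class ModelRef(object):
            """Hypothetical lazy reference to a Django model instance."""

            def __init__(self, instance):
                self._cls = type(instance)
                self._pk = instance.pk
                self._obj = instance

            def get(self):
                # Re-fetch from the database the first time after unpickling.
                if self._obj is None:
                    self._obj = self._cls.objects.get(pk=self._pk)
                return self._obj

            def __getstate__(self):
                # Store only the class (by reference) and the primary key.
                return {'_cls': self._cls, '_pk': self._pk, '_obj': None}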

    Read the article

  • How to deserialize an object with pyYaml using safe_load?

    - by systempuntoout
    Having a snippet like this:

        import yaml

        class User(object):
            def __init__(self, name, surname):
                self.name = name
                self.surname = surname

        user = User('spam', 'eggs')
        serialized_user = yaml.dump(user)
        # Network
        deserialized_user = yaml.load(serialized_user)
        print "name: %s, sname: %s" % (deserialized_user.name, deserialized_user.surname)

    The YAML docs say that it is not safe to call yaml.load with any data received from an untrusted source; so, what do I need to modify in my snippet/class to use the safe_load method? Is it possible?
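
    A sketch of one supported route: deriving from yaml.YAMLObject and pointing yaml_loader at SafeLoader registers a constructor for the class's own tag, so yaml.safe_load() can rebuild it while still refusing arbitrary Python objects.

        import yaml

        class User(yaml.YAMLObject):
            yaml_tag = u'!User'
            yaml_loader = yaml.SafeLoader  # register with the safe loader

            def __init__(self, name, surname):
                self.name = name
                self.surname = surname

        user = User('spam', 'eggs')
        serialized_user = yaml.dump(user)
        deserialized_user = yaml.safe_load(serialized_user)
        print "name: %s, sname: %s" % (deserialized_user.name, deserialized_user.surname)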

    Read the article

  • Uploading file from file object with PyCurl

    - by Tom
    I'm attempting to upload a file like this:

        import pycurl

        c = pycurl.Curl()
        values = [
            ("name", "tom"),
            ("image", (pycurl.FORM_FILE, "tom.png"))
        ]
        c.setopt(c.URL, "http://upload.com/submit")
        c.setopt(c.HTTPPOST, values)
        c.perform()
        c.close()

    This works fine. However, it only works if the file is local. If I were to fetch the image like this:

        import urllib2
        resp = urllib2.urlopen("http://upload.com/people/tom.png")

    how would I pass resp.fp as a file object instead of writing it to a file and passing the filename? Is this possible?
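
    A sketch of one approach, assuming your pycurl build exposes FORM_BUFFER/FORM_BUFFERPTR (the wrappers for libcurl's CURLFORM_BUFFER/CURLFORM_BUFFERPTR): read the remote image into memory and attach the bytes as an in-memory upload, so no temporary file is needed. Note this buffers the whole file in RAM.

        import urllib2
        import pycurl

        data = urllib2.urlopen("http://upload.com/people/tom.png").read()

        c = pycurl.Curl()
        values = [
            ("name", "tom"),
            ("image", (pycurl.FORM_BUFFER, "tom.png",      # filename reported to the server
                       pycurl.FORM_BUFFERPTR, data)),      # the actual bytes
        ]
        c.setopt(c.URL, "http://upload.com/submit")
        c.setopt(c.HTTPPOST, values)
        c.perform()
        c.close()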

    Read the article

  • Open/Close database connection in django

    - by mp0int
    I am using Django with PostgreSQL as my DBMS. I would like a setting that enables/disables the database connection. When the connection is set to closed (in settings.py), the site should display a message such as "maintenance mode" or something like that, and Django should not show any db connection error messages (or mail them to admins). Ideally, Django would not try to connect to the database at all.
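
    A sketch of one way to do this with old-style middleware: short-circuit every request before any view (and therefore its queries) runs. MAINTENANCE_MODE is an assumed custom setting, and middleware listed before this one (sessions, auth) could still touch the database, so ordering in MIDDLEWARE_CLASSES matters.

        from django.conf import settings
        from django.http import HttpResponse

        class MaintenanceModeMiddleware(object):
            def process_request(self, request):
                if getattr(settings, 'MAINTENANCE_MODE', False):
                    # Returning a response here stops processing before the view.
                    return HttpResponse("Maintenance mode - please try again later.",
                                        status=503)
                return None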

    Read the article

  • Hide deprecated methods from tab completion

    - by Morgoth
    I would like to control which methods appear when a user uses tab-completion on a custom object in ipython - in particular, I want to hide functions that I have deprecated. I still want these methods to be callable, but I don't want users to see them and start using them if they are inspecting the object. Is this something that is possible?
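
    A sketch of one possibility: recent IPython versions drive completion from dir(), and dir() honors a custom __dir__ method (Python 2.6+), so filtering deprecated names there hides them from tab completion while leaving them callable. Older IPython releases may need a custom completer instead, so treat this as an assumption to verify.

        class Api(object):
            # Hypothetical object with one deprecated method we want to hide.
            _deprecated = frozenset(['old_fetch'])

            def old_fetch(self):      # still callable, just hidden from completion
                return self.fetch()

            def fetch(self):
                return 42

            def __dir__(self):
                names = set(dir(type(self))) | set(self.__dict__)
                return sorted(n for n in names if n not in self._deprecated)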

    Read the article

  • Math on Django Templates

    - by Leandro Abilio
    Here's another question about Django. I have this code in views.py:

        cursor = connections['cdr'].cursor()
        calls = cursor.execute("SELECT * FROM cdr where calldate > '%s'" % (start_date))
        result = [SQLRow(cursor, r) for r in cursor.fetchall()]
        return render_to_response("cdr_user.html", {'calls': result},
                                  context_instance=RequestContext(request))

    I use a MySQL query like that because the database is not part of a Django project. My cdr table has a field called duration; I need to divide that by 60 and multiply the result by a float number like 0.16. Is there a way to do this multiplication using template tags? If not, is there a good way to do it in my views? My template looks like this:

        {% for call in calls %}
        <tr class="{% cycle 'odd' 'even' %}">
            <td valign="middle" align="center"><h3>{{ call.calldate }}</h3></td>
            <td valign="middle" align="center"><h3>{{ call.disposition }}</h3></td>
            <td valign="middle" align="center"><h3>{{ call.dst }}</h3></td>
            <td valign="middle" align="center"><h3>{{ call.billsec }}</h3></td>
            <td valign="middle" align="center">{{ (call.billsec/60)*0.16 }}</td>
        </tr>
        {% endfor %}

    The last cell is where I need to show the value; I know "(call.billsec/60)*0.16" cannot actually be written there. I wrote it just to represent what I need to show.
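
    A sketch of a custom template filter that does the arithmetic (the module and filter names are made up; the file lives in a templatetags package inside an installed app):

        # myapp/templatetags/call_filters.py  (hypothetical path)
        from django import template

        register = template.Library()

        @register.filter
        def billed(billsec, rate=0.16):
            """Seconds divided by 60, times a per-minute rate; '' on bad input."""
            try:
                return (float(billsec) / 60) * float(rate)
            except (TypeError, ValueError):
                return ''

    In the template, {% load call_filters %} at the top, then {{ call.billsec|billed }} in the last cell (or {{ call.billsec|billed:0.20 }} to pass a different rate).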

    Read the article

  • Repetitive content in docstrings

    - by Morgoth
    What are good ways to deal with repetitive content in docstrings? I have many functions that take 'standard' arguments, which have to be explained in the docstring, but it would be nice to write the relevant parts of the docstring only once, as this would be much easier to maintain and update. I naively tried the following:

        arg_a = "a: a very common argument"

        def test(a):
            '''
            Arguments:
            %s
            ''' % arg_a
            pass

    But this does not work, because when I do help(test) I don't see the docstring. Is there a good way to do this?
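
    A sketch of one workaround: a string built with % at definition time is an expression, not a docstring literal, so __doc__ stays empty; a small decorator can do the substitution after the function object exists, since function __doc__ is writable.

        _COMMON_ARGS = """    a: a very common argument"""

        def fill_docstring(func):
            # Substitute shared text into the docstring after definition.
            if func.__doc__:
                func.__doc__ = func.__doc__ % {'common_args': _COMMON_ARGS}
            return func

        @fill_docstring
        def test(a):
            """Do something with a.

            Arguments:
            %(common_args)s
            """
            pass

    help(test) then shows the substituted text.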

    Read the article

  • Is it possible to calculate distance on GeoDjango in a SELECT statement?

    - by alex
    I am using MySQL. I have a table with one column, a Point field. I want to SELECT all rows that have a point with a distance of less than 50 meters from my given point. Simple enough, right? Below is how it's done in raw SQL. But of course, I want to use GeoDjango to do this.

        cursor.execute("SELECT * FROM project_location WHERE\
        (GLength(LineStringFromWKB(LineString(asbinary(utm), asbinary(PointFromWKB(point(%s, %s)))))) < 50)\
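
    For comparison, a GeoDjango-style sketch. The model name Location and the example coordinates are assumptions, and MySQL's spatial backend has historically supported only a subset of distance lookups, so this may need a spatially richer backend (e.g. PostGIS) or verification against your Django/MySQL versions.

        from django.contrib.gis.geos import Point
        from django.contrib.gis.measure import D

        pnt = Point(-87.6298, 41.8781)  # example point; must be in the same SRID as `utm`
        nearby = Location.objects.filter(utm__distance_lte=(pnt, D(m=50)))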

    Read the article

  • A more pythonic way to express a conditionally bounded loop?

    - by msw
    I've got a loop that wants to execute to exhaustion or until some user-specified limit is reached. I've got a construct that looks bad, yet I can't seem to find a more elegant way to express it; is there one?

        def ello_bruce(limit=None):
            for i in xrange(10**5):
                if predicate(i):
                    if not limit is None:
                        limit -= 1
                        if limit <= 0:
                            break

        def predicate(i):
            # lengthy computation
            return True

    Holy nesting! There has to be a better way. For purposes of a working example, xrange is used where I normally have an iterator of finite but unknown length (and predicate sometimes returns False).
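
    A sketch of one itertools-based alternative: islice treats a stop of None as "no limit", so the capped and uncapped cases collapse into a single loop over the items that pass the predicate.

        from itertools import ifilter, islice

        def ello_bruce(limit=None):
            matching = ifilter(predicate, xrange(10**5))
            for i in islice(matching, limit):
                pass  # do the real work with i here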

    Read the article

  • Running an allocation simulation repeatedly breaks after the first run.

    - by Az
    Background I have a bunch of students, their desired projects and the supervisors for the respective projects. I'm running a battery of simulations to see which projects the students end up with, which will allow me to get some useful statistics required for feedback. So, this is essentially a Monte-Carlo simulation where I'm randomising the list of students and then iterating through it, allocating projects until I hit the end of the list. Then the process is repeated again. Note that, within a single session, after each successful allocation of a project the following take place: + the project is set to allocated and cannot be given to another student + the supervisor has a fixed quota of students he can supervise. This is decremented by 1 + Once the quota hits 0, all the projects from that supervisor become blocked and this has the same effect as a project being allocated Code def resetData(): for student in students.itervalues(): student.allocated_project = None for supervisor in supervisors.itervalues(): supervisor.quota = 0 for project in projects.itervalues(): project.allocated = False project.blocked = False The role of resetData() is to "reset" certain bits of the data. For example, when a project is successfully allocated, project.allocated for that project is flipped to True. While that's useful for a single run, for the next run I need to be deallocated. Above I'm iterating through thee three dictionaries - one each for students, projects and supervisors - where the information is stored. The next bit is the "Monte-Carlo" simulation for the allocation algorithm. sesh_id = 1 for trial in range(50): for id in randomiseStudents(1): stud_id = id student = students[id] if not student.preferences: # Ignoring the students who've not entered any preferences for rank in ranks: temp_proj = random.choice(list(student.preferences[rank])) if not (temp_proj.allocated or temp_proj.blocked): alloc_proj = student.allocated_proj_ref = temp_proj.proj_id alloc_proj_rank = student.allocated_rank = rank successActions(temp_proj) temp_alloc = Allocated(sesh_id, stud_id, alloc_proj, alloc_proj_rank) print temp_alloc # Explained break sesh_id += 1 resetData() # Refer to def resetData() above All randomiseStudents(1) does is randomise the order of students. Allocated is a class defined as such: class Allocated(object): def __init__(self, sesh_id, stud_id, alloc_proj, alloc_proj_rank): self.sesh_id = sesh_id self.stud_id = stud_id self.alloc_proj = alloc_proj self.alloc_proj_rank = alloc_proj_rank def __repr__(self): return str(self) def __str__(self): return "%s - Student: %s (Project: %s - Rank: %s)" %(self.sesh_id, self.stud_id, self.alloc_proj, self.alloc_proj_rank) Output and problem Now if I run this I get an output such as this (truncated): 1 - Student: 7720 (Project: 1100241 - Rank: 1) 1 - Student: 7832 (Project: 1100339 - Rank: 1) 1 - Student: 7743 (Project: 1100359 - Rank: 1) 1 - Student: 7820 (Project: 1100261 - Rank: 2) 1 - Student: 7829 (Project: 1100270 - Rank: 1) . . . 1 - Student: 7822 (Project: 1100280 - Rank: 1) 1 - Student: 7792 (Project: 1100141 - Rank: 7) 2 - Student: 7739 (Project: 1100267 - Rank: 1) 3 - Student: 7806 (Project: 1100272 - Rank: 1) . . . 
45 - Student: 7806 (Project: 1100272 - Rank: 1) 46 - Student: 7714 (Project: 1100317 - Rank: 1) 47 - Student: 7930 (Project: 1100343 - Rank: 1) 48 - Student: 7757 (Project: 1100358 - Rank: 1) 49 - Student: 7759 (Project: 1100269 - Rank: 1) 50 - Student: 7778 (Project: 1100301 - Rank: 1) Basically, it works perfectly for the first run, but on subsequent runs leading upto the nth run, in this case 50, only a single student-project allocation pair is returned. Thus, the main issue I'm having trouble with is figuring out what is causing this anomalous behaviour especially since the first run works smoothly. Thanks in advance, Az

    Read the article

  • List as a key to a dictionary

    - by williamx
    Let's say I have a list:

        a = ['apple', 'orange']

    and a dictionary:

        d = {'apple': [2,4], 'carrot': [44,33], 'orange': [345,667]}

    How can I use the list a as keys to look up values in the dictionary d? I want the result to be written to a comma-separated text file like this:

        apple, orange
        2, 44
        4, 33
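
    A sketch of one way to do the lookup and write the rows (it pairs up the value lists of the selected keys, so the numbers come from 'apple' and 'orange'; the output filename is made up):

        a = ['apple', 'orange']
        d = {'apple': [2, 4], 'carrot': [44, 33], 'orange': [345, 667]}

        columns = [d[key] for key in a]           # [[2, 4], [345, 667]]
        with open('out.txt', 'w') as f:
            f.write(', '.join(a) + '\n')          # header row
            for row in zip(*columns):             # transpose: (2, 345), (4, 667)
                f.write(', '.join(str(v) for v in row) + '\n')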

    Read the article

  • Why is numpy's einsum faster than numpy's built-in functions?

    - by Ophion
    Let's start with three arrays of dtype=np.double. Timings are performed on an Intel CPU using numpy 1.7.1 compiled with icc and linked to Intel's MKL. An AMD CPU with numpy 1.6.1 compiled with gcc without MKL was also used to verify the timings. Please note the timings scale nearly linearly with system size and are not due to the small overhead incurred in the numpy functions' if statements; that difference would show up in microseconds, not milliseconds:

        arr_1D = np.arange(500, dtype=np.double)
        large_arr_1D = np.arange(100000, dtype=np.double)
        arr_2D = np.arange(500**2, dtype=np.double).reshape(500, 500)
        arr_3D = np.arange(500**3, dtype=np.double).reshape(500, 500, 500)

    First let's look at the np.sum function:

        np.all(np.sum(arr_3D) == np.einsum('ijk->', arr_3D))
        True

        %timeit np.sum(arr_3D)
        10 loops, best of 3: 142 ms per loop

        %timeit np.einsum('ijk->', arr_3D)
        10 loops, best of 3: 70.2 ms per loop

    Powers:

        np.allclose(arr_3D*arr_3D*arr_3D, np.einsum('ijk,ijk,ijk->ijk', arr_3D, arr_3D, arr_3D))
        True

        %timeit arr_3D*arr_3D*arr_3D
        1 loops, best of 3: 1.32 s per loop

        %timeit np.einsum('ijk,ijk,ijk->ijk', arr_3D, arr_3D, arr_3D)
        1 loops, best of 3: 694 ms per loop

    Outer product:

        np.all(np.outer(arr_1D, arr_1D) == np.einsum('i,k->ik', arr_1D, arr_1D))
        True

        %timeit np.outer(arr_1D, arr_1D)
        1000 loops, best of 3: 411 us per loop

        %timeit np.einsum('i,k->ik', arr_1D, arr_1D)
        1000 loops, best of 3: 245 us per loop

    All of the above are twice as fast with np.einsum. These should be apples-to-apples comparisons, as everything is specifically of dtype=np.double. I would expect the speed-up in an operation like this:

        np.allclose(np.sum(arr_2D*arr_3D), np.einsum('ij,oij->', arr_2D, arr_3D))
        True

        %timeit np.sum(arr_2D*arr_3D)
        1 loops, best of 3: 813 ms per loop

        %timeit np.einsum('ij,oij->', arr_2D, arr_3D)
        10 loops, best of 3: 85.1 ms per loop

    Einsum seems to be at least twice as fast for np.inner, np.outer, np.kron, and np.sum regardless of axes selection. The primary exception is np.dot, as it calls DGEMM from a BLAS library. So why is np.einsum faster than other numpy functions that are equivalent? The DGEMM case, for completeness:

        np.allclose(np.dot(arr_2D, arr_2D), np.einsum('ij,jk', arr_2D, arr_2D))
        True

        %timeit np.einsum('ij,jk', arr_2D, arr_2D)
        10 loops, best of 3: 56.1 ms per loop

        %timeit np.dot(arr_2D, arr_2D)
        100 loops, best of 3: 5.17 ms per loop

    The leading theory is from @seberg's comment that np.einsum can make use of SSE2, but numpy's ufuncs will not until numpy 1.8 (see the change log). I believe this is the correct answer, but have not been able to confirm it. Some limited proof can be found by changing the dtype of the input arrays and observing the speed difference, and the fact that not everyone observes the same trends in timings.

    Read the article

  • How can I test to see if a class contains a particular attribute?

    - by BryanWheelock
    How can I test to see if a class contains a particular attribute?

        In [14]: user = User.objects.get(pk=2)

        In [18]: user.__dict__
        Out[18]:
        {'date_joined': datetime.datetime(2010, 3, 17, 15, 20, 45),
         'email': u'[email protected]',
         'first_name': u'',
         'id': 2L,
         'is_active': 1,
         'is_staff': 0,
         'is_superuser': 0,
         'last_login': datetime.datetime(2010, 3, 17, 16, 15, 35),
         'last_name': u'',
         'password': u'sha1$44a2055f5',
         'username': u'DickCheney'}

        In [25]: hasattr(user, 'username')
        Out[25]: True

        In [26]: hasattr(User, 'username')
        Out[26]: False

    I'm having a weird bug where more attributes are showing up than I actually define, and I want to conditionally stop this, e.g.:

        if not hasattr(User, 'karma'):
            User.add_to_class('karma', models.PositiveIntegerField(default=1))

    Read the article

  • My method is not being recognized within my own program. Newbie mistake probably.

    - by Sergio Tapia
    Here's my code:

        sentenceToTranslate = raw_input("Please write in the sentence you want to translate: ")
        words = sentenceToTranslate.split(" ")
        for word in words:
            if isVowel(word[0]):
                print "TEST"

        def isVowel(letter):
            if letter.lower() == "a" or letter.lower() == "e" or letter.lower() == "i" or letter.lower() == "o" or letter.lower() == "u":
                return True
            else:
                return False

    The error I get is:

        NameError: name 'isVowel' is not defined

    What am I doing wrong?
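
    A sketch of the fix: the module runs top to bottom, so isVowel must be defined before the loop that calls it; a membership test also replaces the long chain of comparisons.

        def isVowel(letter):
            # True if the (single) letter is a vowel, case-insensitively.
            return letter.lower() in "aeiou"

        sentenceToTranslate = raw_input("Please write in the sentence you want to translate: ")
        for word in sentenceToTranslate.split(" "):
            if isVowel(word[0]):
                print "TEST"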

    Read the article

  • C#, create virtual directory on remote system

    - by sankar
    The following code create only virtual directory on local system , but i need to create on remote sytem ..help me.. Thanks, Sankar DirectoryEntry iisServer; string VirDirSchemaName = "IIsWebVirtualDir"; public DirectoryEntry Connect() { try { if (txtPath.Text.ToLower().Trim() == "localhost") iisServer = new DirectoryEntry("IIS://" + txtPath.Text.Trim() + "/W3SVC/1/Root"); else iisServer = new DirectoryEntry("IIS://" + txtPath.Text + "/Schema/AppIsolated", "XYZ", "xyz"); iisServer.Dispose(); } catch (Exception e) { throw new Exception("Could not connect to: " + txtPath.Text.Trim(), e); } return iisServer; } public void CreateVirtualDirectory(DirectoryEntry iisServer) { DirectoryEntry folderRoot = new DirectoryEntry("IIS://" + txtPath.Text + "/W3SVC/1/Root", "XYZ", "xyz"); folderRoot.RefreshCache(); folderRoot.CommitChanges(); try { DirectoryEntry newVirDir = folderRoot.Children.Add(txtName.Text, VirDirSchemaName); newVirDir.CommitChanges(); newVirDir.Properties["AccessRead"].Add(true); newVirDir.Properties["Path"].Add(@"\\abc\abc"); newVirDir.Invoke("AppCreate", true); newVirDir.CommitChanges(); folderRoot.CommitChanges(); newVirDir.Close(); folderRoot.CommitChanges(); } catch (Exception e) { throw new Exception("Error! Virtual Directory Not Created", e); } } protected void btnCreate_Click(object sender, EventArgs e) { try { CreateVirtualDirectory(Connect()); } catch (Exception ex) { Response.Write(ex.Message); } } protected void Page_Load(object sender, EventArgs e) { }

    Read the article

  • Sizers... - wxPython

    - by Francisco Aleixo
    Ok, so I'm learning about sizers in wxPython and I was wondering if it was possible to do something like: ============================================== |WINDOW TITLE _ [] X| |============================================| |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx| |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx| |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx| |xxxxxxxxxxxxxxxxxxNOTEBOOKxxxxxxxxxxxxxxxxxx| |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx| |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx| |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx| |________ ___________| |IMAGE | |LoginForm | |________| |___________| ============================================== NOTE:Yeah, I literally got this from http://stackoverflow.com/questions/1892110/wxpython-picking-the-right-sizer-to-use-in-an-application With NOTEBOOK expanded to left and bottom, IMAGE to align to left and bottom and loginform align to right and bottom and I managed to do almost everything but now I have a problem.. The problem is that I can't align Loginform and Image separately (im using Box Sizers), and I would like to. This is the code I'm using that is causing the problem at the moment, any help is appreciated. NOTE:The code might be (HUGELY) sloppy as I'm still learning box sizers. sizer = wx.BoxSizer(wx.VERTICAL) sizer1 = wx.BoxSizer(wx.HORIZONTAL) sizer1.Add(self.nb,1, wx.EXPAND) sizer.Add(sizer1,1, wx.LEFT | wx.RIGHT | wx.EXPAND, 10) sizer.Add((-1, 25)) sizer2 = wx.BoxSizer(wx.VERTICAL) sizer2.Add(self.userLabel, 0) sizer2.Add(self.userText, 0) sizer2.Add(self.pwdLabel, 0) sizer2.Add(self.pwdText, 0) sizer2.Add(self.rem, 0) sizer3 = wx.BoxSizer(wx.HORIZONTAL) sizer3.Add(self.login, 0) sizer3.Add(self.sair,0, wx.LEFT, 5) sizer2.Add(sizer3, 0) sizer4 = wx.BoxSizer(wx.HORIZONTAL) sizer4.Add(image, 1, wx.LEFT | wx.BOTTOM) sizer4.Add(sizer2,0, wx.RIGHT | wx.BOTTOM , 5) sizer.Add(sizer4,0, wx.ALIGN_RIGHT | wx.RIGHT, 10)

    Read the article
