Search Results

Search found 13534 results on 542 pages for 'python 3 x'.

Page 413/542

  • how to trigger a script located on a machine in one domain from a machine on another domain

    - by user326814
    Hi, I am basically from QA. What we testers do each day is: 1. Open a web browser and go to http://11.12.13.27:8080/cruisecontrol (since we are on a particular network, only we can access this). 2. Check whether the latest nightly build was successful; if it was, deploy it to a test environment by clicking the 'Deploy this build' link. Deploying takes around 1-1.5 hours, and during this time we cannot use our machines for anything else; only after deploying can we begin to test. What I want to know is whether the following is possible: while still at home in the morning, I use something that triggers a script (which lives on my machine at the workplace), and that script in turn deploys the build automatically. I already have such a script; how can I trigger it from my home machine? Is it even possible? For example, the external trigger would say "Deploy xxx branch on yyy test environment", the script on my workplace machine would be invoked, and the build would be deployed before I actually get to my desk. Please help. I am from QA and have no idea about any of this.
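
    One possible approach, sketched below: leave a small HTTP listener running on the workplace machine and hit it from home (or from anything that can reach that host). The port number, the script name and the query parameters are placeholders, not part of the original setup.

        import subprocess
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.parse import parse_qs, urlparse

        class DeployHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                query = parse_qs(urlparse(self.path).query)
                branch = query.get('branch', ['trunk'])[0]
                env = query.get('env', ['test'])[0]
                # hand off to the existing deploy script (hypothetical name and arguments)
                subprocess.Popen(['python', 'deploy_build.py', branch, env])
                self.send_response(200)
                self.end_headers()
                self.wfile.write(b'Deployment started\n')

        HTTPServer(('', 8444), DeployHandler).serve_forever()

    Visiting http://<workplace-machine>:8444/?branch=xxx&env=yyy from home would then kick off the deploy, assuming the workplace machine is reachable (for example over a VPN). If the trigger can simply be "every morning at 07:00", cron or the Windows Task Scheduler is an even simpler route.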


  • How can I receive percent encoded slashes with Django on App Engine?

    - by J. Frankenstein
    I'm using Django with Google's App Engine. I want to send information to the server with percent-encoded slashes, e.g. a request like http://localhost/turtle/waxy%2Fsmooth that would match against a URL pattern like r'^/turtle/(?P<type>([A-Za-z]|%2F)+)$'. The request reaches the server intact, but at some point before it is compared against the regex the %2F is converted into a forward slash. What can I do to stop the %2Fs from being converted into forward slashes? Thanks!
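
    One workaround seen in similar setups (a sketch, not App Engine-specific guidance): double-encode the slash on the client, so the single decode pass done before URL resolution still leaves a literal %2F for the pattern to match, and decode once more in the view. The view name below is an assumption.

        from urllib.parse import quote, unquote

        raw_type = 'waxy/smooth'
        path = '/turtle/' + quote(quote(raw_type, safe=''), safe='')   # -> /turtle/waxy%252Fsmooth

        # urls.py keeps the pattern shown above; the view then undoes the extra layer:
        def turtle_detail(request, type):
            actual_type = unquote(type)   # 'waxy/smooth'
            ...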


  • Why does a data race occur with threading but not with gevent?

    - by onlytiancai
    My test code is below. With plain threading the final count is not 5,000,000, so there is a data race; with gevent the count is exactly 5,000,000 and there is no race. Does gevent execute "count += 1" atomically within each coroutine, rather than letting it be split into several CPU instructions?

        # -*- coding: utf-8 -*-
        import threading

        use_gevent = True
        use_debug = False
        cycles_count = 100*10000

        if use_gevent:
            from gevent import monkey
            monkey.patch_thread()

        count = 0

        class Counter(threading.Thread):
            def __init__(self, name):
                self.thread_name = name
                super(Counter, self).__init__(name=name)

            def run(self):
                global count
                for i in xrange(cycles_count):
                    if use_debug:
                        print '%s:%s' % (self.thread_name, count)
                    count = count + 1

        counters = [Counter('thread:%s' % i) for i in range(5)]
        for counter in counters:
            counter.start()
        for counter in counters:
            counter.join()
        print 'count=%s' % count
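
    For what it's worth, the usual explanation: under gevent's monkey patching each "thread" becomes a greenlet, and greenlets only yield control at blocking operations or explicit sleeps; this loop never blocks, so each greenlet runs to completion and the increments never interleave. Preemptive OS threads, by contrast, can be switched out between the load and the store that make up the increment. A minimal sketch showing that the increment is several bytecode operations, not one:

        import dis

        count = 0

        def bump():
            global count
            count += 1

        dis.dis(bump)   # a LOAD, an add, then a STORE -- a thread can be preempted
                        # between them, which is where updates get lost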


  • ValueError: setting an array element with a sequence.

    - by MedicalMath
    This code:

        import numpy as p

        def firstfunction():
            UnFilteredDuringExSummaryOfMeansArray = []
            MeanOutputHeader = ['TestID', 'ConditionName', 'FilterType', 'RRMean', 'HRMean',
                                'dZdtMaxVoltageMean', 'BZMean', 'ZXMean', 'LVETMean', 'Z0Mean',
                                'StrokeVolumeMean', 'CardiacOutputMean', 'VelocityIndexMean']
            dataMatrix = BeatByBeatMatrixOfMatrices[column]
            roughTrimmedMatrix = p.array(dataMatrix[1:, 1:17])
            trimmedMatrix = p.array(roughTrimmedMatrix, dtype=p.float64)
            myMeans = p.mean(trimmedMatrix, axis=0, dtype=p.float64)
            conditionMeansArray = [TestID, testCondition, 'UnfilteredBefore', myMeans[3], myMeans[4],
                                   myMeans[6], myMeans[9], myMeans[10], myMeans[11], myMeans[12],
                                   myMeans[13], myMeans[14], myMeans[15]]
            UnFilteredDuringExSummaryOfMeansArray.append(conditionMeansArray)
            secondfunction(UnFilteredDuringExSummaryOfMeansArray)
            return

        def secondfunction(UnFilteredDuringExSummaryOfMeansArray):
            RRDuringArray = p.array(UnFilteredDuringExSummaryOfMeansArray, dtype=p.float64)[1:, 3]
            return

        firstfunction()

    throws this error message:

        File "mypath\mypythonscript.py", line 3484, in secondfunction
          RRDuringArray = p.array(UnFilteredDuringExSummaryOfMeansArray, dtype=p.float64)[1:, 3]
        ValueError: setting an array element with a sequence.

    However, this code works:

        import numpy as p

        a = range(24)
        b = p.reshape(a, (6, 4))
        c = p.array(b, dtype=p.float64)[:, 2]

    I re-arranged the code a bit to put it into a cogent posting, but it should behave more or less the same way. Can anyone show me how to fix the broken code above so that it stops throwing the error?
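
    For reference, one way this particular ValueError is commonly reproduced (a sketch, not a diagnosis of the code above): asking NumPy to build a rectangular float array out of rows of unequal length, or out of rows whose elements are themselves sequences.

        import numpy as np

        # Older NumPy raises "ValueError: setting an array element with a sequence."
        # for ragged input like this; newer versions add more detail to the message.
        rows = [[1.0, 2.0, 3.0],
                [4.0, 5.0]]          # second row is shorter
        try:
            np.array(rows, dtype=np.float64)
        except ValueError as exc:
            print(exc)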


  • what does 'extra' mean in this django code?

    - by zjm1126
        TOPIC_COUNT_SQL = """
        SELECT COUNT(*)
        FROM topics_topic
        WHERE topics_topic.object_id = maps_map.id
        AND topics_topic.content_type_id = %s
        """

        MEMBER_COUNT_SQL = """
        SELECT COUNT(*)
        FROM maps_map_members
        WHERE maps_map_members.map_id = maps_map.id
        """

        maps = maps.extra(select=SortedDict([
            ('member_count', MEMBER_COUNT_SQL),
            ('topic_count', TOPIC_COUNT_SQL),
        ]), select_params=(content_type.id,))

    I don't understand what this means. Thanks.
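
    Briefly: QuerySet.extra(select=...) injects each SQL fragment as an additional column in the SELECT clause of the queryset's query, with select_params filling in the %s placeholder, so every object that comes back carries the two computed counts as ordinary attributes. A small sketch of how the result is then used (variable names assumed):

        for m in maps:                     # each row now has the two extra columns
            print(m.member_count, m.topic_count)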


  • Converting Numpy Lstsq residual value to R^2

    - by whatnick
    I am performing a least squares regression as below (univariate). I would like to express the significance of the result in terms of R^2. NumPy returns an unscaled residual value; what would be a sensible way of normalising it?

        field_clean, back_clean = rid_zeros(backscatter, field_data)
        num_vals = len(field_clean)
        x = field_clean[:, row:row+1]
        y = 10*log10(back_clean)
        A = hstack([x, ones((num_vals, 1))])
        soln = lstsq(A, y)
        m, c = soln[0]
        residues = soln[1]
        print residues
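
    A minimal sketch of the usual normalisation, assuming residues is the sum-of-squared-residuals array returned by lstsq (it can be empty if the system is rank-deficient) and y is the vector fitted above: R^2 = 1 - SS_res / SS_tot, where SS_tot is the total sum of squares about the mean of y.

        import numpy as np

        ss_res = residues[0]                      # lstsq residual: sum of squared errors
        ss_tot = np.sum((y - np.mean(y)) ** 2)    # total variance of the observations
        r_squared = 1.0 - ss_res / ss_tot
        print(r_squared)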


  • How can I measure distance with tastypie and geodjango?

    - by Twitch
    Using Tastypie and GeoDjango, I'm trying to return results of buildings located within 1 mile of a point. The Tastypie documentation states that distance lookups are not yet supported, but I am finding examples of people getting it to work, such as this discussion and this discussion on StackOverflow, but no working code examples that can be applied. The idea I am trying to work with is that if I append a GET command to the end of a URL, then nearby locations are returned, for example: http://website.com/api/?format=json&building_point__distance_lte=[{"type": "Point", "coordinates": [153.09537, -27.52618]},{"type": "D", "m" : 1}] But when I try that, all I get back is: {"error": "Invalid resource lookup data provided (mismatched type)."} I've been poring over the Tastypie documentation for days now and just can't figure out how to implement this. I'd provide more examples, but I know they'd all be terrible. All advice is appreciated, thank you!
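
    One pattern that shows up in those discussions (a sketch under assumptions: a hypothetical Building model with a PointField named building_point, and a simpler query parameter instead of the GeoJSON-in-the-URL form): override build_filters on the resource and translate the parameter into a GeoDjango distance lookup yourself.

        from django.contrib.gis.geos import Point
        from django.contrib.gis.measure import D
        from tastypie.resources import ModelResource

        class BuildingResource(ModelResource):
            class Meta:
                queryset = Building.objects.all()   # hypothetical model
                resource_name = 'building'

            def build_filters(self, filters=None):
                filters = filters or {}
                orm_filters = super(BuildingResource, self).build_filters(filters)
                if 'near' in filters:               # e.g. ?near=153.09537,-27.52618
                    lon, lat = (float(v) for v in filters['near'].split(','))
                    orm_filters['building_point__distance_lte'] = (Point(lon, lat), D(mi=1))
                return orm_filters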


  • Detect a tag in an SVN hook script

    - by Mark
    Is there a way that I can detect a tag/branch in SVN? Can I find out what the destination of a commit is? I want to check that all externals are set to a specific version of the folder they are pointing to; I don't want to prevent commits to a tag with this script. I am writing the script with the Python bindings.
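
    A sketch of one way to see where a commit is headed without blocking it: call svnlook from inside the pre-commit hook (the repository path and transaction name are the two arguments Subversion passes to the hook), rather than going through the bindings. The conventional trunk/branches/tags layout is assumed.

        import subprocess
        import sys

        repo, txn = sys.argv[1], sys.argv[2]
        output = subprocess.check_output(['svnlook', 'changed', '-t', txn, repo])

        changed_paths = [line.split(None, 1)[1]
                         for line in output.decode().splitlines() if line.strip()]
        touches_tag = any(p.startswith('tags/') or '/tags/' in p for p in changed_paths)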


  • How do I parse youtube xml for a specific entry?

    - by sharataka
    I am trying to return the duration of the video but am having trouble.

        # YOUTUBE FEED
        # download the file:
        file = urllib2.urlopen('http://gdata.youtube.com/feeds/api/videos/2s0vk2wEMtA')
        # convert to string:
        data = file.read()
        # close file because we dont need it anymore:
        file.close()
        # entire feed
        root = etree.fromstring(data)

        for entry in root:
            for item in entry:
                print item

    When I print item, I see this as the last element:

        <Element '{http://gdata.youtube.com/schemas/2007}duration' at 0x10c4fb7d0>

    But I don't know how to get the value from it. Any advice?
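
    A sketch of getting at the value directly: in the old GData feeds the duration is stored in the element's "seconds" attribute (the element has no text), so a namespace-qualified find is enough instead of looping over the tree.

        YT_NS = '{http://gdata.youtube.com/schemas/2007}'

        duration = root.find('.//' + YT_NS + 'duration')   # searches the whole tree
        if duration is not None:
            print(duration.get('seconds'))                 # e.g. '243'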


  • sip.conf configuration file - add new line to each record

    - by Flukey
    I have a sip configuration file which looks like this:

        [1664]
        username=1664
        mailbox=1664@8360
        host=192.168.254.3
        type=friend
        subscribemwi=no

        [1679]
        username=1679
        mailbox=1679@8360
        host=192.168.254.3
        type=friend
        subscribemwi=no

        [1700]
        username=1700
        mailbox=1700@8360
        host=192.168.254.3
        type=friend
        subscribemwi=no

        [1701]
        username=1701
        mailbox=1701@8360
        host=192.168.254.3
        type=friend
        subscribemwi=no

    For each record I need to add another line (a vmexten entry matching the extension), so the above becomes:

        [1664]
        username=1664
        mailbox=1664@8360
        host=192.168.254.3
        type=friend
        subscribemwi=no
        vmexten=1664

        [1679]
        username=1679
        mailbox=1679@8360
        host=192.168.254.3
        type=friend
        subscribemwi=no
        vmexten=1679

        [1700]
        username=1700
        mailbox=1700@8360
        host=192.168.254.3
        type=friend
        subscribemwi=no
        vmexten=1700

        [1701]
        username=1701
        mailbox=1701@8360
        host=192.168.254.3
        type=friend
        subscribemwi=no
        vmexten=1701

    What would you say is the quickest way to do this? There are hundreds of records in the file, so modifying all of them by hand would take a long time. Would you use a regex? Would you use sed? I'm interested to know how you would approach the problem. Thanks
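
    A sketch of one regex-based approach: capture the extension from the [section] header and append a matching vmexten line right after that record's subscribemwi line (this assumes every record ends with subscribemwi=no, as in the sample above).

        import re

        with open('sip.conf') as f:
            conf = f.read()

        # group 1 is the whole record up to subscribemwi=no, group 2 is the extension
        conf = re.sub(r'(\[(\d+)\][^\[]*?subscribemwi=no)', r'\1\nvmexten=\2', conf)

        with open('sip.conf.new', 'w') as f:
            f.write(conf)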


  • Obtain distances to nearest landmarks (mall, hospital, airport, etc.) using Google Maps

    - by user227290
    I am working on a project in which I have around 100,000 addresses in major cities in India (it is a table in a database). I want to know whether it is possible to obtain the distances to the nearest landmarks (mall, hospital, airport, etc.). Ideally I want these distances merged into the parent table. We have Java and PHP coders to get it done once we find out how to go about it. Any pointers will be of great help. Thank you.
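
    One possible route (a sketch; the coordinates, landmark types and API key are placeholders): geocode each address, then call the Google Places "Nearby Search" web service once per landmark type with rankby=distance, which returns results ordered from closest to farthest. It is a plain HTTP/JSON API, so the same calls can be made from Java or PHP.

        import json
        import urllib.parse
        import urllib.request

        def nearest(lat, lng, place_type, api_key):
            params = urllib.parse.urlencode({
                'location': '%s,%s' % (lat, lng),
                'rankby': 'distance',
                'type': place_type,          # 'hospital', 'airport', 'shopping_mall', ...
                'key': api_key,
            })
            url = 'https://maps.googleapis.com/maps/api/place/nearbysearch/json?' + params
            with urllib.request.urlopen(url) as resp:
                results = json.load(resp)['results']
            return results[0] if results else None   # the closest match, or None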


  • Detecting and interacting with long running process

    - by jacquesb
    I want a script to start and interact with a long-running process. The process is started the first time the script is executed; after that the script can be executed repeatedly, but it will detect that the process is already running. The script should be able to interact with the process. I would like this to work on Unix and Windows. I am unsure how to do this. Specifically, how do I detect whether the process is already running and open a pipe to it? Should I use sockets (e.g. registering the server process on a known port and then checking whether it responds) or should I use named pipes? Or is there some easier way?
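
    A minimal sketch of the socket approach, which behaves the same on Unix and Windows: try to connect to a fixed localhost port; if nothing answers, this invocation becomes the long-running process and listens on that port itself. The port number is arbitrary.

        import socket

        PORT = 48765   # any fixed, otherwise unused port

        def connect_or_become_server():
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            try:
                sock.connect(('127.0.0.1', PORT))
                return sock            # already running: use this socket to talk to it
            except OSError:
                sock.close()
            server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            server.bind(('127.0.0.1', PORT))
            server.listen(1)
            return server              # not running: this process is now the server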


  • How to check if a file is a text file?

    - by daniels
    I have a folder full of files and I want to search for some string inside them. The issue is that some files may be zip, exe, ogg, etc. Can I somehow check what kind of file each one is, so that I only open and search through txt, php, etc. files? I can't rely on the file extension.
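
    A simple heuristic, sketched below (not foolproof, but close to what grep-like tools do): read the first few kilobytes and treat the file as binary if it contains NUL bytes or cannot be decoded as text. The python-magic package (bindings for libmagic, the library behind the Unix file command) is a more thorough alternative if installing a dependency is acceptable.

        def looks_like_text(path, blocksize=4096):
            with open(path, 'rb') as f:
                chunk = f.read(blocksize)
            if b'\x00' in chunk:          # NUL bytes almost never appear in text files
                return False
            try:
                chunk.decode('utf-8')     # may misjudge other encodings; good enough as a filter
            except UnicodeDecodeError:
                return False
            return True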


  • wxWidgets/wxPython: Do two identical events cause two handlings?

    - by cool-RR
    When there are two identical events in the event loop, will wxPython handle both of them, or will it call the handler only once for them both? I mean, in my widget I want to have an event like EVT_NEED_TO_RECALCULATE_X. I want this event to be posted in all kinds of different circumstances that require x to be recalculated. However, even if there are two different reasons to recalculate x, only one recalculation needs to be done. How do I do this?
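
    wxPython normally delivers every posted event, so two identical posts mean two handler calls. One common way to coalesce them (a sketch; the custom event binding and the recalculation method are placeholders): have the handler only set a dirty flag, and do the expensive work once per idle cycle.

        import wx

        class MyWidget(wx.Panel):
            def __init__(self, parent):
                super(MyWidget, self).__init__(parent)
                self._x_dirty = False
                self.Bind(wx.EVT_IDLE, self._on_idle)

            def on_need_to_recalculate_x(self, event):   # bound to the custom event
                self._x_dirty = True                     # cheap, safe to run many times

            def _on_idle(self, event):
                if self._x_dirty:
                    self._x_dirty = False
                    self.recalculate_x()                 # hypothetical expensive method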


  • Is it possible to use a back reference to specify the number of repetitions in a regular expression?

    - by user307894
    Is it possible to use a back reference to specify the number of repetitions in a regular expression?

        foo = 'ADCKAL+2AG.+2AG.+2AG.+2AGGG+.+G+3AGGa4.'

    The substrings that start with '+[0-9]' followed by '[A-z]{n}.' need to be replaced with simply '+', where the variable n is the digit from earlier in the substring. Can that n be back-referenced? For example, '+([0-9])[A-z]{\1}.' (which doesn't work) is the pattern I want replaced with '+' (that last dot can be any character and represents a quality score), so that foo should come out to ADCKAL++++G.G+. This is what I have so far:

        foo = 'ADCKAL+2AG.+2AG.+2AG.+2AGGG^+.+G+3AGGa4.'
        indelpatt = re.compile('\+([0-9])')
        while indelpatt.search(foo):
            indelsize = int(indelpatt.search(foo).group(1))
            new_regex = '\+%s[ACGTNacgtn]{%s}.' % (indelsize, indelsize)
            newpatt = re.compile(new_regex)
            foo = newpatt.sub("+", foo)

    I'm probably missing an easier way to parse the string.
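
    For what it's worth, Python's re module does not accept a backreference inside a {n} quantifier. One workaround, sketched here: since the count is always a single digit, build an alternation of the nine possible patterns once and substitute in a single pass, instead of recompiling inside a loop.

        import re

        foo = 'ADCKAL+2AG.+2AG.+2AG.+2AGGG^+.+G+3AGGa4.'

        indel = re.compile('|'.join(r'\+%d[ACGTNacgtn]{%d}.' % (n, n) for n in range(1, 10)))
        foo = indel.sub('+', foo)
        print(foo)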


  • Converting time period strings to value/unit pair

    - by randomtoor
    I need to parse the contents of a string that represents a time period. The format of the string is value followed by unit, e.g.: 1s, 60min, 24h. I would like to separate the actual value (an int) and the unit (a str) into separate variables. At the moment I do it like this:

        def validate_time(time):
            binsize = time.strip()
            unit = re.sub('[0-9]', '', binsize)
            if unit not in ['s', 'm', 'min', 'h', 'l']:
                print "Error: unit {0} is not valid".format(unit)
                sys.exit(2)
            tmp = re.sub('[^0-9]', '', binsize)
            try:
                value = int(tmp)
            except ValueError:
                print "Error: {0} is not valid".format(time)
                sys.exit(2)
            return value, unit

    However, it is not ideal, as things like 1m0 are also (wrongly) validated (value=10, unit=m). What is the best way to validate/parse this input?
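
    A sketch of a stricter parse: one anchored regular expression, so the digits must all come before the unit and strings like 1m0 are rejected instead of being silently reassembled.

        import re

        TIME_RE = re.compile(r'^(\d+)(min|s|m|h|l)$')

        def parse_time(text):
            match = TIME_RE.match(text.strip())
            if match is None:
                raise ValueError('invalid time period: %r' % text)
            return int(match.group(1)), match.group(2)

        # parse_time('60min') -> (60, 'min');  parse_time('1m0') raises ValueError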


  • Django - count records between two times of day

    - by DJPy
    I have many records in my database which contain a datetime field (e.g. 2010-05-23 17:45:57). I want to count all the records whose time falls between e.g. 15:00 and 15:59, regardless of the day, month or year. How can I do this?
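
    A sketch using the __hour field lookup (available on DateTimeFields in newer Django versions; the model and field names are assumptions). Since 15:00-15:59 is exactly one hour, filtering on the hour alone is enough; an arbitrary minute range could be handled with extra() or, in recent Django, ExtractMinute.

        # assumes a model like:  class Event(models.Model): created = models.DateTimeField()
        count = Event.objects.filter(created__hour=15).count()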


  • Return more than one field from the database with SQLAlchemy

    - by David Neudorfer
    This line:

        used_emails = [row.email for row in db.execute(select([halo4.c.email], halo4.c.email != ''))]

    returns:

        ['[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]']

    I use this to find a match:

        if recipient in used_emails:

    If it finds a match I need to pull another field (halo4.c.code) from the same row of the database. Any suggestions on how to do this?
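
    A sketch of one way to do it: select both columns in the same query and build a dict keyed by email, so the matching code is available without a second query.

        rows = db.execute(select([halo4.c.email, halo4.c.code], halo4.c.email != ''))
        code_by_email = {row.email: row.code for row in rows}

        if recipient in code_by_email:
            code = code_by_email[recipient]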


  • sql select from a large number of IDs

    - by Claudiu
    I have a table, Foo. I run a query on Foo to get the IDs of a subset of Foo. I then want to run a more complicated set of queries, but only on those IDs. Is there an efficient way to do this? The best I can think of is creating a query such as:

        SELECT ...   -- complicated stuff
        WHERE ...    -- more stuff
        AND id IN (1, 2, 3, 9, 413, 4324, ..., 939393)

    That is, I construct a huge "IN" clause. Is this efficient? Is there a more efficient way of doing this, or is the only way to JOIN with the initial query that gets the IDs? If it helps, I'm using SQLObject to connect to a PostgreSQL database, and I have access to the cursor that executed the query to get all the IDs.
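
    A sketch of the join/subquery alternative: let PostgreSQL keep the ID set on the server side instead of shipping a huge literal list back and forth (a temporary table works the same way when the ID-producing query is expensive and reused). The table columns and conditions below are hypothetical stand-ins for the real ones.

        cursor.execute("""
            SELECT f.*                                     -- complicated stuff
            FROM foo AS f
            JOIN (SELECT id FROM foo WHERE status = 'new') -- the query that produced the IDs
              AS wanted ON wanted.id = f.id
            WHERE f.created_at > now() - interval '7 days' -- more stuff
        """)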

