I need it to open 10 processes, and each time one of them finishes I want to wait a few seconds and start another one.
It seems pretty simple, but somehow I can't make it work.
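For illustration, something along these lines is what I'm picturing, assuming the 10 processes are external commands (the command list and the delay are placeholders):

import time
import subprocess

commands = [['some_command', str(i)] for i in range(100)]   # placeholder commands
MAX_PROCS = 10
DELAY = 3   # seconds to wait before launching a replacement

running = []
while commands or running:
    # reap any processes that have finished
    for p in running[:]:
        if p.poll() is not None:
            running.remove(p)
            time.sleep(DELAY)   # wait a few seconds before starting another
    # top up to the limit of 10 running processes
    while commands and len(running) < MAX_PROCS:
        running.append(subprocess.Popen(commands.pop(0)))
    time.sleep(0.1)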
I wonder why a C++, C#, or Java developer would want to learn a dynamic language.
Assuming the company won't switch its main development language from C++/C#/Java to a dynamic one, what use is there for a dynamic language?
What helper tasks could be done faster or better with a dynamic language, after only a few days of learning it, than with the static language you have been using for several years?
Update
After seeing the first few responses it is clear that there are two issues.
My main interest would be something that is justifiable to the employer as an expense.
That is, I am looking for justifications for the employer to finance the learning of a dynamic language. Aside from the obvious point that the employee will gain a broader view, employers are usually looking for some "real" benefit.
Let's say you had a string
test = 'wow, hello, how, are, you, doing'
and you wanted
full_list = ['wow','hello','how','are','you','doing']
I know you would start out with an empty list:
empty_list = []
and would create a for loop to append the items to the list.
I'm just confused about how to go about this.
I was trying something along the lines of:
for i in test:
    if i == ',':
then I get stuck . . .
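For illustration, the loop-and-append shape I'm aiming for would look something like this (a sketch of the goal, not code I have working):

test = 'wow, hello, how, are, you, doing'

full_list = []
word = ''
for i in test:
    if i == ',':
        full_list.append(word.strip())   # finish the current word
        word = ''
    else:
        word += i                        # keep building the current word
full_list.append(word.strip())           # don't forget the last word

print full_list   # ['wow', 'hello', 'how', 'are', 'you', 'doing']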
I've sometimes seen code like this:
class Something(object):
    class Else(object):
        def __init__(self):
            pass

    def __init__(self):
        # Do something with self.Else...
        pass
Is it a good idea to define classes inside related classes? Is this an acceptable way to group related code?
I've always thought of the if not x is None version to be more clear, but Google's style guide implies (based on this excerpt) that they use if x is not None. Is there any minor performance difference (I'm assuming not), and is there any case where one really doesn't fit (making the other a clear winner for my convention)?*
*I'm referring to any singleton, rather than just None.
"...to compare singletons like None. Use is or is not."
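For concreteness, the two spellings in question (they behave identically, since not x is None parses as not (x is None); the question is purely about readability and convention):

if x is not None:
    ...

if not x is None:    # parses as: not (x is None)
    ...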
I'm optimizing some code whose main bottleneck is running through and accessing a very large list of struct-like objects. Currently I'm using namedtuples, for readability. But some quick benchmarking using 'timeit' shows that this is really the wrong way to go where performance is a factor:
Named tuple with a, b, c:
>>> timeit("z = a.c", "from __main__ import a")
0.38655471766332994
Class using __slots__, with a, b, c:
>>> timeit("z = b.c", "from __main__ import b")
0.14527461047146062
Dictionary with keys a, b, c:
>>> timeit("z = c['c']", "from __main__ import c")
0.11588272541098377
Tuple with three values, using a constant key:
>>> timeit("z = d[2]", "from __main__ import d")
0.11106188992948773
List with three values, using a constant key:
>>> timeit("z = e[2]", "from __main__ import e")
0.086038238242508669
Tuple with three values, using a local key:
>>> timeit("z = d[key]", "from __main__ import d, key")
0.11187358437882722
List with three values, using a local key:
>>> timeit("z = e[key]", "from __main__ import e, key")
0.088604143037173344
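For reference, the test objects were set up along these lines (a reconstruction for readers; the class and field names are just illustrative):

from collections import namedtuple
from timeit import timeit

Point = namedtuple('Point', 'a b c')

class SlotsPoint(object):
    __slots__ = ('a', 'b', 'c')
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c

a = Point(1, 2, 3)              # namedtuple
b = SlotsPoint(1, 2, 3)         # class using __slots__
c = {'a': 1, 'b': 2, 'c': 3}    # dictionary
d = (1, 2, 3)                   # tuple
e = [1, 2, 3]                   # list
key = 2                         # local key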
First of all, is there anything about these little timeit tests that would render them invalid? I ran each several times, to make sure no random system event had thrown them off, and the results were almost identical.
It would appear that dictionaries offer the best balance between performance and readability, with classes coming in second. This is unfortunate, since, for my purposes, I also need the object to be sequence-like; hence my choice of namedtuple.
Lists are substantially faster, but constant keys are unmaintainable; I'd have to create a bunch of index constants, e.g. KEY_1 = 1, KEY_2 = 2, etc., which is also not ideal.
Am I stuck with these choices, or is there an alternative that I've missed?
I have a list of sets:
setlist = [s1,s2,s3...]
I want s1 ∩ s2 ∩ s3 ... (the intersection of all of them).
I can write a function to do it by performing a series of pairwise s1.intersection(s2), etc., but is there a recommended, better, or built-in way?
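For clarity, the pairwise version I have in mind looks roughly like this:

result = setlist[0]
for s in setlist[1:]:
    result = result.intersection(s)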
I have a file with data in the format
3.343445 1
3.54564 1
4.345535 1
2.453454 1
and so on, up to 1000 lines. Given a number such as a = 2.44443, I need to find the row number of the value in the file that is closest to the given number "a". At present I load the whole file into a list and compare each element to find the closest one. Is there a better, faster method?
My code is below. I need to run this for a different file each time, around 20,000 times in total, so I want a fast method.
p = os.path.join("c:/begpython/wavnk/", str(save_a[1]).replace('phone', 'text') + '.pm')
x = open(p, 'r')
for i in range(6):
    x.readline()
j = 0
o = []
for line in x:
    oj = line.rstrip('\n').split(' ')
    o = o + [oj]
    j = j + 1
temp = long(1232332)
end_time = save_a[4]
for i in range((j - 1)):
    diff = float(o[i][0]) - float(end_time)
    if diff < 0:
        diff = diff * (-1)
    if temp > diff:
        temp = diff
        pm_row = i
I'm working on a class that basically allows for method chaining, for setting some attributes on the different dictionaries it stores.
The syntax is as follows:
d = Test()
d.connect().setAttrbutes(Message=Blah, Circle=True, Key=True)
But there can also be other instances, so, for example:
d = Test()
d.initialise().setAttrbutes(Message=Blah)
Now I believe that I can override the setAttrbutes function; I just don't want to create a function for each of the dictionaries. Instead I want to capture the name of the previous chained function. So in the examples above I would be given "connect" and "initialise", and I would know which dictionary to store the attributes in.
I hope this makes sense. Any ideas would be greatly appreciated :)
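For illustration, this is the kind of shape I have been sketching, where each chained method records its own name (the internal attribute names here are made up):

class Test(object):
    def __init__(self):
        self._dicts = {'connect': {}, 'initialise': {}}
        self._current = None               # name of the last chained call

    def connect(self):
        self._current = 'connect'
        return self                        # return self so calls can be chained

    def initialise(self):
        self._current = 'initialise'
        return self

    def setAttrbutes(self, **kwargs):
        # store the attributes in the dictionary chosen by the previous call
        self._dicts[self._current].update(kwargs)
        return self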
I cannot figure out why my code does not filter out lists from a predefined list.
I am trying to remove specific lists using the following code.
data = [[1,1,1],[1,1,2],[1,2,1],[1,2,2],[2,1,1],[2,1,2],[2,2,1],[2,2,2]]
data = [x for x in data if x[0] != 1 and x[1] != 1]
print data
My result:
data = [[2, 2, 1], [2, 2, 2]]
Expected result:
data = [[1,2,1],[1,2,2],[2,1,1],[2,1,2],[2,2,1],[2,2,2]]
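For reference, a comprehension that does produce the expected result (which makes me wonder why my version with and behaves differently):

data = [x for x in data if not (x[0] == 1 and x[1] == 1)]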
def revert_dict(d):
    rd = {}
    for key in d:
        val = d[key]
        if val in rd:
            rd[val].append(key)
        else:
            rd[val] = [key]
    return rd
>>> revert_dict({'srvc3': '1', 'srvc2': '1', 'srvc1': '2'})
{'1': ['srvc3', 'srvc2'], '2': ['srvc1']}
This obviously isn't a simple exchange of keys and values: that would overwrite some values (as new keys), which is NOT what I'm after.
If two or more keys share the same value, those keys are supposed to be grouped in a list.
The above function works, but I wonder if there is a smarter / faster way?
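For comparison, one shorter variant of the same idea; whether it is actually smarter or faster is part of what I'm asking:

from collections import defaultdict

def revert_dict2(d):
    rd = defaultdict(list)
    for key, val in d.items():
        rd[val].append(key)   # group keys under their shared value
    return dict(rd)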
I need to write some macros and I want to know the most recommended way to do it.
I need to type some text, click in some places, and emulate the Tab key too.
Thank you
Hello. I'm having trouble getting my list to return in my code. Instead of returning the list, it keeps returning None, but if I replace the return with print in the elif statement, it prints the list just fine. How can I repair this?
def makeChange2(amount, coinDenomination, listofcoins=None):
    # makes a list of coins from an amount given by using a greedy algorithm
    coinDenomination.sort()
    # reverse the list to make the largest position 0 at all times
    coinDenomination.reverse()
    # assigns list
    if listofcoins is None:
        listofcoins = []
    if amount >= coinDenomination[0]:
        listofcoins = listofcoins + [coinDenomination[0]]
        makeChange2((amount - coinDenomination[0]), coinDenomination, listofcoins)
    elif amount == 0:
        return listofcoins
    else:
        makeChange2(amount, coinDenomination[1:], listofcoins)
I have an Excel-based CSV file with two columns (or rows, read Pythonically) that I am working on. What I need to do is perform some operations so that I can compare the two data entries in each 'row'. To be more precise, one column has a constant number all the way down, whereas the other column has varying values. I need to count the number of times the varying column's values cross the constant value in the other column.
For example, from the CSV file I have two columns:
Varying Column    Constant Column
24                25
26                25    crossed
27                25
26                25
25.5              25
23                25    crossed
26                25    crossed
Thus, the varying column's entries have crossed 25 three times. I need to write code that counts the number of crossings, as sketched below. Please help out. Thanks.
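A rough sketch of the counting logic I have in mind (the file name and the semicolon delimiter are assumptions; I haven't got this working):

crossings = 0
previous_side = None
with open('data.csv') as f:               # placeholder file name
    next(f)                               # skip the header row
    for line in f:
        varying, constant = [float(v) for v in line.split(';')]
        side = varying > constant         # which side of the constant this row is on
        if previous_side is not None and side != previous_side:
            crossings += 1                # the varying value crossed the constant
        previous_side = side
print crossings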
I'm stuck on how to formulate this problem properly. The setup is as follows:
What if we had the following values:
{('A','B','C','D'):3,
('A','C','B','D'):2,
('B','D','C','A'):4,
('D','C','B','A'):3,
('C','B','A','D'):1,
('C','D','A','B'):1}
When we sum up the first-place values we get [5, 4, 2, 3] (5 people picked A first, 4 people picked B first, and so on: A = 5, B = 4, C = 2, D = 3).
The maximum value for any letter is 5, which isn't a majority (5/14 is less than half, where 14 is the sum of all the values).
So we remove the letter with the fewest first-place picks, which in this case is C.
I want to return a dictionary where {'A':5, 'B':4, 'C':2, 'D':3} without importing anything.
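To spell out the arithmetic with the values above: A is first in ('A','B','C','D') with 3 votes and in ('A','C','B','D') with 2 votes, so A maps to 3 + 2 = 5; B is first only in ('B','D','C','A'), giving 4; C is first in ('C','B','A','D') and ('C','D','A','B'), giving 1 + 1 = 2; and D is first only in ('D','C','B','A'), giving 3.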
This is my work:
def popular(letter):
    '''(dict of {tuple of (str, str, str, str): int}) -> dict of {str: int}
    '''
    my_dictionary = {}
    counter = 0
    for (alphabet, picks) in letter.items():
        if (alphabet[0]):
            my_dictionary[alphabet[0]] = picks
        else:
            my_dictionary[alphabet[0]] = counter
    return my_dictionary
This returns duplicate keys, which I cannot get rid of.
Thanks.
I'm having some trouble binding a date from the query string:
I have the following model
public class QueryParms
{
    public DateTime Date { get; set; }
}
And the following controller action:
public ActionResult Search( QueryParms query );
I have a form with a field where I can type my date. If the form is FormMethod.Post, everything is fine: my date is correctly bound to my model.
If the form is FormMethod.Get, it no longer works. The date is left at the default value (01/01/0001).
I think it is a culture issue:
When I look into the value providers, the FormValueProvider has a culture property set for my date ({fr-FR}), but the QueryStringValueProvider doesn't have the culture property set.
Is there a way to set this property ?
This is the piece of code I have:
choice = ""
while choice != "1" and choice != "2" and choice != "3":
choice = raw_input("pick 1, 2 or 3")
if choice == "1":
print "1 it is!"
elif choice == "2":
print "2 it is!"
elif choice == "3":
print "3 it is!"
else:
print "You should choose 1, 2 or 3"
While it works, I feel that it's really clumsy, specifically the while clause. What if I have more acceptable choices? Is there a better way to make the clause?
I have a dictionary of data, the key is the file name and the value is another dictionary of its attribute values. Now I'd like to pass this data structure to various functions, each of which runs some test on the attribute and returns True/False.
One approach would be to call each function one by one explicitly from the main code. However, I can do something like this:
# MYmodule.py
class Mymodule:
    def MYfunc1(self):
        ...
    def MYfunc2(self):
        ...

# main.py
import Mymodule
...
# fill the data structure
...
# Now call all the functions in Mymodule one by one
for funcs in dir(Mymodule):
    if funcs[:2] == 'MY':
        result = Mymodule.__dict__.get(funcs)(dataStructure)
The advantage of this approach is that the implementation of the main code needn't change when I add more logic/tests to MYmodule.
Is this a good way to solve the problem at hand? Are there better alternatives to this solution?
Hi, I'm working on a script that will upload videos to YouTube with different accounts. Is there a way to use HTTPS or SOCKS proxies to route all the requests? My client doesn't want to leave any footprints for Google. The only way I found was to set the proxy environment variables beforehand (see the sketch below), but this seems cumbersome. Is there something I'm missing?
Thanks :)
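The environment-variable approach I mentioned looks like this (the proxy address is a placeholder):

import os

os.environ['http_proxy'] = 'http://user:pass@proxy.example.com:8080'
os.environ['https_proxy'] = 'http://user:pass@proxy.example.com:8080'
# ...then create the upload client as usual; most urllib-based libraries
# pick these variables up automatically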
I have to process a file every day. The file is sent to my email once a day. If I could get to this email each day and download the attachment automatically, that would be awesome. Is it even remotely possible to do such a thing?
Thanks!
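To make the idea concrete, something along these lines is what I'm imagining (the server, account, and search criteria are placeholders; I haven't tried it):

import imaplib
import email

conn = imaplib.IMAP4_SSL('imap.example.com')      # placeholder server
conn.login('me@example.com', 'secret')            # placeholder credentials
conn.select('INBOX')

# find the unread message from the sender (criteria are an assumption)
typ, ids = conn.search(None, '(UNSEEN FROM "reports@example.com")')
for msg_id in ids[0].split():
    typ, data = conn.fetch(msg_id, '(RFC822)')
    msg = email.message_from_string(data[0][1])
    for part in msg.walk():
        filename = part.get_filename()
        if filename:                              # this part is an attachment
            with open(filename, 'wb') as f:
                f.write(part.get_payload(decode=True))

conn.logout()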