Search Results

Search found 9 results on 1 page for 'gotgenes'.

Page 1/1

  • Phishing site uses subdomain that I never registered

    - by gotgenes
    I recently received the following message from Google Webmaster Tools:

        Dear site owner or webmaster of http://gotgenes.com/, [...] Below are
        one or more example URLs on your site which may be part of a phishing
        attack: http://repair.gotgenes.com/~elmsa/.your-account.php [...]

    What I don't understand is that I never had a subdomain repair.gotgenes.com, but visiting it in the web browser brings up an actual website. My DNS is hosted by FreeDNS, which does not list a repair subdomain. My domain name is registered with GoDaddy, and the nameservers are correctly set to NS1.AFRAID.ORG, NS2.AFRAID.ORG, NS3.AFRAID.ORG, and NS4.AFRAID.ORG. I have the following questions:

    - Where is repair.gotgenes.com actually registered?
    - How was it registered?
    - What action can I take to have it removed from DNS?
    - How can I prevent this from happening in the future?

    This is pretty disconcerting; I feel like my domain has been hijacked. Any help would be much appreciated.
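
    As a first diagnostic step (not part of the original question), it can help to check what the subdomain actually resolves to and compare that against the records configured in FreeDNS; a minimal sketch using only the Python standard library:

        import socket

        # Resolve the unexpected subdomain; comparing the resulting address
        # with the hosts actually configured in the zone shows whether the
        # record lives there or is being served from somewhere unexpected.
        try:
            print(socket.gethostbyname("repair.gotgenes.com"))
        except socket.gaierror:
            print("repair.gotgenes.com does not resolve")

    Querying the NS*.AFRAID.ORG nameservers directly with a command-line tool such as dig would likewise show whether they are the ones answering for the subdomain.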

    Read the article

  • How to tell from what Ubuntu or Debian repository a package comes?

    - by gotgenes
    On a Debian-based system, including Ubuntu, how can one tell which repository a package will be downloaded from, without actually beginning the download? aptitude show and apt-cache show will display the section a package belongs to (e.g., metapackage, base, graphics), but not its repository (e.g., http://ppa.launchpad.net/mactel-support/ppa/ubuntu or http://us.archive.ubuntu.com/ubuntu/). When installing the package, the actual repository appears during the download (it is printed in the "downloading from ..." output from apt and similar programs), but how can one obtain the repository containing a package (or a specific version of a package) without downloading and installing it first? Additionally, how can one determine the source repository of a package that is already installed?
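
    For what it's worth, apt-cache policy reports exactly this kind of origin information without downloading anything; a minimal sketch calling it from Python (the package name is just an example):

        import subprocess

        # 'apt-cache policy <package>' lists every available version of the
        # package along with the repository (URL, distribution, component)
        # each one would be fetched from, and marks the installed version,
        # which indicates where an already-installed package came from.
        output = subprocess.Popen(["apt-cache", "policy", "firefox"],
                                  stdout=subprocess.PIPE).communicate()[0]
        print(output)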

    Read the article

  • Google Visualization API chart in Blogger / Blogspot

    - by gotgenes
    Is it possible to embed (chart) visualizations created by the Google Visualization API in a Blogger post? I tried stripping the <head> and <body> tags (and their closing tags) out of the pie chart example; however, the pie chart visualization fails to render, even on a published post. NOTE: I'm asking about the Visualization API, not Google Image Charts (the Charts API).

    Read the article

  • Solving embarrassingly parallel problems using Python multiprocessing

    - by gotgenes
    How does one use multiprocessing to tackle embarrassingly parallel problems? Embarrassingly parallel problems typically consist of three basic parts:

    1. Read input data (from a file, database, TCP connection, etc.).
    2. Run calculations on the input data, where each calculation is independent of any other calculation.
    3. Write the results of the calculations (to a file, database, TCP connection, etc.).

    We can parallelize the program in two dimensions:

    - Part 2 can run on multiple cores, since each calculation is independent; order of processing doesn't matter.
    - Each part can run independently. Part 1 can place data on an input queue, part 2 can pull data off the input queue and put results onto an output queue, and part 3 can pull results off the output queue and write them out.

    This seems a most basic pattern in concurrent programming, but I am still lost in trying to solve it, so let's write a canonical example to illustrate how this is done using multiprocessing. Here is the example problem: given a CSV file with rows of integers as input, compute their sums. Separate the problem into three parts, which can all run in parallel:

    1. Process the input file into raw data (lists/iterables of integers)
    2. Calculate the sums of the data, in parallel
    3. Output the sums

    Below is a traditional, single-process-bound Python program which solves these three tasks:

        #!/usr/bin/env python
        # -*- coding: UTF-8 -*-
        # basicsums.py
        """A program that reads integer values from a CSV file and writes out
        their sums to another CSV file.
        """

        import csv
        import optparse
        import sys

        def make_cli_parser():
            """Make the command line interface parser."""
            usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV",
                    __doc__,
                    """
        ARGUMENTS:
            INPUT_CSV: an input CSV file with rows of numbers
            OUTPUT_CSV: an output file that will contain the sums\
        """])
            cli_parser = optparse.OptionParser(usage)
            return cli_parser

        def parse_input_csv(csvfile):
            """Parses the input CSV and yields tuples with the index of the
            row as the first element, and the integers of the row as the
            second element. The index is zero-index based.

            :Parameters:
            - `csvfile`: a `csv.reader` instance

            """
            for i, row in enumerate(csvfile):
                row = [int(entry) for entry in row]
                yield i, row

        def sum_rows(rows):
            """Yields a tuple with the index of each input list of integers
            as the first element, and the sum of the list of integers as the
            second element. The index is zero-index based.

            :Parameters:
            - `rows`: an iterable of tuples, with the index of the original
              row as the first element, and a list of integers as the second
              element

            """
            for i, row in rows:
                yield i, sum(row)

        def write_results(csvfile, results):
            """Writes a series of results to an outfile, where the first
            column is the index of the original row of data, and the second
            column is the result of the calculation. The index is zero-index
            based.

            :Parameters:
            - `csvfile`: a `csv.writer` instance to which to write results
            - `results`: an iterable of tuples, with the index (zero-based)
              of the original row as the first element, and the calculated
              result from that row as the second element

            """
            for result_row in results:
                csvfile.writerow(result_row)

        def main(argv):
            cli_parser = make_cli_parser()
            opts, args = cli_parser.parse_args(argv)
            if len(args) != 2:
                cli_parser.error("Please provide an input file and output file.")
            infile = open(args[0])
            in_csvfile = csv.reader(infile)
            outfile = open(args[1], 'w')
            out_csvfile = csv.writer(outfile)
            # gets an iterable of rows that's not yet evaluated
            input_rows = parse_input_csv(in_csvfile)
            # sends the rows iterable to sum_rows() for results iterable, but
            # still not evaluated
            result_rows = sum_rows(input_rows)
            # finally evaluation takes place as a chain in write_results()
            write_results(out_csvfile, result_rows)
            infile.close()
            outfile.close()

        if __name__ == '__main__':
            main(sys.argv[1:])

    Let's take this program and rewrite it to use multiprocessing to parallelize the three parts outlined above. Below is a skeleton of this new, parallelized program, which needs to be fleshed out to address the parts in the comments (one possible fleshing-out is sketched after the questions below):

        #!/usr/bin/env python
        # -*- coding: UTF-8 -*-
        # multiproc_sums.py
        """A program that reads integer values from a CSV file and writes out
        their sums to another CSV file, using multiple processes if desired.
        """

        import csv
        import multiprocessing
        import optparse
        import sys

        NUM_PROCS = multiprocessing.cpu_count()

        def make_cli_parser():
            """Make the command line interface parser."""
            usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV",
                    __doc__,
                    """
        ARGUMENTS:
            INPUT_CSV: an input CSV file with rows of numbers
            OUTPUT_CSV: an output file that will contain the sums\
        """])
            cli_parser = optparse.OptionParser(usage)
            cli_parser.add_option('-n', '--numprocs', type='int',
                    default=NUM_PROCS,
                    help="Number of processes to launch [DEFAULT: %default]")
            return cli_parser

        def main(argv):
            cli_parser = make_cli_parser()
            opts, args = cli_parser.parse_args(argv)
            if len(args) != 2:
                cli_parser.error("Please provide an input file and output file.")
            infile = open(args[0])
            in_csvfile = csv.reader(infile)
            outfile = open(args[1], 'w')
            out_csvfile = csv.writer(outfile)

            # Parse the input file and add the parsed data to a queue for
            # processing, possibly chunking to decrease communication between
            # processes.

            # Process the parsed data as soon as any (chunks) appear on the
            # queue, using as many processes as allotted by the user
            # (opts.numprocs); place results on a queue for output.
            #
            # Terminate processes when the parser stops putting data in the
            # input queue.

            # Write the results to disk as soon as they appear on the output
            # queue.

            # Ensure all child processes have terminated.

            # Clean up files.
            infile.close()
            outfile.close()

        if __name__ == '__main__':
            main(sys.argv[1:])

    These pieces of code, as well as another piece of code that can generate example CSV files for testing purposes, can be found on GitHub. I would appreciate any insight here as to how you concurrency gurus would approach this problem. Here are some questions I had when thinking about this problem; bonus points for addressing any or all:

    - Should I have child processes for reading in the data and placing it into the queue, or can the main process do this without blocking until all input is read?
    - Likewise, should I have a child process for writing the results out from the processed queue, or can the main process do this without having to wait for all the results?
    - Should I use a process pool for the sum operations? If yes, what method do I call on the pool to get it to start processing the results coming into the input queue, without also blocking the input and output processes? apply_async()? map_async()? imap()? imap_unordered()?
    - Suppose we didn't need to siphon off the input and output queues as data entered them, but could wait until all input was parsed and all results were calculated (e.g., because we know all the input and output will fit in system memory). Should we change the algorithm in any way (e.g., not run any processes concurrently with I/O)?
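
    Coming back to the skeleton above, here is one hedged way to flesh it out (a sketch only, and just one of many reasonable designs; the function and variable names are illustrative): let the main process do the parsing and the writing, and connect a set of worker processes to it with two queues, using one None sentinel per worker to signal shutdown.

        import csv
        import multiprocessing

        def sum_worker(in_queue, out_queue):
            # Pull (index, row) pairs off the input queue until the None
            # sentinel arrives; put (index, sum) pairs on the output queue.
            for i, row in iter(in_queue.get, None):
                out_queue.put((i, sum(row)))

        def parallel_sums(infile_name, outfile_name, numprocs):
            in_queue = multiprocessing.Queue()
            out_queue = multiprocessing.Queue()
            workers = [multiprocessing.Process(target=sum_worker,
                                               args=(in_queue, out_queue))
                       for _ in range(numprocs)]
            for worker in workers:
                worker.start()
            # The main process parses the input and feeds the workers.
            infile = open(infile_name)
            num_rows = 0
            for i, row in enumerate(csv.reader(infile)):
                in_queue.put((i, [int(entry) for entry in row]))
                num_rows += 1
            infile.close()
            # One sentinel per worker tells each one to exit.
            for _ in workers:
                in_queue.put(None)
            # Write results as they arrive; they may arrive out of order,
            # but each row carries its original index, as in basicsums.py.
            outfile = open(outfile_name, 'w')
            out_csvfile = csv.writer(outfile)
            for _ in range(num_rows):
                out_csvfile.writerow(out_queue.get())
            outfile.close()
            for worker in workers:
                worker.join()

    Because the main process here finishes feeding the input queue before it starts draining the output queue, the queues can hold the whole data set in the worst case; for inputs larger than memory, the feeding and draining would need to be interleaved or moved into their own processes, which is exactly what the questions above are probing.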

    Read the article

  • Using IN with sets of tuples in SQL (SQLite3)

    - by gotgenes
    I have the following table in a SQLite3 database:

        CREATE TABLE overlap_results (
            neighbors_of_annotation varchar(20),
            other_annotation varchar(20),
            set1_size INTEGER,
            set2_size INTEGER,
            jaccard REAL,
            p_value REAL,
            bh_corrected_p_value REAL,
            PRIMARY KEY (neighbors_of_annotation, other_annotation)
        );

    I would like to perform the following query:

        SELECT * FROM overlap_results
        WHERE (neighbors_of_annotation, other_annotation)
            IN (('16070', '8150'), ('16070', '44697'));

    That is, I have a couple of tuples of annotation IDs, and I'd like to fetch records for each of those tuples. The sqlite3 prompt gives me the following error:

        SQL error: near ",": syntax error

    How do I properly express this as a SQL statement?
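
    For what it's worth, row values like (a, b) IN (...) were only added to SQLite in version 3.15, so older versions reject this syntax outright. One workaround sketch, here driven from Python's built-in sqlite3 module (the database filename is hypothetical), expands the tuples into OR-ed pairs of equality tests:

        import sqlite3

        pairs = [('16070', '8150'), ('16070', '44697')]
        # Build one "(neighbors_of_annotation = ? AND other_annotation = ?)"
        # test per tuple and join the tests with OR.
        condition = " OR ".join(
                ["(neighbors_of_annotation = ? AND other_annotation = ?)"]
                * len(pairs))
        params = [value for pair in pairs for value in pair]
        connection = sqlite3.connect("overlaps.db")  # hypothetical filename
        rows = connection.execute(
                "SELECT * FROM overlap_results WHERE " + condition,
                params).fetchall()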

    Read the article

  • Typesetting LaTeX fraction terms to be larger in an equation

    - by gotgenes
    I have the following formula in LaTeX, based on Fisher's Exact Test. (NOTE: requires the use of the amsmath package for \binom.)

        \begin{equation}
        P(i,j) = \sum_{x=|N(V_i) \cap V_j|}^{\min\{|V_j|, |N(V_i)|\}}
            \frac{\binom{|V_j|}{x} \binom{|V - V_j|}{|N(V_i)| - x}}
                 {\binom{|V|}{|N(V_i)|}}
        \end{equation}

    This renders the fraction portion in very small, difficult-to-read text. I would like the terms of the fraction to be rendered full size and readable. What trickery can I use to get LaTeX to typeset my equation that way?
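
    For reference, amsmath also provides \dfrac and \dbinom, which force display-style (full-size) rendering regardless of the surrounding context. A hedged rewrite of the formula above using \dbinom for the binomial coefficients (the likeliest culprits for the shrinking, since the arguments of \frac are set in a smaller style):

        \begin{equation}
        P(i,j) = \sum_{x=|N(V_i) \cap V_j|}^{\min\{|V_j|, |N(V_i)|\}}
            \frac{\dbinom{|V_j|}{x} \dbinom{|V - V_j|}{|N(V_i)| - x}}
                 {\dbinom{|V|}{|N(V_i)|}}
        \end{equation}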

    Read the article

  • Adding an equation or formula to a figure caption in LaTeX

    - by gotgenes
    I have a figure in LaTeX with a caption to which I need to add a formula (equation* or displaymath environments). For example:

        \documentclass[12pt]{article}
        \begin{document}
        \begin{figure}[tbph]
          \begin{center}
          %...
          \end{center}
          \caption{As you can see
            \begin{displaymath}
              4 \ne 5
            \end{displaymath}
          }
          \label{fig:somefig}
        \end{figure}
        \end{document}

    This makes pdflatex angry, though it will produce a PDF:

        ! Argument of \@caption has an extra }.
        <inserted text>
                        \par
        l.9 }

    What's the right way to go about adding an equation to a figure caption? NOTE: Please do not suggest simply using the $ ... $ math environment; the equation shown is a toy example; my real equation is much more intricate. See also: Adding a caption to an equation in LaTeX (the reverse of this question).
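
    One workaround (a sketch, not the only fix): display environments such as displaymath issue a paragraph break, which is what breaks inside the moving \caption argument, but inline math with an explicit \displaystyle is safe there and still gives full-size, display-style rendering:

        \caption{As you can see
          $\displaystyle 4 \ne 5$}

    The caption package offers more general control over caption formatting if this proves insufficient.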

    Read the article

  • Optimizing Python code with many attribute and dictionary lookups

    - by gotgenes
    I have written a program in Python which spends a large amount of time looking up attributes of objects and values from dictionary keys. I would like to know if there's any way I can optimize these lookup times, potentially with a C extension, to reduce the time of execution, or if I need to simply re-implement the program in a compiled language.

    The program implements some algorithms using a graph. It runs prohibitively slowly on our data sets, so I profiled the code with cProfile using a reduced data set that could actually complete. The vast majority of the time is being burned in one function, and specifically in two generator expressions within that function. The generator expression at line 202 is

        neighbors_in_selected_nodes = (neighbor for neighbor in
                node_neighbors if neighbor in selected_nodes)

    and the generator expression at line 204 is

        neighbor_z_scores = (interaction_graph.node[neighbor]['weight']
                for neighbor in neighbors_in_selected_nodes)

    The source code for the function is provided below for context. selected_nodes is a set of nodes in the interaction_graph, which is a NetworkX Graph instance. node_neighbors is an iterator from Graph.neighbors_iter(). Graph itself uses dictionaries for storing nodes and edges. Its Graph.node attribute is a dictionary which stores nodes and their attributes (e.g., 'weight') in dictionaries belonging to each node.

    Each of these lookups should be amortized constant time (i.e., O(1)); however, I am still paying a large penalty for them. Is there some way I can speed up these lookups (e.g., by writing parts of this as a C extension), or do I need to move the program to a compiled language?

    Below is the full source code for the function; the vast majority of execution time is spent within it.

        def calculate_node_z_prime(
                node,
                interaction_graph,
                selected_nodes
            ):
            """Calculates a z'-score for a given node.

            The z'-score is based on the z-scores (weights) of the neighbors
            of the given node, and proportional to the z-score (weight) of
            the given node. Specifically, we find the maximum z-score of all
            neighbors of the given node that are also members of the given
            set of selected nodes, multiply this z-score by the z-score of
            the given node, and return this value as the z'-score for the
            given node.

            If the given node has no neighbors in the interaction graph, the
            z'-score is defined as zero.

            Returns the z'-score as zero or a positive floating point value.

            :Parameters:
            - `node`: the node for which to compute the z-prime score
            - `interaction_graph`: graph containing the gene-gene or gene
              product-gene product interactions
            - `selected_nodes`: a `set` of nodes fitting some criterion of
              interest (e.g., annotated with a term of interest)

            """
            node_neighbors = interaction_graph.neighbors_iter(node)
            neighbors_in_selected_nodes = (neighbor for neighbor in
                    node_neighbors if neighbor in selected_nodes)
            neighbor_z_scores = (interaction_graph.node[neighbor]['weight']
                    for neighbor in neighbors_in_selected_nodes)
            try:
                max_z_score = max(neighbor_z_scores)
            # max() throws a ValueError if its argument has no elements; in
            # this case, we need to set the max_z_score to zero
            except ValueError, e:
                # Check to make certain max() raised this error
                if 'max()' in e.args[0]:
                    max_z_score = 0
                else:
                    raise e
            z_prime = interaction_graph.node[node]['weight'] * max_z_score
            return z_prime

    Here are the top couple of calls according to cProfile, sorted by time:

        ncalls     tottime  percall  cumtime  percall  filename:lineno(function)
        156067701  352.313    0.000  642.072    0.000  bpln_contextual.py:204(<genexpr>)
        156067701  289.759    0.000  289.759    0.000  bpln_contextual.py:202(<genexpr>)
         13963893  174.047    0.000  816.119    0.000  {max}
         13963885   69.804    0.000  936.754    0.000  bpln_contextual.py:171(calculate_node_z_prime)
          7116883   61.982    0.000   61.982    0.000  {method 'update' of 'set' objects}
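
    For illustration, here is a hedged sketch (behavior intended to match the original, but unverified against the real data) of two standard CPython micro-optimizations: bind the interaction_graph.node attribute to a local variable so each access is a fast local lookup, and fuse the two generator expressions into one so each element passes through a single generator frame instead of two:

        def calculate_node_z_prime(node, interaction_graph, selected_nodes):
            # Local binding avoids re-fetching the attribute on every access.
            node_attrs = interaction_graph.node
            # A single fused generator: filter and weight lookup in one frame.
            neighbor_z_scores = (
                    node_attrs[neighbor]['weight']
                    for neighbor in interaction_graph.neighbors_iter(node)
                    if neighbor in selected_nodes)
            try:
                max_z_score = max(neighbor_z_scores)
            except ValueError:
                # Nothing in the generator can raise ValueError itself, so
                # this can only mean max() saw an empty sequence.
                max_z_score = 0
            return node_attrs[node]['weight'] * max_z_score

    Whether this is enough, or whether per-element interpreter overhead still dominates, is something only re-profiling can tell.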

    Read the article

  • Using code generated by Py++ as a Python extension

    - by gotgenes
    I have a need to wrap an existing C++ library for use in Python. After reading through this answer on choosing an appropriate method to wrap C++ for use in Python, I decided to go with Py++. I walked through the tutorial for Py++, using the tutorial files, and I got the expected output in generated.cpp, but I haven't figured out what to do next to actually use the generated code as an extension I can import in Python. I'm sure I have to compile the code now, but with what? Am I supposed to use bjam?
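
    For context, Py++ emits Boost.Python code, so generated.cpp has to be compiled and linked against the Boost.Python library and the Python headers; bjam (Boost.Build) is one way to do that. A hedged alternative sketch using a distutils setup script (the module name, source filename, and Boost library name are assumptions that vary by platform; the extension name must match the name declared in BOOST_PYTHON_MODULE in the generated source):

        # setup.py -- build the Py++/Boost.Python-generated source
        from distutils.core import setup
        from distutils.extension import Extension

        setup(name="hello",
              ext_modules=[
                  Extension(
                      "hello",                    # name used in 'import hello'
                      sources=["generated.cpp"],  # the file Py++ produced
                      libraries=["boost_python"], # Boost.Python runtime
                  ),
              ])

    Running "python setup.py build_ext --inplace" would then (assuming the Boost headers and libraries are on the compiler's search paths) leave an importable extension module in the current directory.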

    Read the article
