Search Results

Search found 16838 results on 674 pages for 'writing patterns dita cms'.


  • Faster method for Matrix vector product for large matrix in C or C++ for use in GMRES

    - by user35959
    I have a large, dense matrix A, and I aim to find the solution to the linear system Ax=b using an iterative method (in MATLAB the plan was to use its built-in GMRES). For more than 10,000 rows, this is too much for my computer to store in memory, but I know that the entries in A are constructed from two known vectors x and y of length N, and the entries satisfy (reconstructed to match the C code below):

        A(i,j) = .5*((x[i]-x[j])^2 + (y[i]-y[j])^2) * log((x[i]-x[j])^2 + (y[i]-y[j])^2)

    MATLAB's GMRES command accepts as input a function that can compute the matrix-vector product A*x, which allows me to handle larger matrices than I can store in memory. To write the matrix-vector product function, I first tried this in MATLAB by going row by row and using some vectorization, while avoiding building the entire array A (since it would be too large). Unfortunately this was fairly slow in my application for GMRES. My plan was to write a mex file for MATLAB, which is in C, and ideally should be significantly faster than the MATLAB code. I'm rather new to C, so this went rather poorly, and my naive attempt at writing the code in C was slower than my partially vectorized attempt in MATLAB.

        #include <math.h>
        #include "mex.h"

        void Aproduct(double *x, double *ctrs_x, double *ctrs_y, double *b, mwSize n)
        {
            mwSize i;
            mwSize j;
            double val;
            for (i=0; i<n; i++) {
                for (j=0; j<i; j++) {
                    val = pow(ctrs_x[i]-ctrs_x[j],2)+pow(ctrs_y[i]-ctrs_y[j],2);
                    b[i] = b[i] + .5* val * log(val) * x[j];
                }
                for (j=i+1; j<n; j++) {
                    val = pow(ctrs_x[i]-ctrs_x[j],2)+pow(ctrs_y[i]-ctrs_y[j],2);
                    b[i] = b[i] + .5* val * log(val) * x[j];
                }
            }
        }

    The above is the computational portion of the code for the MATLAB mex file (which is slightly modified C, if I understand correctly). Please note that I skip the case i=j, since in that case the variable val would be 0*log(0), which should be interpreted as 0 for me, so I just skip it. Is there a more efficient or faster way to write this? When I call this C function via the mex file in MATLAB, it is quite slow, slower even than the MATLAB method I used. This surprises me, since I expected C code to be much faster than MATLAB. The alternative, partially vectorized MATLAB method that I am comparing it with is:

        function Ax = Aprod(x,ctrs)
            n = length(x);
            Ax = zeros(n,1);
            for j=1:(n-3)
                v = .5*((ctrs(j,1)-ctrs(:,1)).^2+(ctrs(j,2)-ctrs(:,2)).^2).*log((ctrs(j,1)-ctrs(:,1)).^2+(ctrs(j,2)-ctrs(:,2)).^2);
                v(j)=0;
                Ax(j) = dot(v,x(1:n-3));
            end

    (The n-3 is because there are actually 3 extra components, but they are dealt with separately, so I excluded that code.) This is partly vectorized and only needs one for loop, so it makes some sense that it is faster. However, I was hoping I could go even faster with C plus a mex file. Any suggestions or help would be greatly appreciated! Thanks!

    EDIT: I should be more clear. I am open to any faster method that can help me use GMRES to invert this matrix that I am interested in, which requires a faster way of doing the matrix-vector product without explicitly loading the array into memory. Thanks!
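    One avenue worth sketching (a sketch under stated assumptions, not a drop-in mex gateway): the kernel above is symmetric, A(i,j) = A(j,i), so each pairwise distance and its log can be computed once and applied to both b[i] and b[j], and pow() can be replaced with plain multiplications:

        #include <math.h>
        #include "mex.h"

        /* Sketch only: assumes b[] is zero-initialized by the caller and that
           the symmetric kernel A(i,j) == A(j,i) from the question applies. */
        void Aproduct_sym(const double *x, const double *ctrs_x,
                          const double *ctrs_y, double *b, mwSize n)
        {
            mwSize i, j;
            for (i = 0; i < n; i++) {
                for (j = i + 1; j < n; j++) {
                    double dx = ctrs_x[i] - ctrs_x[j];
                    double dy = ctrs_y[i] - ctrs_y[j];
                    double val = dx * dx + dy * dy;   /* avoids pow() */
                    double a = 0.5 * val * log(val);  /* computed once per pair */
                    b[i] += a * x[j];
                    b[j] += a * x[i];                 /* symmetric contribution */
                }
            }
        }

    This halves the number of log() calls, which usually dominate the cost; whether it beats the vectorized MATLAB loop still depends on the mex call overhead.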


  • Monorail - Form submission using GET instead of POST

    - by Septih
    Hello, I'm writing some additions to a Castle MonoRail based site involving an Add and an Edit form. The add form works fine and uses POST, but the edit form uses GET. The only major difference I can see is that the edit method is called with the Id of the object being edited in the query string. When the submit button is pressed on the edit form, the only argument passed is this object Id again. Here is the code for the edit form:

        <form action="edit.ashx" method="post">
            <h3>Coupon Description</h3>
            <textarea name="comments" width="200">$comments</textarea>
            <br/><br/>
            <h3>Coupon Actions</h3>
            <br/>
            <div>Give Stories</div>
            <ul class="checklist" style="overflow:auto;height:144px;width:100%">
            #foreach ($story in $stories.Values)
                <li>
                    <label>
                    #set ($associated = "")
                    #foreach($storyId in $storyIds)
                        #if($story.Id == $storyId)
                            #set($associated = " checked='true'")
                        #end
                    #end
                    <input type="checkbox" name="chk_${story.Id}" id="chk_${story.Id}" value="true" class="checkbox" $associated/>
                    $story.Name
                    </label>
                </li>
            #end
            </ul>
            <br/><br/>
            <div>Give Credit Amount</div>
            <input type="text" name="credit" value="$credit" />
            <br/><br/>
            <h3>Coupon Uses</h3>
            <input type="checkbox" name="multi" #if($multi) checked="true" #end /> Multi-Use Coupon?<br/><br/>
            <b>OR</b>
            <br/>
            <br/>
            Number of Uses per Coupon: <input type="text" name="uses" value="$uses" />
            <br/>
            <input type="submit" name="Save" />
        </form>

    The differences between this and the add form are the Velocity code to do with association, and the values of the inputs coming from the PropertyBag. The method dealing with this on the controller starts like this:

        public void Edit(int id)
        {
            Coupon coupon = Coupon.GetRepository(User.Site.Id).FindById(id).Value;
            if(coupon == null)
            {
                RedirectToReferrer();
                return;
            }
            IFutureQueryOfList<Story> stories = Story.GetRepository(User.Site.Id).OnlyReturnEnabled().FindAll("Name", true);
            if (Params["Save"] == null)
            {
                ...
            }
        }

    It reliably gets called, but a breakpoint on the Params["Save"] line lets me see that the HttpMethod is "GET" and the only arguments passed (in the Form and the Request) are the object Id and additional HTTP headers. At the end of the day I'm not that familiar with MonoRail, and this may be a stupid mistake on my part, but I would very much appreciate being made a fool of if it fixes the problem! :) Thanks
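    One common cause of this symptom - offered as a guess, since the surrounding page layout isn't shown - is a nested form: browsers drop an inner <form> tag, so the submit falls through to an enclosing form, and a form with no method attribute defaults to GET. A minimal illustration:

        <!-- Hypothetical layout: if the edit form is rendered inside another
             form (a site-wide search box, say), the inner <form> tag is
             ignored by the browser and the outer form handles the submit. -->
        <form action="search.ashx">                  <!-- no method: defaults to GET -->
            <form action="edit.ashx" method="post">  <!-- ignored by the browser -->
                <input type="submit" name="Save" />
            </form>
        </form>

    Viewing the rendered page source (not the template) would confirm or rule this out.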


  • MIME "Content-Type" folding and parameter question regarding RFCs?

    - by BastiBense
    Hello, I'm trying to implement a basic MIME parser for multipart/related in C++/Qt. So far I've been writing some basic parser code for headers, and I'm reading the RFCs to get an idea how to do everything as close to the specification as possible. Unfortunately there is a part in the RFC that confuses me a bit. From RFC 822, Section 3.1.1:

        Each header field can be viewed as a single, logical line of ASCII
        characters, comprising a field-name and a field-body. For convenience,
        the field-body portion of this conceptual entity can be split into a
        multiple-line representation; this is called "folding". The general
        rule is that wherever there may be linear-white-space (NOT simply
        LWSP-chars), a CRLF immediately followed by AT LEAST one LWSP-char may
        instead be inserted. Thus, the single line

    Alright, so I simply parse a header field, and if a CRLF follows with linear whitespace, I concatenate those in a useful manner to produce a single header line. Let's proceed... From RFC 2045, Section 5.1:

        In the Augmented BNF notation of RFC 822, a Content-Type header field
        value is defined as follows:

        content := "Content-Type" ":" type "/" subtype *(";" parameter)
                   ; Matching of media type and subtype
                   ; is ALWAYS case-insensitive.
        [...]
        parameter := attribute "=" value
        attribute := token
                     ; Matching of attributes
                     ; is ALWAYS case-insensitive.
        value := token / quoted-string
        token := 1*<any (US-ASCII) CHAR except SPACE, CTLs, or tspecials>

    Okay. So it seems that if you want to specify a Content-Type header with parameters, you simply do it like this:

        Content-Type: multipart/related; foo=bar; something=else

    ... and a folded version of the same header would look like this:

        Content-Type: multipart/related;
          foo=bar;
          something=else

    Correct? Good. As I kept reading the RFCs, I came across the following in RFC 2387, Section 5.1 (Examples):

        Content-Type: Multipart/Related; boundary=example-1
                start="<[email protected]>";
                type="Application/X-FixedRecord"
                start-info="-o ps"

        --example-1
        Content-Type: Application/X-FixedRecord
        Content-ID: <[email protected]>

        [data]

        --example-1
        Content-Type: Application/octet-stream
        Content-Description: The fixed length records
        Content-Transfer-Encoding: base64
        Content-ID: <[email protected]>

        [data]

        --example-1--

    Hmm, this is odd. Do you see the Content-Type header? It has a number of parameters, but not all of them have a ";" as parameter delimiter. Maybe I just didn't read the RFCs correctly, but if my parser works strictly the way the specification defines, the type and start-info parameters would result in a single string or, worse, a parser error. What are your thoughts on this? Just a typo in the RFCs? Or did I miss something? Thanks!


  • Calling system commands from Perl

    - by Dan J
    In an older version of our code, we called out from Perl to do an LDAP search as follows:

        # Pass the base DN in via the ldapsearch-specific environment variable
        # (rather than as the "-b" parameter) to avoid problems of shell
        # interpretation of special characters in the DN.
        $ENV{LDAP_BASEDN} = $ldn;
        $lcmd = "ldapsearch -x -T -1 -h $gLdapServer" .
                <snip>
                " > $lworkfile 2>&1";
        system($lcmd);
        if (($? != 0) || (! -e "$lworkfile"))
        {
            # Handle the error
        }

    The code above would result in a successful LDAP search, and the output of that search would be in the file $lworkfile. Unfortunately, we recently reconfigured openldap on this server so that a "BASE DC=" is specified in /etc/openldap/ldap.conf and /etc/ldap.conf. That change seems to mean ldapsearch ignores the LDAP_BASEDN environment variable, and so my ldapsearch fails. I've tried a couple of different fixes, but without success so far:

    (1) I tried going back to using the "-b" argument to ldapsearch, but escaping the shell metacharacters. I started writing the escaping code:

        my $ldn_escaped = $ldn;
        $ldn_escaped =~ s/\/\\/g;
        $ldn_escaped =~ s/`/\`/g;
        $ldn_escaped =~ s/$/\$/g;
        $ldn_escaped =~ s/"/\"/g;

    That threw up some Perl errors because I haven't escaped those regexes properly in Perl (the line number matches the regex with the backticks in):

        Backticks found where operator expected at /tmp/mycommand line 404, at end of line

    At the same time I started to doubt this approach and looked for a better one.

    (2) I then saw some Stack Overflow questions (here and here) that suggested a better solution. Here's the code:

        print("Processing...");
        # Pass the arguments to ldapsearch by invoking open() with an array.
        # This ensures the shell does NOT interpret shell metacharacters.
        my(@cmd_args) = ("-x", "-T", "-1", "-h", "$gLdapPool", "-b", "$ldn", <snip> );
        $lcmd = "ldapsearch";
        open my $lldap_output, "-|", $lcmd, @cmd_args;
        while (my $lline = <$lldap_output>) {
            # I can parse the contents of my file fine
        }
        $lldap_output->close;

    The two problems I am having with approach (2) are:

    a) Calling open or system with an array of arguments does not let me pass > $lworkfile 2>&1 to the command, so I can't stop the ldapsearch output being sent to screen, which makes my output look ugly:

        Processing...ldap_bind: Success (0) additional info: Success

    b) I can't figure out how to choose the location (i.e. path and file name) of the file handle passed to open, i.e. I don't know where $lldap_output is. Can I move/rename it, or inspect it to find out where it is (or is it not actually saved to disk)? Based on the problems with (2), this makes me think I should return back to approach (1), but I'm not quite sure how to
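    On point b), a sketch that may help (it assumes the same @cmd_args as above): the handle from a "-|" open is a pipe from the child's stdout, not a file on disk, so the destination is whatever you write the lines to yourself:

        # Sketch: the "-|" filehandle is a pipe from ldapsearch's stdout; there
        # is no file to find. To keep a copy on disk, write one explicitly.
        open my $lldap_output, '-|', 'ldapsearch', @cmd_args
            or die "can't run ldapsearch: $!";
        open my $lwork, '>', $lworkfile
            or die "can't open $lworkfile: $!";
        while (my $lline = <$lldap_output>) {
            print {$lwork} $lline;   # copy to the work file...
            # ...and/or parse $lline here directly
        }
        close $lldap_output;
        close $lwork;

    For point a), no shell is involved in a list-form open, so there is nothing to interpret 2>&1; silencing stderr needs something like IPC::Open3 or a temporary redirect of STDERR.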


  • Explicitly instantiating a generic member function of a generic structure

    - by Dennis Zickefoose
    I have a structure with a template parameter, Stream. Within that structure, there is a function with its own template parameter, Type. If I try to force a specific instance of the function to be generated and called, it works fine, if I am in a context where the exact type of the structure is known. If not, I get a compile error. This feels like a situation where I'm missing a typename, but there are no nested types. I suspect I'm missing something fundamental, but I've been staring at this code for so long all I see are redheads, and frankly writing code that uses templates has never been my forte. The following is the simplest example I could come up with that illustrates the issue. #include <iostream> template<typename Stream> struct Printer { Stream& str; Printer(Stream& str_) : str(str_) { } template<typename Type> Stream& Exec(const Type& t) { return str << t << std::endl; } }; template<typename Stream, typename Type> void Test1(Stream& str, const Type& t) { Printer<Stream> out = Printer<Stream>(str); /****** vvv This is the line the compiler doesn't like vvv ******/ out.Exec<bool>(t); /****** ^^^ That is the line the compiler doesn't like ^^^ ******/ } template<typename Type> void Test2(const Type& t) { Printer<std::ostream> out = Printer<std::ostream>(std::cout); out.Exec<bool>(t); } template<typename Stream, typename Type> void Test3(Stream& str, const Type& t) { Printer<Stream> out = Printer<Stream>(str); out.Exec(t); } int main() { Test2(5); Test3(std::cout, 5); return 0; } As it is written, gcc-4.4 gives the following: test.cpp: In function 'void Test1(Stream&, const Type&)': test.cpp:22: error: expected primary-expression before 'bool' test.cpp:22: error: expected ';' before 'bool' Test2 and Test3 both compile cleanly, and if I comment out Test1 the program executes, and I get "1 5" as I expect. So it looks like there's nothing wrong with the idea of what I want to do, but I've botched something in the implementation. If anybody could shed some light on what I'm overlooking, it would be greatly appreciated.
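    For what it's worth, a hedged pointer rather than a verdict: this looks like the standard dependent-name problem, where Exec must be prefixed with the template keyword because out's type depends on the template parameter Stream. A minimal sketch of Test1 with that change:

        template<typename Stream, typename Type>
        void Test1(Stream& str, const Type& t) {
            Printer<Stream> out = Printer<Stream>(str);
            // 'out' has a type dependent on Stream, so the parser cannot know
            // that Exec names a template; 'template' says the '<' that follows
            // begins a template argument list, not a less-than comparison.
            out.template Exec<bool>(t);
        }

    Test2 compiles because Printer<std::ostream> is a concrete type there, and Test3 because the template argument is deduced rather than written out, so no '<' needs disambiguating.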


  • Warning: comparison with string literals results in unspecified behaviour

    - by nunos
    So I started the project of writing a simplified shell for Linux in C. I am not at all proficient with C nor with Linux, which is exactly the reason I decided it would be a good idea. Starting with the parser, I have already encountered some problems. The code should be straightforward; that's why I didn't include any comments. I am getting a warning with gcc: "comparison with string literals results in unspecified behaviour" at the lines commented with "WARNING HERE" (see code below). I have no idea why this causes a warning, but the real problem is that even though I am comparing an "<" to an "<", it doesn't get inside the if... I am looking for an answer to the problem explained; however, if there's something in the code that should be improved, please say so. Just keep in mind that I am not that proficient and that this is still a work in progress (or better yet, a work in start). Thanks in advance.

        #include <stdio.h>
        #include <unistd.h>
        #include <string.h>

        typedef enum {false, true} bool;

        typedef struct {
            char **arg;
            char *infile;
            char *outfile;
            int background;
        } Command_Info;

        int parse_cmd(char *cmd_line, Command_Info *cmd_info)
        {
            char *arg;
            char *args[100];
            int i = 0;
            arg = strtok(cmd_line, " \n");
            while (arg != NULL) {
                args[i] = arg;
                arg = strtok(NULL, " \n");
                i++;
            }
            int num_elems = i;
            cmd_info->infile = NULL;
            cmd_info->outfile = NULL;
            cmd_info->background = 0;
            int iarg = 0;
            for (i = 0; i < num_elems; i++)
            {
                if (args[i] == "&") //WARNING HERE
                    return -1;
                else if (args[i] == "<") //WARNING HERE
                    if (args[i+1] != NULL)
                        cmd_info->infile = args[i+1];
                    else
                        return -1;
                else if (args[i] == ">") //WARNING HERE
                    if (args[i+1] != NULL)
                        cmd_info->outfile = args[i+1];
                    else
                        return -1;
                else
                    cmd_info->arg[iarg++] = args[i];
            }
            cmd_info->arg[iarg] = NULL;
            return 0;
        }

        void print_cmd(Command_Info *cmd_info)
        {
            int i;
            for (i = 0; cmd_info->arg[i] != NULL; i++)
                printf("arg[%d]=\"%s\"\n", i, cmd_info->arg[i]);
            printf("arg[%d]=\"%s\"\n", i, cmd_info->arg[i]);
            printf("infile=\"%s\"\n", cmd_info->infile);
            printf("outfile=\"%s\"\n", cmd_info->outfile);
            printf("background=\"%d\"\n", cmd_info->background);
        }

        int main(int argc, char* argv[])
        {
            char cmd_line[100];
            Command_Info cmd_info;
            printf(">>> ");
            fgets(cmd_line, 100, stdin);
            parse_cmd(cmd_line, &cmd_info);
            print_cmd(&cmd_info);
            return 0;
        }
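    A note on the warning itself: args[i] == "&" compares the token's pointer with the address of the string literal, which is why the body of the if is never reached; comparing the characters needs strcmp. A minimal sketch of the fix:

        /* Sketch: strcmp compares contents (0 means equal); the original
           pointer comparison compares addresses, so it almost never matches. */
        if (strcmp(args[i], "&") == 0)
            return -1;
        else if (strcmp(args[i], "<") == 0) {
            if (args[i+1] != NULL)
                cmd_info->infile = args[i+1];
            else
                return -1;
        }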


  • How to write a bison grammar for WDI?

    - by Rizo
    I need some help in bison grammar construction. From my another question: I'm trying to make a meta-language for writing markup code (such as xml and html) wich can be directly embedded into C/C++ code. Here is a simple sample written in this language, I call it WDI (Web Development Interface): /* * Simple wdi/html sample source code */ #include <mySite> string name = "myName"; string toCapital(string str); html { head { title { mySiteTitle; } link(rel="stylesheet", href="style.css"); } body(id="default") { // Page content wrapper div(id="wrapper", class="some_class") { h1 { "Hello, " + toCapital(name) + "!"; } // Lists post ul(id="post_list") { for(post in posts) { li { a(href=post.getID()) { post.tilte; } } } } } } } Basically it is a C source with a user-friendly interface for html. As you can see the traditional tag-based style is substituted by C-like, with blocks delimited by curly braces. I need to build an interpreter to translate this code to html and posteriorly insert it into C, so that it can be compiled. The C part stays intact. Inside the wdi source it is not necessary to use prints, every return statement will be used for output (in printf function). The program's output will be clean html code. So, for example a heading 1 tag would be transformed like this: h1 { "Hello, " + toCapital(name) + "!"; } // would become: printf("<h1>Hello, %s!</h1>", toCapital(name)); My main goal is to create an interpreter to translate wdi source to html like this: tag(attributes) {content} = <tag attributes>content</tag> Secondly, html code returned by the interpreter has to be inserted into C code with printfs. Variables and functions that occur inside wdi should also be sorted in order to use them as printf parameters (the case of toCapital(name) in sample source). Here are my flex/bison files: id [a-zA-Z_]([a-zA-Z0-9_])* number [0-9]+ string \".*\" %% {id} { yylval.string = strdup(yytext); return(ID); } {number} { yylval.number = atoi(yytext); return(NUMBER); } {string} { yylval.string = strdup(yytext); return(STRING); } "(" { return(LPAREN); } ")" { return(RPAREN); } "{" { return(LBRACE); } "}" { return(RBRACE); } "=" { return(ASSIGN); } "," { return(COMMA); } ";" { return(SEMICOLON); } \n|\r|\f { /* ignore EOL */ } [ \t]+ { /* ignore whitespace */ } . { /* return(CCODE); Find C source */ } %% %start wdi %token LPAREN RPAREN LBRACE RBRACE ASSIGN COMMA SEMICOLON CCODE QUOTE %union { int number; char *string; } %token <string> ID STRING %token <number> NUMBER %% wdi : /* empty */ | blocks ; blocks : block | blocks block ; block : head SEMICOLON | head body ; head : ID | ID attributes ; attributes : LPAREN RPAREN | LPAREN attribute_list RPAREN ; attribute_list : attribute | attribute COMMA attribute_list ; attribute : key ASSIGN value ; key : ID {$$=$1} ; value : STRING {$$=$1} /*| NUMBER*/ /*| CCODE*/ ; body : LBRACE content RBRACE ; content : /* */ | blocks | STRING SEMICOLON | NUMBER SEMICOLON | CCODE ; %% I am having difficulties on defining a proper grammar for the language, specially in splitting WDI and C code . I just started learning language processing techniques so I need some orientation. Could someone correct my code or give some examples of what is the right way to solve this problem?


  • What are good CLI tools for JSON?

    - by jasonmp85
    General Problem Though I may be diagnosing the root cause of an event, determining how many users it affected, or distilling timing logs in order to assess the performance and throughput impact of a recent code change, my tools stay the same: grep, awk, sed, tr, uniq, sort, zcat, tail, head, join, and split. To glue them all together, Unix gives us pipes, and for fancier filtering we have xargs. If these fail me, there's always perl -e. These tools are perfect for processing CSV files, tab-delimited files, log files with a predictable line format, or files with comma-separated key-value pairs. In other words, files where each line has next to no context. XML Analogues I recently needed to trawl through Gigabytes of XML to build a histogram of usage by user. This was easy enough with the tools I had, but for more complicated queries the normal approaches break down. Say I have files with items like this: <foo user="me"> <baz key="zoidberg" value="squid" /> <baz key="leela" value="cyclops" /> <baz key="fry" value="rube" /> </foo> And let's say I want to produce a mapping from user to average number of <baz>s per <foo>. Processing line-by-line is no longer an option: I need to know which user's <foo> I'm currently inspecting so I know whose average to update. Any sort of Unix one liner that accomplishes this task is likely to be inscrutable. Fortunately in XML-land, we have wonderful technologies like XPath, XQuery, and XSLT to help us. Previously, I had gotten accustomed to using the wonderful XML::XPath Perl module to accomplish queries like the one above, but after finding a TextMate Plugin that could run an XPath expression against my current window, I stopped writing one-off Perl scripts to query XML. And I just found out about XMLStarlet which is installing as I type this and which I look forward to using in the future. JSON Solutions? So this leads me to my question: are there any tools like this for JSON? It's only a matter of time before some investigation task requires me to do similar queries on JSON files, and without tools like XPath and XSLT, such a task will be a lot harder. If I had a bunch of JSON that looked like this: { "firstName": "Bender", "lastName": "Robot", "age": 200, "address": { "streetAddress": "123", "city": "New York", "state": "NY", "postalCode": "1729" }, "phoneNumber": [ { "type": "home", "number": "666 555-1234" }, { "type": "fax", "number": "666 555-4567" } ] } And wanted to find the average number of phone numbers each person had, I could do something like this with XPath: fn:avg(/fn:count(phoneNumber)) Questions Are there any command-line tools that can "query" JSON files in this way? If you have to process a bunch of JSON files on a Unix command line, what tools do you use? Heck, is there even work being done to make a query language like this for JSON? If you do use tools like this in your day-to-day work, what do you like/dislike about them? Are there any gotchas? I'm noticing more and more data serialization is being done using JSON, so processing tools like this will be crucial when analyzing large data dumps in the future. Language libraries for JSON are very strong and it's easy enough to write scripts to do this sort of processing, but to really let people play around with the data shell tools are needed. Related Questions Grep and Sed Equivalent for XML Command Line Processing Is there a query language for JSON? JSONPath or other XPath like utility for JSON/Javascript; or Jquery JSON
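    In the meantime, a stopgap in the spirit of the perl -e fallback - a sketch assuming one JSON document per file, as in the example - the phone-number average can be computed from the shell with Python's standard json module:

        # avg_phones.py -- usage: python avg_phones.py person1.json person2.json ...
        import json, sys

        docs = [json.load(open(path)) for path in sys.argv[1:]]
        total = sum(len(doc.get("phoneNumber", [])) for doc in docs)
        print(total / float(len(docs)))   # average phone numbers per person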


  • surfaceDestroyed called out of turn

    - by Avasulthiris
    I'm currently developing on minimum SDK version 3 (Android 1.5, Cupcake) and I'm having a strange, unexplained issue that I have not been able to solve on my own. It is now becoming a rather urgent issue, as I've already missed one deadline... I'm writing a high-level library to make long-term Android development easier and quicker. One specific module has to capture images for an application... I've gotten everything right so far over the last couple of months, except this one little thing, and I don't know what to do any more:

    When I use the Camera object and implement a SurfaceHolder.Callback, the methods surfaceCreated() and surfaceChanged() are called one after the other. Then, when the activity finishes, surfaceDestroyed() is called. This is how it should be, but when I stick the exact same code in my library (a plain Java library that references the Android API - not in an activity), surfaceDestroyed() is called directly after created and changed. As a result, the camera object is closed before I can use it and the application force closes. What a pain. I can't do anything! This method call is controlled by the device. Why does the surface close for no reason? Even when I post it to run on the activity thread through my own invokeAndWait(Runnable) method, like I do for many other things. I have 5 different working examples of different ways and implementations of capturing images in Android, but I still get the same issue when I plug it into my library. I don't understand what the difference is. The code is pretty much the same - and I post all the related code to the UI thread, so it's not a thread-handling issue or anything like that. I've rewritten it about 20 times in different ways - same issue every time.

    The only other way to approach it that I know of is creating a new Camera and setting it to the VideoView. The Android source (C++ native code), however, provides no Camera constructor, only an open() method which automatically forwards the camera's state to 'prepared', but I can only set the camera to the VideoView from the 'initialized' state. Pretty silly, I know, but there is no way around it unless I modify the Android library source code, haha - not an option! The API does not allow for this method - you are expected to use it like my first example.

    So essentially, I just need to understand exactly why surfaceDestroyed() is called out of turn and whether there is anything I can do to avoid it closing. If I can just understand the exact logic behind it and how it works! The documentation isn't much help! Secondly, does someone know of any alternative ways to do it, as in my second example, but hopefully one which the API actually allows for? haha

    Thanks guys. I would post code, but it's fairly complicated - a couple thousand lines for this specific class - and it would probably take a couple of days to explain with all the threading and event listeners and whatnot. I just need help with this one single thing. Please let me know if you have any questions.


  • Optimizing Python code with many attribute and dictionary lookups

    - by gotgenes
    I have written a program in Python which spends a large amount of time looking up attributes of objects and values from dictionary keys. I would like to know if there's any way I can optimize these lookup times, potentially with a C extension, to reduce the time of execution, or if I need to simply re-implement the program in a compiled language. The program implements some algorithms using a graph. It runs prohibitively slowly on our data sets, so I profiled the code with cProfile using a reduced data set that could actually complete. The vast majority of the time is being burned in one function, and specifically in two statements, generator expressions, within the function: The generator expression at line 202 is neighbors_in_selected_nodes = (neighbor for neighbor in node_neighbors if neighbor in selected_nodes) and the generator expression at line 204 is neighbor_z_scores = (interaction_graph.node[neighbor]['weight'] for neighbor in neighbors_in_selected_nodes) The source code for this function of context provided below. selected_nodes is a set of nodes in the interaction_graph, which is a NetworkX Graph instance. node_neighbors is an iterator from Graph.neighbors_iter(). Graph itself uses dictionaries for storing nodes and edges. Its Graph.node attribute is a dictionary which stores nodes and their attributes (e.g., 'weight') in dictionaries belonging to each node. Each of these lookups should be amortized constant time (i.e., O(1)), however, I am still paying a large penalty for the lookups. Is there some way which I can speed up these lookups (e.g., by writing parts of this as a C extension), or do I need to move the program to a compiled language? Below is the full source code for the function that provides the context; the vast majority of execution time is spent within this function. def calculate_node_z_prime( node, interaction_graph, selected_nodes ): """Calculates a z'-score for a given node. The z'-score is based on the z-scores (weights) of the neighbors of the given node, and proportional to the z-score (weight) of the given node. Specifically, we find the maximum z-score of all neighbors of the given node that are also members of the given set of selected nodes, multiply this z-score by the z-score of the given node, and return this value as the z'-score for the given node. If the given node has no neighbors in the interaction graph, the z'-score is defined as zero. Returns the z'-score as zero or a positive floating point value. :Parameters: - `node`: the node for which to compute the z-prime score - `interaction_graph`: graph containing the gene-gene or gene product-gene product interactions - `selected_nodes`: a `set` of nodes fitting some criterion of interest (e.g., annotated with a term of interest) """ node_neighbors = interaction_graph.neighbors_iter(node) neighbors_in_selected_nodes = (neighbor for neighbor in node_neighbors if neighbor in selected_nodes) neighbor_z_scores = (interaction_graph.node[neighbor]['weight'] for neighbor in neighbors_in_selected_nodes) try: max_z_score = max(neighbor_z_scores) # max() throws a ValueError if its argument has no elements; in this # case, we need to set the max_z_score to zero except ValueError, e: # Check to make certain max() raised this error if 'max()' in e.args[0]: max_z_score = 0 else: raise e z_prime = interaction_graph.node[node]['weight'] * max_z_score return z_prime Here are the top couple of calls according to cProfiler, sorted by time. 
        ncalls     tottime  percall  cumtime  percall  filename:lineno(function)
        156067701  352.313  0.000    642.072  0.000    bpln_contextual.py:204(<genexpr>)
        156067701  289.759  0.000    289.759  0.000    bpln_contextual.py:202(<genexpr>)
        13963893   174.047  0.000    816.119  0.000    {max}
        13963885   69.804   0.000    936.754  0.000    bpln_contextual.py:171(calculate_node_z_prime)
        7116883    61.982   0.000    61.982   0.000    {method 'update' of 'set' objects}
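    One cheap experiment before reaching for C - a sketch, not a verified speedup for this profile: hoist the repeated attribute and dictionary lookups out of the generator expressions, since each .node access and each name lookup costs a dictionary probe per element:

        # Sketch: bind the hot lookups to locals once per call; local-variable
        # access in CPython is cheaper than repeated attribute/global lookups.
        def calculate_node_z_prime(node, interaction_graph, selected_nodes):
            node_attrs = interaction_graph.node          # one attribute lookup
            contains = selected_nodes.__contains__       # one bound-method lookup
            neighbor_z_scores = (
                node_attrs[neighbor]['weight']
                for neighbor in interaction_graph.neighbors_iter(node)
                if contains(neighbor)
            )
            try:
                max_z_score = max(neighbor_z_scores)
            except ValueError:                           # empty iterator
                max_z_score = 0
            return node_attrs[node]['weight'] * max_z_score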


  • Perl: Deleting multiple re-occurring lines where a certain criterion is met

    - by george-lule
    Dear all, I have data that looks like below; the actual file is thousands of lines long.

        Event_time                 Cease_time                 Object_of_reference
        -------------------------- -------------------------- ------------------------------------
        Apr 5 2010 5:54PM          NULL                       SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSJN1,BssFunction=
                                                              BSS_ManagedFunction,BtsSiteMgr=LUGALAMBO_900
        Apr 5 2010 5:55PM          Apr 5 2010 6:43PM          SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSJN1,BssFunction=
                                                              BSS_ManagedFunction,BtsSiteMgr=LUGALAMBO_900
        Apr 5 2010 5:58PM          NULL                       SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSCC1,BssFunction=
                                                              BSS_ManagedFunction,BtsSiteMgr=BULAGA
        Apr 5 2010 5:58PM          Apr 5 2010 6:01PM          SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSCC1,BssFunction=
                                                              BSS_ManagedFunction,BtsSiteMgr=BULAGA
        Apr 5 2010 6:01PM          NULL                       SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSCC1,BssFunction=
                                                              BSS_ManagedFunction,BtsSiteMgr=BULAGA
        Apr 5 2010 6:03PM          NULL                       SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSJN1,BssFunction=
                                                              BSS_ManagedFunction,BtsSiteMgr=KAPKWAI_900
        Apr 5 2010 6:03PM          Apr 5 2010 6:04PM          SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSJN1,BssFunction=
                                                              BSS_ManagedFunction,BtsSiteMgr=KAPKWAI_900
        Apr 5 2010 6:04PM          NULL                       SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSJN1,BssFunction=
                                                              BSS_ManagedFunction,BtsSiteMgr=KAPKWAI_900
        Apr 5 2010 6:03PM          Apr 5 2010 6:03PM          SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSCC1,BssFunction=
                                                              BSS_ManagedFunction,BtsSiteMgr=BULAGA
        Apr 5 2010 6:03PM          NULL                       SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSCC1,BssFunction=
                                                              BSS_ManagedFunction,BtsSiteMgr=BULAGA
        Apr 5 2010 6:03PM          Apr 5 2010 7:01PM          SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSCC1,BssFunction=
                                                              BSS_ManagedFunction,BtsSiteMgr=BULAGA

    As you can see, each file has a header which describes what the various fields stand for (event start time, event cease time, affected element). The header is followed by a number of dashes. My issue is that, in the data, you see a number of entries where the cease time is NULL, i.e. the event is still active. All such entries must go: for each element where the alarm cease time is NULL, the start time, the cease time (in this case NULL) and the actual element must be deleted from the file. In the remaining data, all the text starting from the word SubNetwork up to BtsSiteMgr= must also go, along with the headers and the dashes. The final output should look like below:

        Apr 5 2010 5:55PM          Apr 5 2010 6:43PM          LUGALAMBO_900
        Apr 5 2010 5:58PM          Apr 5 2010 6:01PM          BULAGA
        Apr 5 2010 6:03PM          Apr 5 2010 6:04PM          KAPKWAI_900
        Apr 5 2010 6:03PM          Apr 5 2010 6:03PM          BULAGA
        Apr 5 2010 6:03PM          Apr 5 2010 7:01PM          BULAGA

    Below is a Perl script that I have written. It has taken care of the headers, the dashes and the NULL entries, but I have failed to delete the lines following the NULL entries so as to produce the above output.

        #!/usr/bin/perl
        use strict;
        use warnings;
        $^I = ".bak"; # Backup the file before messing it up.
        open (DATAIN, "<george_perl.txt") || die("can't open datafile: $!");  # Read in the data
        open (DATAOUT, ">gen_results.txt") || die("can't open datafile: $!"); # Prepare for the writing
        while (<DATAIN>) {
            s/Event_time//g;
            s/Cease_time//g;
            s/Object_of_reference//g;
            s/\-//g;
            # Preceding 4 statements are for cleaning out the headers
            my $theline = $_;
            if ($theline =~ /NULL/) {
                next;
                next if $theline =~ /SubN/;
            }
            else {
                print DATAOUT $theline;
            }
        }
        close DATAIN;
        close DATAOUT;

    Kindly help point out any modifications I need to make on the script to make it produce the necessary output. Will be very glad for your help. Kind regards, George.
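    A sketch of the missing piece (it assumes each record wraps onto exactly one continuation line, as in the sample): read the record and its continuation together, so a NULL record and the line after it are skipped as a unit:

        # Sketch: pair each record line with its wrapped continuation, then
        # either drop both (NULL cease time) or emit the times plus site name.
        while (my $line = <DATAIN>) {
            next if $line =~ /^\s*(?:Event_time|-|$)/;  # headers, dashes, blanks
            my $cont = <DATAIN>;                        # wrapped second half
            next if $line =~ /\bNULL\b/;                # active alarm: skip both lines
            my ($start, $cease) =
                $line =~ /^(\w{3}\s+\d+\s+\d{4}\s+\S+)\s+(\w{3}\s+\d+\s+\d{4}\s+\S+)/;
            my ($site) = $cont =~ /BtsSiteMgr=(\S+)/;
            print DATAOUT "$start $cease $site\n" if defined $site;
        }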


  • clojure.algo.monad strange m-plus behaviour with parser-m - why is second m-plus evaluated?

    - by Mark Fisher
    I'm getting unexpected behaviour in some monads I'm writing. I've created a parser-m monad with (def parser-m (state-t maybe-m)) which is pretty much the example given everywhere (here, here and here) I'm using m-plus to act a kind of fall-through query mechanism, in my case, it first reads values from a cache (database), if that returns nil, the next method is to read from "live" (a REST call). However, the second value in the m-plus list is always called, even though its value is disgarded (if the cache hit was good) and the final return is that of the first monadic function. Here's a cutdown version of the issue i'm seeing, and some solutions I found, but I don't know why. My questions are: Is this expected behaviour or a bug in m-plus? i.e. will the 2nd method in a m-plus list always be evaluated if the first item returns a value? Minor in comparison to the above, but if i remove the call _ (fetch-state) from checker, when i evaluate that method, it prints out the messages for the functions the m-plus is calling (when i don't think it should). Is this also a bug? Here's a cut-down version of the code in question highlighting the problem. It simply checks key/value pairs passed in are same as the initial state values, and updates the state to mark what it actually ran. (ns monods.monad-test (:require [clojure.algo.monads :refer :all])) (def parser-m (state-t maybe-m)) (defn check-k-v [k v] (println "calling with k,v:" k v) (domonad parser-m [kv (fetch-val k) _ (do (println "k v kv (= kv v)" k v kv (= kv v)) (m-result 0)) :when (= kv v) _ (do (println "passed") (m-result 0)) _ (update-val :ran #(conj % (str "[" k " = " v "]"))) ] [k v])) (defn filler [] (println "filler called") (domonad parser-m [_ (fetch-state) _ (do (println "filling") (m-result 0)) :when nil] nil)) (def checker (domonad parser-m [_ (fetch-state) result (m-plus ;; (filler) ;; intitially commented out deliberately (check-k-v :a 1) (check-k-v :b 2) (check-k-v :c 3))] result)) (checker {:a 1 :b 2 :c 3 :ran []}) When I run this as is, the output is: > (checker {:a 1 :b 2 :c 3 :ran []}) calling with k,v: :a 1 calling with k,v: :b 2 calling with k,v: :c 3 k v kv (= kv v) :a 1 1 true passed k v kv (= kv v) :b 2 2 true passed [[:a 1] {:a 1, :b 2, :c 3, :ran ["[:a = 1]"]}] I don't expect the line k v kv (= kv v) :b 2 2 true to show at all. The first function to m-plus (as seen in the final output) is what is returned from it. Now, I've found if I pass a filler into m-plus that does nothing (i.e. uncomment the (filler) line) then the output is correct, the :b value isn't evaluated. If I don't have the filler method, and make the first method test fail (i.e. change it to (check-k-v :a 2) then again everything is good, I don't get a call to check :c, only a and b are tested. From my understanding of what the state-t maybe-m transformation is giving me, then the m-plus function should look like: (defn m-plus [left right] (fn [state] (if-let [result (left state)] result (right state)))) which would mean that right isn't called unless left returns nil/false. I'd be interested to know if my understanding is correct or not, and why I have to put the filler method in to stop the extra evaluation (whose effects I don't want to happen). Apologies for the long winded post!


  • INSERT INTO in MS Access 2010 SOMETIMES GETS ERROR: 3073 Operation must use an updateable query

    - by Gary
    I get the error "3073 Operation must use an updateable query" SOMETIMES, while performing an INSERT statement. I have no problem on my Windows 7 PC, but the person I am writing this for sometimes gets the error. She also has MS Access 2010 on Windows 7. As I said, I have never got it on my PC, and she only gets it sometimes. The code will insert a number of rows and then throw the error, and other times not throw the error at all. The error occurs whether I have the code and data in one .mdb file or in separate files. Here's a snippet of the code:

        OrderHdrInsertStmnt = " INSERT INTO ORDER_HDR " _
            & "(ORDER_ID, SOURCE_CODE, ORDER_DATE, SHIP_FNAME, SHIP_LNAME, SHIP_EMAIL, SHIP_COMP, SHIP_PHONE, SHIP_ADDR, SHIP_CITY, SHIP_STATE, SHIP_ZIP, SHIP_CNTRY, " _
            & " BILL_FNAME, BILL_LNAME, BILL_EMAIL, BILL_COMP, BILL_PHONE, BILL_ADDR, BILL_CITY, BILL_STATE, BILL_ZIP, BILL_CNTRY, " _
            & " TAX, SHIPPING, TOTAL, MOD_DATE, INSERT_DATE) " _
            & " VALUES (" _
            & "'" & OrderId & "','" & SourceCode & "','" & Orderdate & "','" & ShipFName & "','" & ShipLName & "','" & ShipEmail & "','" & ShipComp & "','" & ShipPhone & "','" & ShipAddr & "','" & ShipCity & "','" & ShipState & "','" & ShipZip & "','" & ShipCntry _
            & "','" & BillFName & "','" & BillLName & "','" & BillEmail & "','" & BillComp & "','" & BillPhone & "','" & BillAddr & "','" & BillCity & "','" & BillState & "','" & BillZip & "','" & BillCntry _
            & "','" & OrderTax & "','" & OrderShipping & "','" & OrderTotal & "','" & ImportDate & "','" & ImportDate & "');"

    Then I use:

        dbsCurrent.Execute OrderHdrInsertStmnt, dbFailOnError

    Any assistance would be great!
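    One direction worth trying - a sketch with invented parameter names, not a diagnosis of the 3073 itself: a temporary parameterised QueryDef sidesteps hand-quoting every value (note that the numeric TAX/SHIPPING/TOTAL columns are currently inserted as quoted strings) and tends to behave more predictably than string-built SQL:

        ' Sketch (DAO): passing "" as the name creates a temporary QueryDef.
        Dim qdf As DAO.QueryDef
        Set qdf = dbsCurrent.CreateQueryDef("", _
            "PARAMETERS pOrderId TEXT(255), pTax CURRENCY; " & _
            "INSERT INTO ORDER_HDR (ORDER_ID, TAX) VALUES ([pOrderId], [pTax]);")
        qdf.Parameters("pOrderId").Value = OrderId
        qdf.Parameters("pTax").Value = OrderTax
        qdf.Execute dbFailOnError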


  • Why does this XSL fail to retrieve values from XML?

    - by Pavitar
    Following is an extract of the XSL stylesheet that I have written. It says my '.xsl' is well formed, but it somehow does not retrieve values from the XML; instead it just pastes the header and table headings.

    EDIT

        <?xml version="1.0" encoding="UTF-8"?>
        <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:fo="http://www.w3.org/1999/XSL/Format"
            xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns:fn="http://www.w3.org/2005/xpath-functions">
            <xsl:template match="/">
                <html>
                    <head>
                        <title>GlenMark Pharma</title>
                    </head>
                    <h1 align="center"><font face="Monotype Corsiva" color="red">GlenMarkPharma</font></h1>
                    <body>
                        <table>
                            <tbody>
                                <tr>
                                    <th>ID</th>
                                    <th>ENAME</th>
                                    <th>Mobile</th>
                                    <th>EMAIL</th>
                                    <th>PWD</th>
                                </tr>
                                <xsl:for-each select="employees/employee">
                                    <tr>
                                        <td><xsl:value-of select="@empID" /></td>
                                        <td><xsl:value-of select="name" /></td>
                                        <td><xsl:value-of select="mobile" /></td>
                                        <td><xsl:value-of select="email" /></td>
                                        <td><xsl:value-of select="pwd" /></td>
                                    </tr>
                                </xsl:for-each>
                            </tbody>
                        </table>
                    </body>
                </html>
            </xsl:template>
        </xsl:stylesheet>

    Here is my XML doc:

        <?xml version="1.0" encoding="UTF-8"?>
        <?xml-stylesheet type="text/xsl" href="MyfirstXsl.xsl"?>
        <!DOCTYPE employees SYSTEM "Mydtd.dtd">
        <me:employees xmlns:me="[email protected]"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="[email protected] Mycomplex.xsd">
            <me:employee empID="EMP101">
                <me:name>Vicky</me:name>
                <me:mobile>9870582356</me:mobile>
                <me:email>[email protected]</me:email>
                <me:pwd>&defPWD;</me:pwd>
            </me:employee>
        </me:employees>

    I have also tried writing:

        <xsl:for-each select="employees/employee">
            <tr>
                <td><xsl:value-of select="@empID" /></td>
                <td><xsl:value-of select="me:name" /></td>

    in the XSL, since I have an alias to the namespace in my XML doc. But it gives me an error of "Invalid Pre-fix". I don't know what I'm doing wrong.
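    A hedged pointer: the "Invalid Pre-fix" error is expected, because me: is never declared in the stylesheet; and the unprefixed path employees/employee matches nothing, because the document's elements live in the me: namespace. A minimal sketch, reusing the document's own namespace string:

        <!-- Sketch: declare the same namespace URI under any prefix, then
             qualify every element step in the paths. -->
        <xsl:stylesheet version="2.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:me="[email protected]">
            <xsl:template match="/">
                <xsl:for-each select="me:employees/me:employee">
                    <!-- attributes such as empID stay unprefixed: they are
                         not in the me: namespace -->
                    <xsl:value-of select="@empID" />
                    <xsl:value-of select="me:name" />
                </xsl:for-each>
            </xsl:template>
        </xsl:stylesheet>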


  • Why might a System.String object not cache its hash code?

    - by Dan Tao
    A glance at the source code for string.GetHashCode using Reflector reveals the following (for mscorlib.dll version 4.0): public override unsafe int GetHashCode() { fixed (char* str = ((char*) this)) { char* chPtr = str; int num = 0x15051505; int num2 = num; int* numPtr = (int*) chPtr; for (int i = this.Length; i > 0; i -= 4) { num = (((num << 5) + num) + (num >> 0x1b)) ^ numPtr[0]; if (i <= 2) { break; } num2 = (((num2 << 5) + num2) + (num2 >> 0x1b)) ^ numPtr[1]; numPtr += 2; } return (num + (num2 * 0x5d588b65)); } } Now, I realize that the implementation of GetHashCode is not specified and is implementation-dependent, so the question "is GetHashCode implemented in the form of X or Y?" is not really answerable. I'm just curious about a few things: If Reflector has disassembled the DLL correctly and this is the implementation of GetHashCode (in my environment), am I correct in interpreting this code to indicate that a string object, based on this particular implementation, would not cache its hash code? Assuming the answer is yes, why would this be? It seems to me that the memory cost would be minimal (one more 32-bit integer, a drop in the pond compared to the size of the string itself) whereas the savings would be significant, especially in cases where, e.g., strings are used as keys in a hashtable-based collection like a Dictionary<string, [...]>. And since the string class is immutable, it isn't like the value returned by GetHashCode will ever even change. What could I be missing? UPDATE: In response to Andras Zoltan's closing remark: There's also the point made in Tim's answer(+1 there). If he's right, and I think he is, then there's no guarantee that a string is actually immutable after construction, therefore to cache the result would be wrong. Whoa, whoa there! This is an interesting point to make (and yes it's very true), but I really doubt that this was taken into consideration in the implementation of GetHashCode. The statement "therefore to cache the result would be wrong" implies to me that the framework's attitude regarding strings is "Well, they're supposed to be immutable, but really if developers want to get sneaky they're mutable so we'll treat them as such." This is definitely not how the framework views strings. It fully relies on their immutability in so many ways (interning of string literals, assignment of all zero-length strings to string.Empty, etc.) that, basically, if you mutate a string, you're writing code whose behavior is entirely undefined and unpredictable. I guess my point is that for the author(s) of this implementation to worry, "What if this string instance is modified between calls, even though the class as it is publicly exposed is immutable?" would be like for someone planning a casual outdoor BBQ to think to him-/herself, "What if someone brings an atomic bomb to the party?" Look, if someone brings an atom bomb, party's over.
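    For concreteness, the caching the question envisions would be something like this sketch (illustrative only; not how mscorlib works):

        // Sketch: lazily cache the hash in an extra field. For string this
        // costs one extra 32-bit int per instance; 0 doubles as the "not yet
        // computed" sentinel, at the price of recomputing for a true hash of 0.
        private int _cachedHash;

        public override int GetHashCode()
        {
            if (_cachedHash == 0)
                _cachedHash = ComputeHashCode();   // hypothetical helper
            return _cachedHash;
        }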


  • php sorting a seriously multidimensional array...

    - by BigDogsBarking
    I'm trying to sort a multidimensional object, and, after looking on php.net and around here, I get that I should write a function that I can then call via usort. I'm having some trouble with the syntax. I haven't ever written something this complicated before, and trying to figure it out feels like a mindbender... I'm working with the array posted at the end of this post. I want to filter out duplicate [n] values. But, and this is the tricky part for me, I want to keep the [n] value that has the smallest [d] value. So, if I have (and this example is simplified, the real array is at the end of this post): Array ( [7777] => Array ( [0] => Array ( [n] => '12345' [d] => 1 ) [1] => Array ( [n] => '67890' [d] => 4 ) ) [8888] => Array ( [2] => Array ( [n] => '12345' [d] => 10 ) [3] => Array ( [n] => '67890' [d] => 2 ) ) ) I want to filter out duplicate [n] values based on the [d] value, so that I wind up with this: Array ( [7777] => Array ( [0] => Array ( [n] => '12345' [d] => 1 ) ) [8888] => Array [3] => Array ( [n] => '67890' [d] => 2 ) ) ) I've tried writing different variations of the function cmp example posted on php.net, but I haven't been able to get any to work, and I think it's because I'm not altogether clear on how to traverse it using their example... I tried: function cmp($a, $b) { if($a['n'] == $b['n']) { if($a['d'] == $b['d']) { return 0; } } return ($a['n'] < $b['n']) ? -1 : 1; } But, that really did not work at all... Anyway, here's the real array I'm trying to work with... Help is greatly appreciated! Array ( [32112] => Array ( [0] => Array ( [n] => '02124' [d] => '0' ) [1] => Array ( [n] => '02124' [d] => '0.240101905123744' ) [2] => Array ( [n] => '11050' [d] => '0.441758632682761' ) [3] => Array ( [n] => '02186' [d] => '0.317514080260304' ) ) [43434] => Array ( [4] => Array ( [n] => '02124' [d] => '5.89936971664429e-05' ) [5] => Array ( [n] => '02124' [d] => '0.145859264792549' ) [6] => Array ( [n] => '11050' [d] => '0.327864593457739' ) [7] => Array ( [n] => '11050' [d] => '0.312135345168295' ) ) )
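    For what it's worth, this may not be a job for usort at all, since entries have to be compared across sub-arrays rather than within one. A sketch of a two-pass filter (variable names invented) that keeps, for each [n], only the entry with the smallest [d]:

        // Sketch: first find the minimum-d location per n, then rebuild.
        $best = array();                          // n => array(outer, inner, d)
        foreach ($data as $outer => $items) {
            foreach ($items as $inner => $entry) {
                $n = $entry['n'];
                $d = (float) $entry['d'];
                if (!isset($best[$n]) || $d < $best[$n][2]) {
                    $best[$n] = array($outer, $inner, $d);
                }
            }
        }
        $filtered = array();
        foreach ($best as $n => $loc) {
            $filtered[$loc[0]][$loc[1]] = $data[$loc[0]][$loc[1]];
        }

    Run against the simplified example, this keeps [7777][0] (n 12345, d 1) and [8888][3] (n 67890, d 2), matching the desired output.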


  • segmented controls mangled during initial transition animation

    - by dLux
    Greetings and salutations folks, I'm relatively new to Objective-C and iPhone programming, so bear with me if I've overlooked something obvious. I created a simple app to play with the different transition animations, setting up a couple of segmented controls and a slider: (Flip/Curl), (left/right) | (up/down), (EaseInOut/EaseIn/EaseOut/Linear). I created a view controller class, and the super view controller switches between 2 instances of the subclass. As you can see from the following image, the first time switching to the 2nd instance, while the animation is occurring the segmented controls are mangled; I'd guess they haven't had enough time to draw themselves completely:

        http://img689.imageshack.us/img689/2320/mangledbuttonsduringtra.png

    They're fine once the animation is done, and on any subsequent switch. If I specify cache:NO in setAnimationTransition it helps, but there still seems to be some sort of progressive reveal for the text in the segmented controls; they still don't seem to be pre-rendered or initialized properly. (And surely there's a way to do this while caching the view being transitioned to, since in this case the view isn't changing and should be cacheable.) I'm building my code based on a couple of tutorials from a book, so I updated didReceiveMemoryWarning to set the instanced view controllers to nil; when I invoke a memory warning in the simulator, I assume it's purging the other view, and it acts like a first transition after loading - the view being transitioned to appears just like the image above. I guess it can't hurt to include the code (sorry if it's considered spamming); this is basically half of it, with a similar chunk following in an else statement for the case of the 2nd side being present, switching back to the 1st:

        - (IBAction)switchViews:(id)sender
        {
            [UIView beginAnimations:@"Transition Animation" context:nil];
            if (self.sideBViewController.view.superview == nil) // sideA is active, sideB is coming
            {
                if (self.sideBViewController == nil)
                {
                    SideAViewController *sBController = [[SideAViewController alloc] initWithNibName:@"SideAViewController" bundle:nil];
                    self.sideBViewController = sBController;
                    [sBController release];
                }
                [UIView setAnimationDuration:sideAViewController.transitionDurationSlider.value];
                if ([sideAViewController.transitionAnimation selectedSegmentIndex] == 0)
                {
                    // flip: 0 == left, 1 == right
                    if ([sideAViewController.flipDirection selectedSegmentIndex] == 0)
                        [UIView setAnimationTransition:UIViewAnimationTransitionFlipFromLeft forView:self.view cache:YES];
                    else
                        [UIView setAnimationTransition:UIViewAnimationTransitionFlipFromRight forView:self.view cache:YES];
                }
                else
                {
                    // curl: 0 == up, 1 == down
                    if ([sideAViewController.curlDirection selectedSegmentIndex] == 0)
                        [UIView setAnimationTransition:UIViewAnimationTransitionCurlUp forView:self.view cache:YES];
                    else
                        [UIView setAnimationTransition:UIViewAnimationTransitionCurlDown forView:self.view cache:YES];
                }
                if ([sideAViewController.animationCurve selectedSegmentIndex] == 0)
                    [UIView setAnimationCurve:UIViewAnimationCurveEaseInOut];
                else if ([sideAViewController.animationCurve selectedSegmentIndex] == 1)
                    [UIView setAnimationCurve:UIViewAnimationCurveEaseIn];
                else if ([sideAViewController.animationCurve selectedSegmentIndex] == 2)
                    [UIView setAnimationCurve:UIViewAnimationCurveEaseOut];
                else if ([sideAViewController.animationCurve selectedSegmentIndex] == 3)
                    [UIView setAnimationCurve:UIViewAnimationCurveLinear];
                [sideBViewController viewWillAppear:YES];
                [sideAViewController viewWillDisappear:YES];
                [sideAViewController.view removeFromSuperview];
                [self.view insertSubview:sideBViewController.view atIndex:0];
                [sideBViewController viewDidAppear:YES];
                [sideAViewController viewDidDisappear:YES];
            }

    Any other tips or pointers about writing good, clean code are also appreciated; I realize I still have a lot to learn. Thank you for your time. -- d


  • MVC using ODP.NET getting ORA-01840

    - by sse
    I am writing a simple MVC Application using ODP.NET. I am trying to call a Pl/Sql proc that inserts a record. Here is the simple Pl/Sql: procedure spAddCountry(pGisRecid in country.GISRECID%type, pCountryCode in country.COUNTRYCODE%type, pCountryName in country.COUNTRYNAME%type, pCurrencyCode in country.CURRENCYCODE%type, pEUTerritory in country.EUTERRITORY%type, pFatCAStatus in country.FATCASTATUS%type, pFATF in country.FATF%type, pFSCountryCode in country.COUNTRYCODE%type, pInsertedBy in country.INSERTEDBY%type, pInsertedOn in country.INSERTEDON%type, pLanguages in country.LANGUAGES%type, pNCCT in country.NCCT%type) is PRAGMA AUTONOMOUS_TRANSACTION; begin INSERT INTO COUNTRY (GISRECID, COUNTRYCODE, COUNTRYNAME, CURRENCYCODE, EUTERRITORY, FATCASTATUS, FATF, FSCOUNTRYCODE, INSERTEDBY, INSERTEDON, LANGUAGES, NCCT) VALUES(pGISRECID, pCOUNTRYCODE, pCOUNTRYNAME, pCURRENCYCODE, pEUTERRITORY, pFATCASTATUS, pFATF, pFSCOUNTRYCODE, pINSERTEDBY, pINSERTEDON, pLANGUAGES, pNCCT); Commit; end; I am having difficulty passing the date parameter, pInsertedOn, to the Stored Proc. I have verified that the web form retrieves the form data successfully and calls the AddCountry method below, which in turns calls the stored proc, spAddCountry, after populating all of the parms. Here is a snippet of the MVC C# code. I get the following exception: "ORA-01840 input value not long enough for date format". public void AddCountry(Country aCountry) //because the country object field names match the form field names they automatically get bound!! { string oradb = "Data Source=XYZ;User Id=XYZ;Password=xyz;"; OracleConnection conn = new OracleConnection(oradb); OracleCommand cmd = conn.CreateCommand(); cmd.CommandText = "tstpack.spAddCountry"; cmd.CommandType = CommandType.StoredProcedure; ... OracleParameter paramInsertedBy = new OracleParameter(); paramInsertedBy.ParameterName = "pInsertedBy"; paramInsertedBy.Value = aCountry.InsertedBy; cmd.Parameters.Add(paramInsertedBy); // CultureInfo ci = new CultureInfo("en-US"); OracleParameter paramInsertedOn = new OracleParameter(); paramInsertedOn.ParameterName = "pInsertedOn"; // paramInsertedOn.Value = DateTime.Now; //just testing to see if it's WebForm issue // paramInsertedOn.Value = Convert.ToDateTime(DateTime.Now.ToString(), ci); //flail! paramInsertedOn.Value = aCountry.InsertedOn; cmd.Parameters.Add(paramInsertedOn); ... conn.Open(); cmd.ExecuteNonQuery(); //CRASH! ORA-01840 conn.Close(); } Just to verify that the flow of the program is working, I tried removing the date parm "pInsertedOn" from the pl/sql and from the parm list above, and everything worked fine. I know I am going off of the rails with the date. Can someone tell me how to pass a date to Oracle from an MVC WebForm? Is there some sort of type cast needed? I would really appreciate an example too. Thanks so much! ps, I did try changing the parm type to Varchar2 in the Pl/Sql and doing some conversions myself in the Pl/Sql, the automatic MVC binder was getting in my way, forcing the property of paramInsertedOn.OracleType to DateTime. I tried forcing it to Varchar2, but no luck there either...
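    A hedged suggestion rather than a confirmed fix: ORA-01840 is a date-parsing error, which points at the value reaching Oracle as text. Setting the parameter type explicitly (the property is OracleDbType in ODP.NET) and binding a DateTime keeps any string-to-date conversion out of the picture:

        // Sketch: declare the bind type and pass a DateTime value, so no
        // NLS date-format parsing happens server-side. (Assumes InsertedOn
        // is, or can be converted to, a System.DateTime on the .NET side.)
        OracleParameter paramInsertedOn = new OracleParameter();
        paramInsertedOn.ParameterName = "pInsertedOn";
        paramInsertedOn.OracleDbType = OracleDbType.Date;
        paramInsertedOn.Value = aCountry.InsertedOn;
        cmd.Parameters.Add(paramInsertedOn);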


  • Recommended integration mechanism for bi-directional, authenticated, encrypted connection in C clien

    - by rcampbell
    Let me first give an example. Imagine you have a single server running a JVM application. This server keeps a collection of N equations, once for each client: Client #1: 2x Client #2: 1 + y Client #3: z/4 This server includes an HTTP interface so that random visitors can type https://www.acme.com/client/3 int their browsers and see the latest evaluated result of z/4. The tricky part is that either the client or the server may change the variable value at any time, informing the other party immediately. More specifically, Client #3 - a C app - can initially tell the server that z = 20. An hour later that same client informs the server that z = 23. Likewise the server can later inform the client that z = 28. As caf pointed out in the comments, there can be a race condition when values are changed by the client and server simultaneously. The solution would be for both client and server to send the operation performed in their message, which would need to be executed by the other party. To keep things simple, let's limit the operations to (commutative) addition, allowing us to disregard message ordering. For example, the client seeds the server with z = 20: server:z=20, client:z=20 server sends {+3} message (so z=23 locally) & client sends {-2} message (so z=18 locally) at the exact same time server receives {-2} message at some point, adds to his local copy so z=21 client receives {+3} message at some point, adds to his local copy so z=21 As long as all messages are eventually evaluated by both parties, the correct answer will eventually be given to the users of the client and server since we limited ourselves to commutative operations (addition of 3 and -2). This does mean that both client and server can be returning incorrect answers in the time it takes for messages to be exchanged and processed. While undesirable, I believe this is unavoidable. Some possible implementations of this idea include: Open an encrypted, always on TCP socket connection for communication Pros: no additional infrastructure needed, client and server know immediately if there is a problem (disconnect) with the other party, fairly straightforward (except the the encryption), native support from both JVM and C platforms Cons: pretty low-level so you end up writing a lot yourself (protocol, delivery verification, retry-on-failure logic), probably have a lot of firewall headaches during client app installation Asynchronous messaging (ex: ActiveMQ) Pros: transactional, both C & Java integration, free up the client and server apps from needing retry logic or delivery verification, pretty straightforward encryption, easy extensibility via message filters/routers/etc Cons: need additional infrastructure (message server) which must never fail, Database or file system as asynchronous integration point Same pros/cons as above but messier RESTful Web Service Pros: simple, possible reuse of the server's existing REST API, SSL figures out the encryption problem for you (maybe use RSA key a la GitHub for authentication?) Cons: Client now needs to run a C HTTP REST server w/SSL, client and server need retry logic. Axis2 has both a Java and C version, but you may be limited to SOAP. What other techniques should I be evaluating? What real world experiences have you had with these mechanisms? Which do you recommend for this problem and why?


  • Data munging and data import scripting

    - by morpheous
    I need to write some scripts to carry out some tasks on my server (running Ubuntu Server 8.04 LTS). The tasks are to be run periodically, so I will be running the scripts as cron jobs. I have divided the tasks into "group A" and "group B" - because (in my mind at least) they are a bit different.

    Task Group A
    - Import data from a file and possibly reformat it - by reformatting, I mean doing things like sanitizing the data, possibly normalizing it and/or running calculations on 'columns' of the data.
    - Import the munged data into a database. For now, I am mostly using MySQL for the vast majority of imports, although some files will be imported into a SQLite database.

    Note: the files will be mostly text files, although some of the files are in a binary format (my own proprietary format, written by a C++ application I developed).

    Task Group B
    - Extract data from the database.
    - Perform calculations on the data and either insert or update tables in the database.

    My coding experience is primarily as a C/C++ developer, although I have been using PHP as well for the last 2 years or so. I am from a Windows background, so I am still finding my feet in the Linux environment. My question is this - I need to write scripts to perform the tasks I described above. Although I suppose I could write a few C++ applications to be used in the shell scripts, I think it may be better to write them in a scripting language (maybe this is a flawed assumption?). My thinking is that it would be easier to modify things in a script - no need to rebuild etc. for changes to functionality. Additionally, data munging in C++ tends to involve more lines of code than in "natural" scripting languages such as Perl, Python etc. A sketch of what a group-A task might look like follows at the end of this post.

    Assuming that the majority of people on here agree that scripting is the way to go, herein lies my dilemma: which scripting language to use to perform the tasks above (given my background)? My gut instinct tells me that Perl (shudder) would be the most obvious choice for performing all of the above tasks. BUT (and that is a big BUT), the mere mention of Perl makes my toes curl, as I had a very, very bad experience with it a while back. The syntax seems quite unnatural to me - despite how many times I have tried to learn it - so if possible, I would really like to give it a miss. PHP (which I already know) I am also not sure is a good candidate for scripting on the CLI (I have not seen many examples of how to do this - so I may be wrong).

    The last thing I must mention is that IF I have to learn a new language in order to do this, I cannot afford (time constraint) to spend more than a day learning the key commands/features required in order to do this (I can always learn the details of the language later, once I have actually deployed the scripts). So, which scripting language would you recommend (PHP, Python, Perl, [insert your favorite here]) - and most importantly WHY? Or should I just stick to writing little C++ applications that I call in a shell script?

    Lastly, if you have suggested a scripting language, can you please show with a FEW lines (Perl mongers - I'm looking in your direction [nothing too cryptic!] ;) ) how I can use the language you suggested to do what I want to do. Hopefully, the lines you present will convince me that it can be done easily and elegantly in the language you suggested.
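    For scale, here is roughly what a group-A task looks like in Python - a sketch with invented file, table and column names, using the MySQLdb driver that was current on 8.04-era systems:

        #!/usr/bin/env python
        # Sketch: sanitize a CSV, compute a per-row value, load into MySQL.
        import csv
        import MySQLdb

        conn = MySQLdb.connect(user="loader", passwd="secret", db="metrics")
        cur = conn.cursor()
        fh = open("input.csv", "rb")
        for row in csv.reader(fh):
            name = row[0].strip().lower()             # sanitize
            total = sum(float(v) for v in row[1:])    # per-'column' calculation
            cur.execute("INSERT INTO readings (name, total) VALUES (%s, %s)",
                        (name, total))
        fh.close()
        conn.commit()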

    Read the article

  • Generating cache file for Twitter rss feed

    - by Kerri
    I'm working on a site with a simple PHP-generated Twitter box with user timeline tweets pulled from the user_timeline RSS feed, cached to a local file to cut down on loads and as a backup for when Twitter goes down. I based the caching on this: http://snipplr.com/view/8156/twitter-cache/. It all seemed to be working well yesterday, but today I discovered the cache file was blank. Deleting it and then loading again generated a fresh file. The code I'm using is below. I've edited it to try to get it to work with what I was already using to display the feed, and probably messed something crucial up. The changes I made are the following (and I strongly believe that any of these could be the cause):

    - Revised the time-difference code (the linked example seemed to use a custom function that wasn't included in the code).
    - Removed the "serialize" function from the "fwrites". This is purely because I couldn't figure out how to unserialize once I loaded it in the display code. I truthfully don't understand the role that serialize plays or how it works, so I'm sure I should have kept it in. If that's the case, I just need to understand where/how to unserialize in the second part of the code so that it can be parsed.
    - Removed the $rss variable in favor of just loading up the cache file in my original tweet display code.

    So, here are the relevant parts of the code I used:

        <?php
        $feedURL = "http://twitter.com/statuses/user_timeline/#######.rss";

        // START CACHING
        $cache_file = dirname(__FILE__).'/cache/twitter_cache.rss';

        // Start with the cache
        if(file_exists($cache_file)){
            $mtime = (strtotime("now") - filemtime($cache_file));
            if($mtime > 600) {
                $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/75168146.rss');
                $cache_static = fopen($cache_file, 'wb');
                fwrite($cache_static, $cache_rss);
                fclose($cache_static);
            }
            echo "<!-- twitter cache generated ".date('Y-m-d h:i:s', filemtime($cache_file))." -->";
        }
        else {
            $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/#######.rss');
            $cache_static = fopen($cache_file, 'wb');
            fwrite($cache_static, $cache_rss);
            fclose($cache_static);
        }
        // END CACHING

        // START DISPLAY
        $doc = new DOMDocument();
        $doc->load($cache_file);
        $arrFeeds = array();
        foreach ($doc->getElementsByTagName('item') as $node) {
            $itemRSS = array (
                'title' => $node->getElementsByTagName('title')->item(0)->nodeValue,
                'date'  => $node->getElementsByTagName('pubDate')->item(0)->nodeValue
            );
            array_push($arrFeeds, $itemRSS);
        }
        // the rest of the formatting and display code....
        }
        ?>

    ETA 6/17: Nobody can help…? I'm thinking it has something to do with writing a blank cache file over a good one when Twitter is down, because otherwise I imagine this should be happening every ten minutes when the cache file is overwritten, but it doesn't happen that frequently. I made the following change to the part that checks how old the file is before overwriting it:

        $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/75168146.rss');
        if($mtime > 600 && $cache_rss != ''){
            $cache_static = fopen($cache_file, 'wb');
            fwrite($cache_static, $cache_rss);
            fclose($cache_static);
        }

    …so now it will only write the file if it's over ten minutes old and there's actual content retrieved from the RSS page. Do you think this will work?
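
    The pattern the ETA is reaching for is generic: never overwrite the cache unless the fetch succeeded and returned content, and write through a temp file so a crash mid-write cannot leave a blank cache behind. A sketch of that pattern (shown in Python rather than PHP purely for illustration; the path, feed URL, and timeout are placeholders):

        import os
        import tempfile
        import time
        import urllib.request

        CACHE = "cache/twitter_cache.rss"            # placeholder path
        FEED = "https://example.com/timeline.rss"    # placeholder feed URL
        MAX_AGE = 600                                # seconds, as in the PHP code

        def refresh_cache():
            if os.path.exists(CACHE) and time.time() - os.path.getmtime(CACHE) <= MAX_AGE:
                return                               # cache is fresh; nothing to do
            try:
                body = urllib.request.urlopen(FEED, timeout=10).read()
            except OSError:
                return                               # feed is down; keep the old cache
            if not body:
                return                               # never clobber a good cache with nothing
            # Write to a temp file first, then rename: the swap is atomic on POSIX.
            fd, tmp = tempfile.mkstemp(dir=os.path.dirname(CACHE))
            with os.fdopen(fd, "wb") as fh:
                fh.write(body)
            os.replace(tmp, CACHE)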

    Read the article

  • urllib2 misbehaving with dynamically loaded content

    - by Sheena
    Some code:

        headers = {}
        headers['user-agent'] = 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0'
        headers['Accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'
        headers['Accept-Language'] = 'en-gb,en;q=0.5'
        #headers['Accept-Encoding'] = 'gzip, deflate'

        request = urllib.request.Request(sURL, headers = headers)
        try:
            response = urllib.request.urlopen(request)
        except error.HTTPError as e:
            print('The server couldn\'t fulfill the request.')
            print('Error code: {0}'.format(e.code))
        except error.URLError as e:
            print('We failed to reach a server.')
            print('Reason: {0}'.format(e.reason))
        else:
            f = open('output/{0}.html'.format(sFileName), 'w')
            f.write(response.read().decode('utf-8'))

    A URL: http://groupon.cl/descuentos/santiago-centro

    The situation. Here's what I did:

    1. Enable JavaScript in the browser.
    2. Open the URL above and keep an eye on the console.
    3. Disable JavaScript.
    4. Repeat step 2.
    5. Use urllib2 to grab the webpage and save it to a file.
    6. Enable JavaScript.
    7. Open the file with the browser and observe the console.
    8. Repeat step 7 with JavaScript off.

    Results:

    - In step 2 I saw that a whole lot of the page content was loaded dynamically using AJAX, so the HTML that arrived was a sort of skeleton and AJAX filled in the gaps. This is fine and not at all surprising; since the page should be SEO-friendly, it should work fine without JS.
    - In step 4, nothing happens in the console and the skeleton page loads pre-populated, rendering the AJAX unnecessary. This is also completely not confusing.
    - In step 7, the AJAX calls are made but fail. This is also OK, since the URLs they use are not local, so the calls are broken. The page looks like the skeleton. This is also great and expected.
    - In step 8, no AJAX calls are made and the skeleton is just a skeleton. I would have thought this should behave very much like step 4.

    The question: what I want to do is use urllib2 to grab the HTML from step 4, but I can't figure out how. What am I missing, and how can I pull this off? To paraphrase: if I were writing a spider, I would want to grab plain ol' HTML (as in what resulted in step 4). I don't want to execute AJAX or any JavaScript at all, and I don't want anything populated dynamically - I just want HTML. The SEO-friendly site wants me to get what I want, because that's what SEO is all about. How would one go about getting plain HTML content in the situation I outlined? To do it manually I would turn off JS, navigate to the page, and copy the HTML; I want to automate this.

    Stuff I've tried: I used Wireshark to look at packet headers, and the GETs sent from my PC in steps 2 and 4 have the same headers. Reading about SEO makes me think this is pretty normal, since otherwise techniques such as Hijax wouldn't be used.

    Here are the headers my browser sends:

        Host: groupon.cl
        User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-gb,en;q=0.5
        Accept-Encoding: gzip, deflate
        Connection: keep-alive

    Here are the headers my script sends:

        Accept-Encoding: identity
        Host: groupon.cl
        Accept-Language: en-gb,en;q=0.5
        Connection: close
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        User-Agent: User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0

    The differences are:

    - My script sends Connection: close instead of keep-alive. I can't see how this would cause a problem.
    - My script sends Accept-Encoding: identity. This might be the cause of the problem, though I can't really see why the host would use this field to decide what to serve. If I change the encoding to match the browser request headers, I then have trouble decoding the content. I'm working on this now... watch this space; I'll update the question as new info comes up.
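
    On the decoding trouble at the end: if the request advertises gzip, the body must be decompressed before it is decoded as UTF-8. A minimal sketch (my own illustration, not the script above) in Python 3:

        import gzip
        import urllib.request

        req = urllib.request.Request(
            "http://groupon.cl/descuentos/santiago-centro",
            headers={
                "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) "
                              "Gecko/20100101 Firefox/16.0",  # note: no doubled 'User-Agent:' prefix
                "Accept-Encoding": "gzip",
            },
        )
        resp = urllib.request.urlopen(req)
        body = resp.read()
        # Decompress only if the server actually gzipped the response.
        if resp.headers.get("Content-Encoding") == "gzip":
            body = gzip.decompress(body)
        html = body.decode("utf-8")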

    Read the article

  • R: extracting "clean" UTF-8 text from a web page scraped with RCurl

    - by SlowLearner
    Using R, I am trying to scrape a web page and save the text, which is in Japanese, to a file. Ultimately this needs to scale to hundreds of pages on a daily basis. I already have a workable solution in Perl, but I am trying to migrate the script to R to reduce the cognitive load of switching between multiple languages. So far I am not succeeding. Related questions seem to be this one on saving CSV files and this one on writing Hebrew to an HTML file. However, I haven't been successful in cobbling together a solution based on the answers there. The pages are from Yahoo! Japan Finance, and my Perl code looks like this:

        use strict;
        use HTML::Tree;
        use LWP::Simple;
        #use Encode;
        use utf8;
        binmode STDOUT, ":utf8";

        my @arr_links = ();
        $arr_links[1] = "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7203";
        $arr_links[2] = "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7201";

        foreach my $link (@arr_links){
            $link =~ s/"//gi;
            print("$link\n");
            my $content = get($link);
            my $tree = HTML::Tree->new();
            $tree->parse($content);
            my $bar = $tree->as_text;
            open OUTFILE, ">>:utf8", join("", "c:/", substr($link, -4), "_perl.txt") || die;
            print OUTFILE $bar;
        }

    This Perl script produces a CSV file that looks like the screenshot below, with proper kanji and kana that can be mined and manipulated offline. My R code, such as it is, looks like the following. The R script is not an exact duplicate of the Perl solution just given, as it doesn't strip out the HTML and leave only the text (this answer suggests an approach using R, but it doesn't work for me in this case), and it doesn't have the loop and so on, but the intent is the same:

        require(RCurl)
        require(XML)

        links <- list()
        links[1] <- "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7203"
        links[2] <- "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7201"

        txt <- getURL(links, .encoding = "UTF-8")
        Encoding(txt) <- "bytes"
        write.table(txt, "c:/geturl_r.txt",
                    quote = FALSE, row.names = FALSE,
                    sep = "\t", fileEncoding = "UTF-8")

    This R script generates the output shown in the screenshot below - basically rubbish. I assume there is some combination of HTML, text, and file encoding that will allow me to generate in R a result similar to that of the Perl solution, but I cannot find it. The header of the HTML page I'm trying to scrape says the charset is utf-8, and I have set the encoding both in the getURL call and in the write.table function to utf-8, but this alone isn't enough.

    The question: how can I scrape the above web page using R and save the text as CSV in "well-formed" Japanese text rather than something that looks like line noise?

    Edit: I have added a further screenshot to show what happens when I omit the Encoding step. I get what look like Unicode codes, but not the graphical representation of the characters. So it may be some kind of locale-related issue, but in the exact same locale the Perl script does provide useful output. So this is still puzzling.
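
    The principle at stake is language-agnostic: fetch raw bytes, decode once with an explicit charset, and write the output back out in UTF-8. For comparison only (not an answer in R), here is that flow in Python, mirroring the Perl script's structure with the same Yahoo URL; the output file name is made up:

        import urllib.request
        from html.parser import HTMLParser

        class TextExtractor(HTMLParser):
            """Collects text nodes, roughly like HTML::Tree's as_text."""
            def __init__(self):
                super().__init__()
                self.chunks = []
            def handle_data(self, data):
                self.chunks.append(data)

        url = "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7203"
        raw = urllib.request.urlopen(url).read()      # bytes, not text
        text = raw.decode("utf-8", errors="replace")  # decode once, explicitly
        parser = TextExtractor()
        parser.feed(text)
        # Write in UTF-8 so the kanji survive the round trip to disk.
        with open("7203_python.txt", "w", encoding="utf-8") as out:
            out.write("".join(parser.chunks))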

    Read the article

  • How can UDP data be passed through RS232 in ANSI C?

    - by moon
    I want to transmit and receive data over RS232 using UDP, and I want to know about techniques that would let me transmit and receive at a faster rate with no loss of data. Thanks in advance. I have tried the following, but it needs improvement if possible:

        #include <stdio.h>
        #include <dos.h>
        #include <string.h>
        #include <conio.h>
        #include <iostream.h>
        #include <stdlib.h>

        #define PORT1 0x3f8

        void main()
        {
            int c, ch, choice, i, a = 0;
            char filename[30], filename2[30], buf;
            FILE *in, *out;
            clrscr();
            while(1) {
                outportb(PORT1+0, 0x03);
                outportb(PORT1+1, 0);
                outportb(PORT1+3, 0x03);
                outportb(PORT1+2, 0xc7);
                outportb(PORT1+4, 0x0b);
                cout<<"\n===============================================================";
                cout<<"\n\t*****Serial Communication By BADR-U-ZAMAN******\nCommunication between two computers By serial port";
                cout<<"\nPlease select\n[1]\tFor sending file \n[2]\tFor receiving file \n[3]\tTo exit\n";
                cout<<"=================================================================\n";
                cin>>choice;
                if(choice==1)
                {
                    strcpy(filename, "C:\\TC\\BIN\\badr.cpp");
                    cout<<filename;
                    for(i=0; i<=strlen(filename); i++)
                        outportb(PORT1, filename[i]);
                    in = fopen(filename, "r");
                    if (in==NULL)
                    {
                        cout<<"cannot open a file";
                        a=1;
                    }
                    if(a!=1)
                        cout<<"\n\nFile sending.....\n\n";
                    while(!feof(in))
                    {
                        buf = fgetc(in);
                        cout<<buf;
                        outportb(PORT1, buf);
                        delay(5);
                    }
                }
                else
                {
                    if(choice==3)
                        exit(0);
                    i=0;
                    buf='a';
                    while(buf!=NULL)
                    {
                        c = inportb(PORT1+5);
                        if(c&1)
                        {
                            buf = inportb(PORT1);
                            filename2[i] = buf;
                            i++;
                        }
                    }
                    out = fopen(filename2, "t");
                    cout<<"\n Filename received:"<<filename[2];
                    cout<<"\nReading from the port...";
                    cout<<"writing to file"<<filename2;
                    do
                    {
                        c = inportb(PORT1+5);
                        if(c&1)
                        {
                            buf = inportb(PORT1);
                            cout<<buf;
                            fputc(buf, out);
                            delay(5);
                        }
                        if(kbhit())
                        {
                            ch = getch();
                        }
                    } while(ch!=27);
                }
                getch();
            }
        }
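
    One general answer to "faster with no loss": raise the baud rate and add a small framing protocol (length + checksum + acknowledgement) instead of polling the UART byte by byte. A hedged sketch of that idea using Python's third-party pyserial package (the port name, baud rate, and ACK byte are placeholders; payloads are assumed to be under 256 bytes so the length fits in one byte):

        import serial  # third-party: pip install pyserial

        ACK = b"\x06"

        def send_frame(port, payload, retries=3):
            """Frame as [length][payload][checksum]; resend until ACKed."""
            frame = bytes([len(payload)]) + payload + bytes([sum(payload) & 0xFF])
            for _ in range(retries):
                port.write(frame)
                if port.read(1) == ACK:   # receiver verified the checksum
                    return True
            return False                  # link is down or too noisy

        ser = serial.Serial("COM1", 115200, timeout=1)  # placeholder port and baud
        send_frame(ser, b"hello over RS232")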

    Read the article

  • How to map code points to unicode characters depending on the font used?

    - by Alex Schröder
    The client prints labels and has been using a set of symbolic (?) fonts to do this. The application uses a single-byte database (Oracle with Latin-1). The old application I am replacing was not Unicode-aware; it somehow did OK. The replacement application I am writing is supposed to handle the old data. The symbols picked from the charmap application often map to particular Unicode characters, but sometimes they don't. What looks like the Moon in the LAB3 font, for example, is in fact U+2014 (EM DASH). When users paste this character into a Swing text field, the character has the code point 8212. It was "moved" into the Private Use Area (by Windows? Java?). When saving this character to the database, Oracle decides that it cannot be safely encoded and replaces it with the dreaded ¿. Thus, I started shifting the characters by 8000: -= 8000 when saving, += 8000 when displaying the field. Unfortunately, I then discovered that other characters were not shifted by the same amount. In one particular font, for example, ž has the code point 382, so I shifted it by +/-256 to "fix" it. By now I'm dreading the discovery of more strange offsets, and I wonder: can I get at this mapping using Java? Perhaps the TTF font has a list of the 255 glyphs it encodes and which Unicode characters those correspond to, so I can do it "right"? Right now I'm using the following kludge:

        static String fromDatabase(String str, String fontFamily) {
            if (str != null && fontFamily != null) {
                Font font = new Font(fontFamily, Font.PLAIN, 1);
                boolean changed = false;
                char[] chars = str.toCharArray();
                for (int i = 0; i < chars.length; i++) {
                    if (font.canDisplay(chars[i] + 0xF000)) { // WE8MSWIN1252 + WinXP
                        chars[i] += 0xF000;
                        changed = true;
                    } else if (chars[i] >= 128 && font.canDisplay(chars[i] + 8000)) { // WE8ISO8859P1 + WinXP
                        chars[i] += 8000;
                        changed = true;
                    } else if (font.canDisplay(chars[i] + 256)) { // ž in LAB1 Eastern = 382
                        chars[i] += 256;
                        changed = true;
                    }
                }
                if (changed) str = new String(chars);
            }
            return str;
        }

        static String toDatabase(String str, String fontFamily) {
            if (str != null && fontFamily != null) {
                boolean changed = false;
                char[] chars = str.toCharArray();
                for (int i = 0; i < chars.length; i++) {
                    int chr = chars[i];
                    if (chars[i] > 0xF000) { // WE8MSWIN1252 + WinXP
                        chars[i] -= 0xF000;
                        changed = true;
                    } else if (chars[i] > 8000) { // WE8ISO8859P1 + WinXP
                        chars[i] = (char) (chars[i] - 8000);
                        changed = true;
                    } else if (chars[i] > 256) { // ž in LAB1 Eastern = 382
                        chars[i] = (char) (chars[i] - 256);
                        changed = true;
                    }
                }
                if (changed) return new String(chars);
            }
            return str;
        }
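
    On "perhaps the TTF font has a list of the glyphs it encodes": it does - a TrueType font's cmap table maps code points to glyph names, and dumping it lets you build the translation table once instead of guessing offsets. A sketch using Python's third-party fontTools package (the font file path is a placeholder for the client's actual font):

        from fontTools.ttLib import TTFont  # third-party: pip install fonttools

        font = TTFont("LAB3.ttf")            # placeholder path to the client's font
        cmap = font.getBestCmap()            # dict of {code point: glyph name}
        for codepoint, glyph in sorted(cmap.items()):
            print(f"U+{codepoint:04X} -> {glyph}")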

    Read the article

< Previous Page | 653 654 655 656 657 658 659 660 661 662 663 664  | Next Page >