Search Results

Search found 6814 results on 273 pages for 'payment processing'.


  • Offline Mapping API

    - by Aaron M
    Are there any services available that allow me to manipulate maps in an offline setting? I am working on a project that requires me to take a map and, based on features of the map, generate a game world. I have looked at a few of the APIs from different providers: Google, MS, etc. The APIs I looked at seem to be strictly for showing a user a map. I am looking for something that allows me to create a derivative of a map (the gameworld) that will never be seen by the public and is only used by the game engine. However, one caveat is that I would like to be able to link the derivative created for use by the game engine with something I can show the user. As an example, think of a cross-country racing sim. Users cannot control the vehicles directly in this game; they can only control the car's setup, driver, etc. I create a gameworld from a map. The gameworld data (driver position, etc.) is overlaid onto a real map. A race might last several days. The only interaction users have with the real map is viewing their position on it, and where they are in relation to the others. I don't want to violate the terms of the API here. I read Google's API TOS, and it seems to me that creating the gameworld would violate their TOS. The features I really need are the following:
    - The ability to locate a specific place on the map by lat/long
    - The ability/rights to grab those maps and save them as an image file temporarily for processing
    - The ability/rights to store a gameworld that is based on the real map
    - The ability to show a user a map with an overlay (this is optional; I can use Google's API, or any other one that supports lat/long)

    Read the article

  • Get Json and output to text file undecoded

    - by Gary
    Hi, I want to fetch a JSON document and write it to a txt file undecoded, exactly as it arrived. I have a script that I use and am modifying, but I am unsure which commands to use. This script decodes the JSON, which is what I want to avoid.

        //Get Age
        list($bstat, $bage, $bdata) = explode("\t", check_file('./advise/roadsnow.txt', 60*2+15));

        //Test Age
        if ($bage > $CacheMaxAge) {
            //echo "The if statement evaluated to true so get new file and reset $bage";
            $bage = "0";
            $file = file_get_contents('http://somesite.jsontxt');
            $out = json_decode($file);
            $report = wordwrap($out->mainText, 100, "\n");
            //$valid = $out->validTo;
            //write the data to a text file called roadsnow.txt
            $myFile = "./advise/roadsnow.txt";
            $fh = fopen($myFile, 'w') or die("can't open file");
            $stringData = $report;
            fwrite($fh, $stringData);
        } else {
            //echo the test evaluated to false; file is not stale so read local cache
            //print "we are at the read local cache";
            $stringData = file_get_contents("./advise/roadsnow.txt");
        }

        // if/else is done, carry on with processing
        //Format file
        $data = $stringData;
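
    For reference, a minimal sketch of the refresh branch that skips decoding entirely and stores the response byte-for-byte (the URL is the placeholder from the question):

        $file = file_get_contents('http://somesite.jsontxt'); // raw JSON text
        // no json_decode(); write the body exactly as received
        file_put_contents('./advise/roadsnow.txt', $file);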

    Read the article

  • How to hand-over a TCP listening socket with minimal downtime?

    - by Shtééf
    While this question is tagged EventMachine, generic BSD-socket solutions in any language are much appreciated too. Some background: I have an application listening on a TCP socket. It is started and shut down with a regular System V style init script. My problem is that it needs some time to start up before it is ready to service the TCP socket. It's not too long, perhaps only 5 seconds, but that's 5 seconds too long when a restart needs to be performed during a workday. It's also crucial that existing connections remain open and are finished normally. Reasons for a restart of the application are patches, upgrades, and the like. I unfortunately find myself in the position that, every once in a while, I need to do this kind of thing in production. The question: I'm looking for a way to do a neat hand-over of the TCP listening socket from one process to another, and as a result get only a split second of downtime. I'd like existing connections/sockets to remain open and finish processing in the old process, while the new process starts servicing new connections. Is there some proven method of doing this using BSD sockets? (Bonus points for an EventMachine solution.) Are there perhaps open-source libraries out there implementing this that I can use as is, or use as a reference? (Again, non-Ruby and non-EventMachine solutions are appreciated too!)
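
    One proven BSD-socket method is descriptor passing over a Unix domain socket (SCM_RIGHTS): the old process hands its listening socket to the new one, stops calling accept, and drains the connections it already holds. A minimal Ruby sketch, assuming an agreed-upon handover path (/tmp/handover.sock is invented for illustration):

        require 'socket'

        # Old process: offer the live listener to whoever connects.
        handover = UNIXServer.new('/tmp/handover.sock')
        peer = handover.accept
        peer.send_io(listener)            # listener is the existing TCPServer
        # ...stop accepting, finish the connections in flight, then exit.

        # New process: collect the listener and start accepting immediately.
        peer = UNIXSocket.new('/tmp/handover.sock')
        listener = peer.recv_io(TCPServer)
        loop { handle(listener.accept) }  # handle() is a hypothetical request handler

    Because the descriptor itself never closes, the kernel keeps queueing incoming connections during the swap, so clients see at most a brief accept delay rather than a refused connection.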

    Read the article

  • NHibernate. DTO -> Domain

    - by Andrew Kalashnikov
    Hello guys. I've got an SOA which processes data for different clients (ASP, Silverlight). The base of this design is the domain model of my business. For transporting domain data and showing it to clients I use DTOs, and for mapping domains to DTOs I use AutoMapper. Now I need to persist new entities coming from clients, and I want to use my DTOs in this scenario too. I've got some questions, as I'm not very familiar with this design:
    1) Is it good practice to build the DTO on the client and send it to the web service on the wire? Or maybe I should pass my domain objects?
    2) Is it possible to have several DTOs for one domain object (one for showing in a grid, and one for saving)? For saving I need to set all non-primitive properties on the client.
    3) DTO to domain: if I've got an int ID, can I use AutoMapper to generate an NHibernate proxy for it, or should I do that manually?
    Your experience and practice are very interesting. Thanks for your answers!
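
    On question 3: NHibernate's ISession.Load<T>(id) returns an uninitialized proxy without querying the database, which is what you want when the client only sends an ID. A hedged C# sketch of wiring that into an AutoMapper map (Article/Category and the property names are placeholders, and whether to resolve the ISession inside a mapping or keep this in an explicit assembler is a design choice):

        Mapper.CreateMap<ArticleDto, Article>()
              .ForMember(dest => dest.Category,
                         opt => opt.MapFrom(src => session.Load<Category>(src.CategoryId)));
        // Load<T> returns a proxy for the given ID; no SELECT is issued until
        // a property other than the identifier is accessed.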

    Read the article

  • stripping default.aspx and //www from the url

    - by b0x0rz
    the code to strip /Default.aspx and //www is not working (as expected):

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            HttpContext context = HttpContext.Current;
            string url = context.Request.RawUrl.ToString();
            bool doRedirect = false;

            // remove > default.aspx
            if (url.EndsWith("/default.aspx", StringComparison.OrdinalIgnoreCase))
            {
                url = url.Substring(0, url.Length - 12);
                doRedirect = true;
            }

            // remove > www
            if (url.Contains("//www"))
            {
                url = url.Replace("//www", "//");
                doRedirect = true;
            }

            // redirect if necessary
            if (doRedirect)
            {
                context.Response.Redirect(url);
            }
        }

    It usually works, but when submitting a form (sign-in, for example) the code above intercepts the request and then redirects to the same page. Example: try to arrive at the page ~/SignIn/Default.aspx; the request gets intercepted and fixed to ~/SignIn/. Fill in the form and click sign in: the current page URL goes from ~/SignIn/ to ~/SignIn/Default.aspx and gets "fixed" again, thus voiding the processing of the SignIn method (which would have redirected the browser to /SignIn/Success/), and the page is reloaded as ~/SignIn/ with no sign-in done. Please help; I'm not sure what or how to fix here. The main requirement is: remove /Default.aspx and //www from URLs. Thanks
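
    A common way out, sketched under the assumption that the rewrite is purely cosmetic: canonicalize GET requests only, so a POST to Default.aspx is left alone and the sign-in handler still runs (a redirect turns the POST into a GET and drops the form data).

        // at the top of Application_BeginRequest, before the checks above
        if (!string.Equals(context.Request.HttpMethod, "GET",
                           StringComparison.OrdinalIgnoreCase))
        {
            return; // never redirect form posts; the POST body would be lost
        }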

    Read the article

  • Ruby Design Problem for SQL Bulk Inserter

    - by crunchyt
    This is a Ruby design problem. How can I make a reusable flat-file parser that can perform different data-scrubbing operations per call, return the emitted results from each scrubbing operation to the caller, and perform bulk SQL insertions? Now, before anyone gets narky/concerned, I have written this code already, in a very unDRY fashion. Which is why I am asking any Ruby rockstars out there for some assistance. Basically, every time I want to perform this logic, I create two nested loops, with custom processing in between, buffer each processed line to an array, and output to the DB as a bulk insert when the buffer size limit is reached. Although I have written lots of helpers, the main pattern is being copy-pasted every time. Not very DRY! Here is a Ruby/pseudo-code example of what I am repeating:

        lines_from_file.each do |line|
          line.match(/some regex/).each do |sub_str|
            # Process substring into useful format
            # EG1: Simple gsub() call
            # EG2: Custom function call to do complex scrubbing
            #      and matching, emitting results to array
            # EG3: Loop to match opening/closing/nested brackets
            #      or other delimiters and emit results to array
          end

          # Add processed lines to a buffer as SQL insert statement
          @buffer << PREPARED INSERT STATEMENT

          # Flush buffer when "buffer size limit reached" or "end of file"
          if sql_buffer_full || last_line_reached
            @dbc.insert(SQL INSERTS FROM BUFFER)
            @buffer = nil
          end
        end

    I am familiar with Proc/lambda functions. However, because I want to pass two separate procs to the one function, I am not sure how to proceed. I have some idea about how to solve this, but I would really like to see what the real Rubyists suggest. Over to you. Thanks in advance :D
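
    For what it's worth, nothing stops a method from taking two (or more) procs as ordinary arguments; only the implicit block is limited to one per call. A rough sketch, with every name invented for illustration:

        # extractor: line -> array of raw substrings
        # scrubber:  raw substring -> processed value
        def bulk_load(lines, extractor, scrubber, batch_size = 500)
          buffer = []
          lines.each do |line|
            extractor.call(line).each do |raw|
              buffer << scrubber.call(raw)
            end
            if buffer.size >= batch_size
              @dbc.insert(build_insert(buffer)) # build_insert: your SQL helper
              buffer.clear
            end
          end
          @dbc.insert(build_insert(buffer)) unless buffer.empty? # flush the tail
        end

        # each call site supplies its own pair of procs
        bulk_load(lines_from_file,
                  lambda { |line| line.scan(/some regex/) },
                  lambda { |raw|  raw.gsub(/\s+/, ' ') })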

    Read the article

  • Trying to get django app to work with mod_wsgi on CentOS 5

    - by David
    I'm running CentOS 5, and am trying to get a Django application working with mod_wsgi. I'm using .wsgi settings I got working on Ubuntu. Here is the error:

        [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] SystemError: dynamic module not initialized properly
        [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] mod_wsgi (pid=23630): Target WSGI script '/data/hosting/cubedev/apache/django.wsgi' cannot be loaded as Python module.
        [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] mod_wsgi (pid=23630): Exception occurred processing WSGI script '/data/hosting/cubedev/apache/django.wsgi'.
        [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] Traceback (most recent call last):
        [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251]   File "/data/hosting/cubedev/apache/django.wsgi", line 8, in <module>
        [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251]     import django.core.handlers.wsgi
        [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251]   File "/opt/python2.6/lib/python2.6/site-packages/django/core/handlers/wsgi.py", line 1, in <module>
        [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251]     from threading import Lock
        [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251]   File "/opt/python2.6/lib/python2.6/threading.py", line 13, in <module>
        [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251]     from functools import wraps
        [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251]   File "/opt/python2.6/lib/python2.6/functools.py", line 10, in <module>
        [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251]     from _functools import partial, reduce
        [Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] SystemError: dynamic module not initialized properly

    And here is my .wsgi file:

        import os
        import sys

        os.environ['PYTHON_EGG_CACHE'] = '/tmp/django/'
        os.environ['DJANGO_SETTINGS_MODULE'] = 'cube.settings'
        sys.path.append('/data/hosting/cubedev')

        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()
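
    A hedged diagnostic note: "SystemError: dynamic module not initialized properly" raised from inside the standard library usually means mod_wsgi was compiled against a different Python than the one whose libraries it is now loading; on CentOS 5 the system Python is 2.4, while the traceback runs under /opt/python2.6. A way to check, and if needed rebuild mod_wsgi against the intended interpreter (the .so path is an assumption; yours may differ):

        # see which libpython mod_wsgi was linked against
        ldd /usr/lib/httpd/modules/mod_wsgi.so | grep python

        # rebuild mod_wsgi from source against the Python under /opt
        ./configure --with-python=/opt/python2.6/bin/python
        make && make install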

    Read the article

  • Does a CS PhD Help for Software Engineering Career?

    - by SiLent SoNG
    I would like to seek advice on whether or not to take a PhD offer from a good university. My only concern is that the PhD will take at least 4 years' commitment, during which I won't have a good monetary income. I am also concerned whether the PhD will help my future career development; my career goal is software engineering only. Some of the PhD info: the PhD is CS related; the research area is Information Retrieval, Machine Learning, and Natural Language Processing; more specifically, the research topic is Deep Web search. Some background: I worked at Oracle for 3 years in database development after obtaining a CS degree from a good university. Last year I received an email describing an interesting project from a professor, and thereafter I was lured into his research team. The team consists of 4 PhD students; those students have little or no industry experience and their coding skills are really, really bad. By bad I mean they do not know common patterns, and they do not know the pitfalls of their programming languages or the idioms for doing things right. I guess an at-least-4-year commitment is worth serious consideration. I am 27 at this moment. If I take the offer, that implies I will be 31+ upon graduation. Wah... becoming, what to say, no longer young. Hence, here I am seeking advice: is it good or not to take the PhD offer, and is a CS PhD good for my future career growth as a software engineer? I do not intend to go into academia.

    Read the article

  • Should java try blocks be scoped as tightly as possible?

    - by isme
    I've been told that there is some overhead in using the Java try-catch mechanism. So, while it is necessary to put methods that throw checked exceptions within a try block to handle the possible exception, it is supposedly good practice, performance-wise, to limit the size of the try block to contain only those operations that could throw exceptions. I'm not so sure that this is a sensible conclusion. Consider the two implementations below of a function that processes a specified text file. Even if it is true that the first one incurs some unnecessary overhead, I find it much easier to follow. It is less clear where exactly the exceptions come from just from looking at statements, but the comments clearly show which statements are responsible. The second one is much longer and more complicated than the first. In particular, the nice line-reading idiom of the first has to be mangled to fit the readLine call into a try block. What is the best practice for handling exceptions in a function where multiple exceptions could be thrown in its definition? This one contains all the processing code within the try block:

        void processFile(File f) {
            try {
                // construction of FileReader can throw FileNotFoundException
                BufferedReader in = new BufferedReader(new FileReader(f));

                // call of readLine can throw IOException
                String line;
                while ((line = in.readLine()) != null) {
                    process(line);
                }
            } catch (FileNotFoundException ex) {
                handle(ex);
            } catch (IOException ex) {
                handle(ex);
            }
        }

    This one contains only the methods that throw exceptions within try blocks:

        void processFile(File f) {
            FileReader reader;
            try {
                reader = new FileReader(f);
            } catch (FileNotFoundException ex) {
                handle(ex);
                return;
            }

            BufferedReader in = new BufferedReader(reader);
            String line;
            while (true) {
                try {
                    line = in.readLine();
                } catch (IOException ex) {
                    handle(ex);
                    break;
                }
                if (line == null) {
                    break;
                }
                process(line);
            }
        }

    Read the article

  • Point data structure for a sketching application

    - by bebraw
    I am currently developing a little sketching application based on the HTML5 Canvas element. There is one particular problem I haven't yet managed to find a proper solution for. The idea is that the user will be able to manipulate existing stroke data (points) quite freely. This includes pushing point data around (i.e. a magnet tool) and otherwise manipulating it at whim (i.e. altering color). Note that the current brush engine is able to shade by taking existing stroke data into account. It's a quick and dirty solution, as it just iterates the points in the current stroke and checks them against a distance rule. Now the problem is how to do this in a nice manner. It is extremely important to be able to perform efficient queries that return all points within a given canvas coordinate and radius. Other features, such as space usage, should be secondary to this. I don't mind doing some extra processing between strokes while the user is not painting. Any pointers are welcome. :)
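
    A common structure for "all points within (x, y, radius)" queries is a uniform grid (spatial hash): points are bucketed by cell, and a query only inspects the cells the query circle overlaps. A minimal JavaScript sketch, with the cell size left as a tuning assumption:

        function PointGrid(cellSize) {
          this.cellSize = cellSize;
          this.cells = {};          // "cx,cy" -> array of points
        }
        PointGrid.prototype.key = function (x, y) {
          return Math.floor(x / this.cellSize) + ',' + Math.floor(y / this.cellSize);
        };
        PointGrid.prototype.insert = function (point) {
          var k = this.key(point.x, point.y);
          (this.cells[k] || (this.cells[k] = [])).push(point);
        };
        PointGrid.prototype.query = function (x, y, radius) {
          var result = [], cs = this.cellSize;
          var minX = Math.floor((x - radius) / cs), maxX = Math.floor((x + radius) / cs);
          var minY = Math.floor((y - radius) / cs), maxY = Math.floor((y + radius) / cs);
          for (var cx = minX; cx <= maxX; cx++) {
            for (var cy = minY; cy <= maxY; cy++) {
              var bucket = this.cells[cx + ',' + cy] || [];
              for (var i = 0; i < bucket.length; i++) {
                var dx = bucket[i].x - x, dy = bucket[i].y - y;
                if (dx * dx + dy * dy <= radius * radius) result.push(bucket[i]);
              }
            }
          }
          return result;
        };

    Since the magnet tool moves points, a moved point must be re-inserted when it crosses a cell boundary; simply rebuilding the grid between strokes fits the stated "extra processing while the user is not painting" budget.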

    Read the article

  • How can I compare market data feed sources for quality and latency improvement?

    - by yves Baumes
    I am in the very first stages of implementing a tool to compare two market data feed sources, in order to prove the quality of newly developed sources to my boss (meaning there are no regressions and no missed or wrong updates), and to prove latency improvements. So the tool I need must be able to check differences between updates, as well as tell which source is the best (in terms of latency). Concretely, the reference source could be Reuters, while the other one is a feed handler we develop internally. People warned me that updates might not arrive in the same order, as Reuters' implementation could differ totally from ours. Therefore a simple algorithm based on the assumption that updates arrive in the same order is likely not to work. My very first idea was to use fingerprinting to compare the feed sources, as the Shazam application does to find the title of the tune you are submitting. Google told me it is based on FFT, and I was wondering if signal processing theory could work well with market access applications. I wanted to know your own experience in this field: is it possible to develop a reasonably accurate algorithm to meet the needs? What was your own idea? What do you think about fingerprint-based comparison?
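
    Since market data updates have exact content (unlike audio), a far simpler fingerprint may be enough before reaching for FFTs: hash the normalized fields of every update and match hashes across the two feeds, order-independently; each matched pair yields a latency sample, and hashes still unmatched after a timeout are missed or wrong updates. A rough C++ sketch; the field names, scaled-integer prices, and record_* helpers are all invented for illustration:

        #include <string>
        #include <unordered_map>
        #include <functional>

        // Content-only fingerprint; arrival time is deliberately excluded so
        // the same update hashes identically on both feeds.
        std::size_t fingerprint(const std::string& symbol, long long bid, long long ask) {
            // prices as scaled integers avoid float-formatting mismatches
            return std::hash<std::string>()(
                symbol + '|' + std::to_string(bid) + '|' + std::to_string(ask));
        }

        // fp -> arrival time (microseconds) on the reference feed
        std::unordered_map<std::size_t, long long> pending;

        // on a reference-feed update:  pending[fingerprint(...)] = now_us;
        // on our own feed's update:    look the fingerprint up; if found,
        //   record_latency(now_us - pending_time); if still unmatched after a
        //   timeout, record_missed_update() as a candidate regression.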

    Read the article

  • iPhone TabbarController Switch Transition

    - by user269737
    I've implemented gestures (touchesBegan/Moved/Ended) in order to allow swiping through my tabs. It works. I'd like to add a slide-from-left and a slide-from-right transition. It would be better if it could be part of the gesture if-statement which tells me whether the swipe is towards the right or the left. Since I determine which tab is displayed from that, I could show that specific transition along with the new tab. So my question is this: what's the simplest way to implement a slide transition at a specific instance? I don't want it to apply to the whole tab bar controller, since this is specifically for the swiping. Thanks for the help, much appreciated. For clarification purposes, this snippet shows how I'm switching tabs:

        if (abs(diffx / diffy) > 2.5 && abs(diffx) > HORIZ_SWIPE_DRAG_MIN) {
            // It appears to be a swipe.
            if (isProcessingListMove) {
                // ignore move, we're currently processing the swipe
                return;
            }
            if (mystartTouchPosition.x < currentTouchPosition.x) {
                isProcessingListMove = YES;
                self.tabBarController.selectedViewController =
                    [self.tabBarController.viewControllers objectAtIndex:0];
                return;
            } else {
                isProcessingListMove = YES;
                self.tabBarController.selectedViewController =
                    [self.tabBarController.viewControllers objectAtIndex:1];
                return;
            }
        }
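
    One way to get a slide only on the swipe path is a Core Animation transition (QuartzCore) pushed onto the tab bar controller's view layer right before changing selectedViewController; because it is added per gesture, ordinary tab taps stay un-animated. A sketch reusing the question's own direction test, with newIndex standing in for the 0/1 chosen above:

        #import <QuartzCore/QuartzCore.h>

        CATransition *slide = [CATransition animation];
        slide.type = kCATransitionPush;
        slide.duration = 0.3;
        // swipe towards the right -> new view slides in from the left, and vice versa
        slide.subtype = (mystartTouchPosition.x < currentTouchPosition.x)
                            ? kCATransitionFromLeft : kCATransitionFromRight;
        [self.tabBarController.view.layer addAnimation:slide forKey:nil];
        self.tabBarController.selectedViewController =
            [self.tabBarController.viewControllers objectAtIndex:newIndex];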

    Read the article

  • nginx multiple domain virtual host configuration

    - by Poe
    I'm setting up nginx with multiple-domain or wildcard support for convenience's sake, rather than setting up 50+ different sites-available/* files. Hopefully this is enough to show you what I'm trying to do. Some are static sites; some are dynamic, usually with WordPress installed. If an index.php exists, everything works as expected. If a file is requested that does not exist (missing.html), a 500 error is given due to the rewrite. The logged error is:

        *112 rewrite or internal redirection cycle while processing "/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/missing.html"

    The basic nginx configuration I'm currently using is:

        listen 80 default;
        server_name _;
        ...
        location / {
            root /var/www/$host;
            if (-f $request_filename) {
                expires max;
                break;
            }
            # problem: what if index.php does not exist?
            if (!-e $request_filename) {
                rewrite ^/(.*)$ /index.php/$1 last;
            }
        }
        ...

    If index.php does not exist, and the requested file also does not exist, I would like it to return a 404. Currently nginx does not support multiple-condition ifs or nested ifs, so I need a workaround.
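
    One workaround keeps the existence checks but breaks the cycle by refusing to rewrite when index.php itself is missing. A hedged sketch (try_files needs nginx 0.7.27+, and it assumes your usual `location ~ \.php$` fastcgi block picks up the rewritten request):

        location / {
            root /var/www/$host;
            try_files $uri $uri/ @fallback;
        }

        location @fallback {
            root /var/www/$host;
            # hand off to index.php only when it actually exists; otherwise 404
            if (!-f $document_root/index.php) {
                return 404;
            }
            rewrite ^ /index.php?q=$request_uri last;
        }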

    Read the article

  • Rails uploaded file blank

    - by Ceilingfish
    Hi chaps, I'm trying to upload a file to the server using an HTTP multipart form in Rails, and for some reason it's turning up blank at the other end. I can see it being received in the Rails log thusly:

        Processing Admin::HeadlinesController#update (for 127.0.0.1 at 2010-03-08 12:26:13) [PUT]
          Parameters: {"commit"=>"Save changes", "action"=>"update", "_method"=>"put",
            "authenticity_token"=>"mK70XRk5gOPUwXOcNboT/4K8PD9RBM7GqCOlEUKZwcA=",
            "headline"=>{"position"=>"1", "location"=>"primary", "attachment_id"=>"13",
              "headline_content"=>"questionnaires", "article_id"=>"3",
              "image"=>#<File:/tmp/RackMultipart20100308-63211-1vym9nj-0>},
            "id"=>"140", "controller"=>"admin/headlines"}

    But if I have a look in /tmp/RackMultipart20100308-63211-1vym9nj-0 the file is blank. Am I right in thinking that this should be the file that I uploaded? I'm running Phusion Passenger 2.2.7 on Apache 2.2.13, with Ruby 1.8.7 and Rails 2.3.5, on OS X 10.6.2
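
    One thing worth checking, offered as a hypothesis rather than a diagnosis: Rack's multipart tempfiles only live for the duration of the request, so inspecting the /tmp path afterwards can show an empty or recycled file even when the upload arrived intact. Reading the upload inside the action settles it either way; a sketch (the destination directory is made up):

        # inside Admin::HeadlinesController#update, while the request is in flight
        upload = params[:headline][:image]
        if upload
          Rails.logger.info "got #{upload.original_filename} (#{upload.size} bytes)"
          File.open("/data/uploads/#{upload.original_filename}", "wb") do |f|
            f.write(upload.read)
          end
        end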

    Read the article

  • Question about the evolution of interaction paradigm between web server program and content provider program?

    - by smwikipedia
    Hi experts, in my opinion the web server is responsible for delivering content to the client. If it is static content, like pictures and static HTML documents, the web server just delivers it directly as a bitstream. If it is dynamic content that is generated while processing the client's request, the web server will not generate the content itself but will call some external program to generate it. AFAIK, these dynamic content generation technologies include the following:
    - CGI
    - ISAPI
    - ...
    And from here, I noticed that "...In IIS 7, modules replace ISAPI filters...". Are there any others? Could anyone help me complete the above list, and elaborate on (or show some links about) their evolution? I think it would be very helpful for understanding applications such as IIS, Tomcat, and Apache. I once wrote a small CGI program, and though it serves as a content generator, it is still nothing but a normal standalone program; I call it normal because the CGI program has a main() entry point. But with recent technologies like ASP.NET, I am not writing a complete program, only a class library. Why did such a radical change happen? Many thanks.
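
    To make the "normal standalone program" point concrete, here is a minimal CGI content generator in C, a sketch of the classic model: the server passes request details through environment variables and stdin, launches the program, and sends whatever appears on stdout back to the client.

        #include <stdio.h>

        /* A complete CGI "content generator": an ordinary program with main().
           The web server spawns one such process per request, which is the
           per-request cost that ISAPI, servlets, and ASP.NET avoid by loading
           code into a long-lived host process instead. */
        int main(void) {
            printf("Content-Type: text/html\r\n\r\n");
            printf("<html><body>Hello from CGI</body></html>\n");
            return 0;
        }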

    Read the article

  • Rails 4, not saving @user.save when registering new user

    - by Yuichi
    When I try to register a user, it does not give me any error but just cannot save the user. I don't have attr_accessible. I'm not sure what I am missing. Please help me.

    user.rb:

        class User < ActiveRecord::Base
          has_secure_password

          validates :email, presence: true, uniqueness: true,
                    format: { with: /\A([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})\Z/i }
          validates :password, presence: true, length: { minimum: 6 }
          validates :nickname, presence: true, uniqueness: true
        end

    users_controller.rb:

        class UsersController < ApplicationController
          def new
            @user = User.new
          end

          def create
            @user = User.new(user_params)
            # Not saving @user ...
            if @user.save
              flash[:success] = "Successfully registered"
              redirect_to videos_path
            else
              flash[:error] = "Cannot create an user, check the input and try again"
              render :new
            end
          end

          private

          def user_params
            params.require(:user).permit(:email, :password, :nickname)
          end
        end

    Log:

        Processing by UsersController#create as HTML
          Parameters: {"utf8"=>"?", "authenticity_token"=>"x5OqMgarqMFj17dVSuA8tVueg1dncS3YtkCfMzMpOUE=", "user"=>{"email"=>"[email protected]", "password"=>"[FILTERED]", "nickname"=>"example"}, "commit"=>"Register"}
          (0.1ms) begin transaction
          User Exists (0.2ms) SELECT 1 AS one FROM "users" WHERE "users"."email" = '[email protected]' LIMIT 1
          User Exists (0.1ms) SELECT 1 AS one FROM "users" WHERE "users"."nickname" = 'example' LIMIT 1
          (0.1ms) rollback transaction
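
    The rollback right after the two uniqueness checks means a validation failed before any INSERT was attempted. A quick way to see which one, sketched against the else branch of create (a console call works too):

        # inside the else branch of UsersController#create
        Rails.logger.info @user.errors.full_messages.to_sentence

        # or from `rails console`:
        # User.create(email: "a@b.co", password: "secret", nickname: "n").errors.full_messages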

    Read the article

  • MySql Query lag time?

    - by Click Upvote
    When there are multiple PHP scripts running in parallel, each making an UPDATE query to the same record in the same table repeatedly, is it possible for there to be a 'lag time' before the table is updated with each query? I have basically 5-6 instances of a PHP script running in parallel, having been launched via cron. Each script gets all the records in the items table, and then loops through them and processes them. However, to avoid processing the same item more than once, I store the id of the last item being processed in a separate table. So this is how my code works:

        function getCurrentItem()
        {
            $sql = "SELECT currentItemId from settings";
            $result = $this->db->query($sql);
            return $result->get('currentItemId');
        }

        function setCurrentItem($id)
        {
            $sql = "UPDATE settings SET currentItemId='$id'";
            $this->db->query($sql);
        }

        $currentItem = $this->getCurrentItem();

        $sql = "SELECT * FROM items WHERE status='pending' AND id > '$currentItem'";
        $result = $this->db->query($sql);
        $items = $result->getAll();

        foreach ($items as $i) {
            // Check if $i has been processed by a different instance of the
            // script, and if so, leave it untouched.
            if ($this->getCurrentItem() > $i->id) continue;
            $this->setCurrentItem($i->id);

            // Process the item here
        }

    But despite all the precautions, most items are being processed more than once. This makes me think that there is some lag time between the update queries being run by the PHP script and when the database actually updates the record. Is that true? And if so, what other mechanism should I use to ensure that the PHP scripts always get the latest currentItemId, even when there are multiple scripts running in parallel? Would using a text file instead of the db help?
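
    The usual culprit is not lag but a race: two instances can both call getCurrentItem() before either one's setCurrentItem() lands, so both pass the check. A pattern that sidesteps the shared counter entirely is to claim each item with one atomic UPDATE and test the affected-row count; a sketch, where affectedRows() is an assumed method on the question's db wrapper:

        // Claim the item: only one instance can flip pending -> processing.
        $sql = "UPDATE items SET status='processing'
                WHERE id='{$i->id}' AND status='pending'";
        $this->db->query($sql);

        if ($this->db->affectedRows() == 1) { // affectedRows(): assumed wrapper method
            // we won the claim; process the item here
        }
        // 0 affected rows means another instance claimed it first: skip it.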

    Read the article

  • How to properly set relationships in Core Data when using setValue and data already exists

    - by ern
    Let's say I have two objects: Articles and Categories. For the sake of this example, all relevant categories have already been added to the data store. When looping through data that holds edits for articles, there is category relationship information that needs to be saved. I was planning on using the -setValue:forUndefinedKey: method in the Article class in order to set the relationships, like so:

        - (void)setValue:(id)value forUndefinedKey:(NSString *)key {
            if ([key isEqualToString:@"categories"]) {
                NSLog(@"trying to set categories...");
            }
        }

    The problem is that value isn't a Category; it is just a string (or an array of strings) holding the title of a category. I could certainly do a lookup within this method for each category and assign it, but that seems inefficient when processing a whole bunch of articles at once. Another option is to populate an array of all possible categories and just filter, but my question is where to store that array. Should it be a class method on Article? Is there a way to pass additional data to the -setValue: method? Is there another, better option for setting the relationship that I'm not thinking of? Thanks for your help.
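
    For batch imports, Core Data's usual pattern is to fetch the link targets once and index them by the attribute you match on, so each article does a dictionary lookup instead of a per-article fetch. A sketch assuming the entity is named Category and is matched by a "title" string attribute:

        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:[NSEntityDescription entityForName:@"Category"
                                       inManagedObjectContext:context]];
        NSArray *allCategories = [context executeFetchRequest:request error:NULL];
        [request release];

        // title -> Category, built once per import pass and reused throughout
        NSMutableDictionary *categoriesByTitle = [NSMutableDictionary dictionary];
        for (NSManagedObject *category in allCategories) {
            [categoriesByTitle setObject:category forKey:[category valueForKey:@"title"]];
        }
        // later, per article:
        // [article setValue:[categoriesByTitle objectForKey:title] forKey:@"category"];

    As for where to keep the cache: the importer object driving the loop is a more natural home than a class method on Article, since the cache's lifetime is exactly one import pass.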

    Read the article

  • can javascript process binary data?

    - by Johnny
    Let me describe my question in a situation-oriented way. Assume IE is still the dominant web browser (Firefox has documentation for binary processing): XMLHttpRequest.responseText and XMLHttpRequest.responseXML in IE expect text or xml/xhtml/html, but what about when the server answers the XMLHttpRequest with MIME type application/octet? Would every char of the response string be less than 256? Thanks very much for a straight answer; I have no web server environment, so I don't know how to test it out. The reason I ask: using txt or xml raises character-set encoding issues, and I don't know how to process a CDATA node of an encoded XML document (e.g. utf-8, ascii, gb18030) with JavaScript. When I get the node text, does the document object return me bytes, or characters decoded according to the charset indicated in the HTTP response header? If the latter, it would be all wrong. To avoid the charset mess, I would like the server to respond with octet data, forcing string data to be encoded as utf-8 and other charsets to stay in binary format. If the response is octet data, I guess the browser would not try to decode the response "text". Does this sound weird, or am I misunderstanding something fundamental?
    EDIT: I believe the question is asking this: can JavaScript safely process strings that aren't encoded in Unicode? What are the problems with trying to do so?
    EDIT: No, no, no. I mean: if the HTTP header Content-Type is "application/octet", will IE try to decode it as (16-bit Unicode | IE's local charset setting) when I read XMLHttpRequestObj.responseText from JavaScript? Or does IE just wrap every single byte of the response body into a JavaScript string, so that every char in that string is less than or equal to 256 (char <= 256)? Am I talking a Martian language? Sadly, if I were a Martian, I would come as a tourist without fuzzy questions. However, I am in a country which shares at least one property with Mars: RED.
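
    For reference, the technique the Firefox documentation describes (Gecko-only; IE instead exposes responseBody, which plain JScript cannot index without a VBScript or ADODB.Stream helper) is to force a charset that maps each response byte to exactly one character, then mask off the high bits. A sketch:

        var xhr = new XMLHttpRequest();
        xhr.open("GET", "/some/binary/resource", false); // placeholder URL
        // tell Gecko not to decode: each response byte becomes one char code
        xhr.overrideMimeType("text/plain; charset=x-user-defined");
        xhr.send(null);

        var text = xhr.responseText;
        for (var i = 0; i < text.length; i++) {
            var byteValue = text.charCodeAt(i) & 0xFF; // mask the 0xF700 padding
            // ... process byteValue (0-255) ...
        }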

    Read the article

  • Is there a better tool than postcat for viewing postfix mail queue files?

    - by Geekman
    So I got a call early this morning about a client needing to see what email they have waiting to be delivered sitting in our secondary mail server. Their link to the main server had been (and still is) down for two days, and they needed to see their email. So I wrote up a quick Perl script to use mailq in combination with postcat to dump each email for their address into separate files, tar'd it up and sent it off. Horrible code, I know, but it was urgent. My solution works OK in that it at least gives a raw view, but I thought tonight it would be nice if I had a solution where I could provide their email attachments, and maybe remove some "garbage" header text as well. Most of the important emails seem to have a PDF or similar attached. I've been looking around, but the only method of viewing queue files I can see is the postcat command, and I really don't want to write my own parser, so I was wondering if any of you have already done so, or know of a better command to use. Here's the code for my current solution:

        #!/usr/bin/perl

        $qCmd = "mailq | grep -B 2 \"someemailaddress@isp\" | cut -d \" \" -f 1";
        @data = split(/\n/, `$qCmd`);

        $i = 0;
        foreach $line (@data) {
            $i++;
            $remainder = $i % 2;
            if ($remainder == 0) {
                next;
            }
            if ($line =~ /\(/ || $line =~ /\n/ || $line eq "") {
                next;
            }
            print "Processing: " . $line . "\n";
            `postcat -q $line > $line.email.txt`;
            $subject = `cat $line.email.txt | grep "Subject:"`;
            #print "SUB" . $subject;
            #`cat $line.email.txt > \"$subject.$line.email.txt\"`;
        }

    Any advice appreciated.
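
    postcat is about the only way to read the queue-file format itself, but its output is an ordinary RFC 2822 message, and CPAN's MIME::Parser will split out the attachments for you. A hedged sketch of post-processing one queue entry ($line is the queue ID from the script above; the output directory is made up):

        use MIME::Parser;

        my $raw    = `postcat -q $line`;
        my $parser = MIME::Parser->new;
        $parser->output_under("/tmp/mailout");   # parts land in per-message dirs
        my $entity = $parser->parse_data($raw);

        print "Subject: ", $entity->head->get('Subject');
        for my $part ($entity->parts) {
            # decoded attachment bodies are already on disk at ->path
            print $part->mime_type, " -> ",
                  ($part->bodyhandle ? $part->bodyhandle->path : "(multipart)"), "\n";
        }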

    Read the article

  • Hierarchy of meaning

    - by asldkncvas
    I am looking for a method to build a hierarchy of words. Background: I am an "amateur" natural language processing enthusiast, and right now one of the problems that I am interested in is determining the hierarchy of word semantics from a group of words. For example, if I have a set which contains a "super" representation of the others, i.e. [cat, dog, monkey, animal, bird, ...], I am interested in using any technique which would allow me to extract the word 'animal', which is the most meaningful and accurate representation of the other words inside this set. Note: they are NOT the same in meaning. cat != dog != monkey != animal, BUT cat is a subset of animal and dog is a subset of animal. I know by now a lot of you will be telling me to use WordNet. Well, I will try to, but I am actually interested in a very specific domain to which WordNet doesn't apply, because: 1) most of the words are not found in WordNet; 2) all the words are in another language (translation is possible, but to limited effect). Another example would be [noise reduction, focal length, flash, functionality, ...], where functionality includes everything in this set. I have also tried crawling Wikipedia pages and applying some techniques with tf-idf etc., but Wikipedia pages don't really do much either. Can someone possibly enlighten me as to what direction my research should go towards? (I could use anything.)
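
    For what it's worth, the relation being described is hypernymy, and wherever a WordNet-style taxonomy does cover the vocabulary, extracting the "super" word amounts to finding the lowest common ancestor in the is-a graph; that is also the operation a domain-specific resource would need to support. A sketch with NLTK's WordNet interface:

        from nltk.corpus import wordnet as wn

        # lowest common hypernym of 'cat' and 'dog' in the noun taxonomy
        cat = wn.synset('cat.n.01')
        dog = wn.synset('dog.n.01')
        print(cat.lowest_common_hypernyms(dog))
        # [Synset('carnivore.n.01')] -- adding monkey and bird to the set
        # pushes the common ancestor further up, towards animal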

    Read the article

  • Boost binding a function taking a reference

    - by Jamie Cook
    Hi all, I am having problems compiling the following snippet:

        int temp;
        vector<int> origins;
        vector<string> originTokens = OTUtils::tokenize(buffer, ","); // buffer is a char[] array

        // original loop
        BOOST_FOREACH(string s, originTokens) {
            from_string(temp, s);
            origins.push_back(temp);
        }

        // I'd like to use this to replace the above loop
        std::transform(originTokens.begin(), originTokens.end(), origins.begin(),
                       boost::bind<int>(&FromString<int>, boost::ref(temp), _1));

    where the function in question is

        // the third parameter should be one of std::hex, std::dec or std::oct
        template <class T>
        bool FromString(T& t, const std::string& s,
                        std::ios_base& (*f)(std::ios_base&) = std::dec)
        {
            std::istringstream iss(s);
            return !(iss >> f >> t).fail();
        }

    The error I get is:

        1>Compiling with Intel(R) C++ 11.0.074 [IA-32]... (Intel C++ Environment)
        1>C:\projects\svn\bdk\Source\deps\boost_1_42_0\boost/bind/bind.hpp(303): internal error: assertion failed: copy_default_arg_expr: rout NULL, no error (shared/edgcpfe/il.c, line 13919)
        1>
        1>    return unwrapper<F>::unwrap(f, 0)(a[base_type::a1_], a[base_type::a2_]);
        1>    ^
        1>
        1>icl: error #10298: problem during post processing of parallel object compilation

    Google is being unusually unhelpful, so I hope that someone here can provide some insights.
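
    Two things are worth noting independently of the compiler's internal error: binding &FromString<int> makes the transform write FromString's bool return value into origins, and origins.begin() on an empty vector is not a valid output destination. A small function object plus back_inserter sidesteps both, and also avoids binding through the defaulted third parameter (a C++03-style sketch):

        #include <algorithm>
        #include <iterator>

        struct ParseInt {
            int operator()(const std::string& s) const {
                int value = 0;
                FromString(value, s);   // the std::dec default applies as usual here
                return value;           // transform stores the parsed int itself
            }
        };

        std::transform(originTokens.begin(), originTokens.end(),
                       std::back_inserter(origins), ParseInt());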

    Read the article

  • Using hashing to group similar records

    - by Neil Dobson
    I work for a fulfillment company and we have to pack and ship many orders from our warehouse to customers. To improve efficiency we would like to group identical orders and pack them in the most efficient way. By identical I mean having the same number of order lines, containing the same SKUs, and the same order quantities. To achieve this I was thinking about hashing each order; we can then group by hash to quickly see which orders are the same. We are moving from an Access database to a PostgreSQL database, and we have .NET-based systems for data loading and general order processing, so we can either do the hashing during the data loading or hand the task over to the DB. My question, firstly, is whether the hashing should be managed by the DB, possibly using triggers, or whether the hash should be created on the fly using a view or something. And secondly, would it be best to calculate a hash for each order line and then combine these to find an order-level hash for grouping, or should I just use a trigger for all CRUD operations on the order lines table which re-calculates a single hash for the entire order and stores the value in the orders table? TIA
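
    If the DB does the work, no trigger is strictly required: an order-level hash can be computed on the fly from the order lines, as long as the lines are sorted so that identical orders always hash alike. A PostgreSQL sketch with assumed table and column names (string_agg needs 9.0+; on 8.x, array_to_string(array_agg(...)) can stand in):

        SELECT order_id,
               md5(string_agg(sku || 'x' || quantity::text, ',' ORDER BY sku)) AS order_hash
        FROM   order_lines
        GROUP  BY order_id;

        -- identical orders then group together, e.g. wrap the query above in a
        -- view and GROUP BY order_hash HAVING count(*) > 1 to find packable sets

    Wrapping the query in a view keeps the hash always correct; a trigger-maintained column on the orders table trades insert/update cost for faster grouping. Per-line hashes combined afterwards only help if you also need to diff orders line by line.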

    Read the article

  • Twitter api - no more than 150 requests per hour....

    - by RenegadeAndy
    Hi. I am writing a Twitter app using JTwitter, and it's running on a server inside my workplace. Whenever I run it from work it returns the error below, even though I am only making a couple of requests per hour:

        HTTP/1.1 400 Bad Request
        {"request":"/1/statuses/user_timeline.json?count=6&id=cicsdemo&","error":"Rate limit exceeded. Clients may not make more than 150 requests per hour."}
        ]
        2010-06-03 18:44:49 zero.timer.TimerTask::run Thread-3 SEVERE [
        CWPZA3100E: Exception during processing for timer task, "twitterTimer". Exception: java.lang.ClassCastException: winterwell.jtwitter.Twitter$Status incompatible with java.lang.String
        ]

    I run the same code from home and it's fine. So obviously at some point Twitter sees all our work traffic as coming from one direct IP, which is why it's hitting a limit it shouldn't. Do I have any choice or workaround? Can I make the limit be counted against my own machine's IP, or against my account instead of the IP? Can I use a proxy? Has anybody else had this problem and solved it? Before anyone asks: the app must live inside my workplace; it cannot run anywhere else! Cheers, Andy

    Read the article

  • How do I use a custom cookie session serializer in Rack?

    - by Damien Wilson
    Hello SO. I'm currently integrating Warden into a new Rack application I'm building. I'd like to use a recent patch to Rack that allows me to specify how sessions are serialized; specifically, I'd like to use Rack::Session::Cookie::Identity as the session processor. Unfortunately, the documentation is a little unclear as to what syntax I should use to configure Rack::Session::Cookie in my rackup file. Can anyone here tell me what I'm doing wrong?

    config.ru:

        require 'my_sinatra_app'

        app = self
        use Rack::Session::Cookie.new(app, Rack::Session::Cookie::Identity.new),
            {:key => "auth_token"}
        use Warden::Manager do |warden|
          # Must come AFTER Rack::Session
          warden.default_strategies :password
          warden.failure_app Jelli::Auth.run!
        end

        run MySinatraApp

    error message from thin:

        !! Unexpected error while processing request: undefined method `new' for #<Rack::Session::Cookie:0x00000110124128>

    PS: I'm using Bundler to manage my gem dependencies, and I've likewise included Rack's master branch as the desired version.
    Update: As suggested in the comments below, I have read the documentation; sadly, the suggested syntax in the docs is not working.
    Update: Still no luck on my end; offering up a bounty to whoever can help me figure this out.
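
    For what it's worth, Rack's `use` expects a middleware class plus its arguments; the builder calls .new itself, which squares with the "undefined method `new'" error (the builder received an already-built instance and tried to instantiate it again). Under the patched Rack, the serializer travels in the :coder option, so a plausible form (hedged, since that API lives on master and may still shift) is:

        # config.ru -- let `use` instantiate the class; pass options, not instances
        use Rack::Session::Cookie,
            :key   => "auth_token",
            :coder => Rack::Session::Cookie::Identity.new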

    Read the article
