Search Results

Search found 6947 results on 278 pages for 'annotation processing'.


  • How do I use a custom cookie session serializer in Rack?

    - by Damien Wilson
    Hello SO. I'm currently integrating Warden into a new Rack application I'm building. I'd like to implement a recent patch to Rack that allows me to specify how sessions are serialized; specifically, I'd like to use Rack::Session::Cookie::Identity as the session processor. Unfortunately, the documentation is a little unclear as to what syntax I should use to configure Rack::Session::Cookie in my rackup file. Can anyone here tell me what I'm doing wrong?

    config.ru:

        require 'my_sinatra_app'
        app = self
        use Rack::Session::Cookie.new(app, Rack::Session::Cookie::Identity.new), {:key => "auth_token"}
        use Warden::Manager do |warden| # Must come AFTER Rack::Session
          warden.default_strategies :password
          warden.failure_app Jelli::Auth.run!
        end
        run MySinatraApp

    Error message from thin:

        !! Unexpected error while processing request: undefined method `new' for #<Rack::Session::Cookie:0x00000110124128>

    PS: I'm using bundler to manage my gem dependencies and I've likewise included rack's master branch as the desired version.

    Update: As suggested in the comments below, I have read the documentation; sadly the suggested syntax in the docs is not working.

    Update: Still no luck on my end; offering up a bounty to whoever can help me figure this out.
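
    For reference, the thin error points at the config.ru line: `use` expects the middleware class (it calls .new on it for you), so passing an already-instantiated Rack::Session::Cookie makes Rack call .new on an instance. A hedged guess at the intended syntax, assuming the patch exposes the serializer through a :coder option the way later Rack releases do (not verified against the exact patched branch):

        # config.ru - minimal sketch
        require 'my_sinatra_app'

        use Rack::Session::Cookie,
            :key   => "auth_token",
            :coder => Rack::Session::Cookie::Identity.new   # assumption: serializer passed as :coder

        use Warden::Manager do |warden|   # must come AFTER Rack::Session
          warden.default_strategies :password
          warden.failure_app Jelli::Auth.run!
        end

        run MySinatraApp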

  • Multiple websites, Single sign-on design

    - by Yannis
    Hi all, I have a question. A client I have been doing some work for recently has a range of websites with different login mechanisms. He is looking to slowly migrate to a single sign-on mechanism for his websites (all written in ASP.NET MVC). I am looking at my options here, so here is a list of requirements:

    1) It has to be secure (duh).
    2) It needs to support extra user properties over and above the usual name and address stuff (such as money or credits for a user).
    3) It has to provide a centralized user management web console for his convenience (I understand that this will be a small project on top of whatever design solution I choose to go for).
    4) It has to integrate with the existing websites without re-engineering the whole product (I understand that this depends on the current product implementation).
    5) It has to deal with emailing the user when he registers (in order for him to activate his account).
    6) It has to deal with activating the user when he clicks the "activate me" link in the email (I understand that 5 and 6 require some form of email templating system to support different emails per application).

    I was thinking of creating a library working together with forms authentication that exposes whatever methods are required (e.g. login, logout, activate, etc.) and a small RESTful service to implement activation from email, registration processing, etc. Taking into account that loads of things have been left out to make this question brief and to the point, does this sound like a good design? And since this looks like a very common problem, aren't there any existing projects that I could use? Thanks for reading.

  • Assign sage variable values into R objects via sagetex and Sweave

    - by sheed03
    I am writing a short Sweave document that outputs into a Beamer presentation, in which I am using the sagetex package to solve an equation for two parameters of the beta binomial distribution, and I need to assign the parameter values into the R session so I can do additional processing on those values. The following code excerpt shows how I am interacting with sage:

        <<echo=false,results=hide>>=
        mean.raw <- c(5, 3.5, 2)
        theta <- 0.5
        var.raw <- mean.raw + ((mean.raw^2)/theta)
        @

        \begin{frame}[fragile]
        \frametitle{Test of Sage 2}
        \begin{sagesilent}
        var('a1, b1, a2, b2, a3, b3')
        eqn1 = [1000*a1/(a1+b1)==\Sexpr{mean.raw[1]}, ((1000*a1*b1)*(1000+a1+b1))/((a1+b1)^2*(a1+b1+1))==\Sexpr{var.raw[1]}]
        eqn2 = [1000*a2/(a2+b2)==\Sexpr{mean.raw[2]}, ((1000*a2*b2)*(1000+a2+b2))/((a2+b2)^2*(a2+b2+1))==\Sexpr{var.raw[2]}]
        eqn3 = [1000*a3/(a3+b3)==\Sexpr{mean.raw[3]}, ((1000*a3*b3)*(1000+a3+b3))/((a3+b3)^2*(a3+b3+1))==\Sexpr{var.raw[3]}]
        s1 = solve(eqn1, a1,b1)
        s2 = solve(eqn2, a2,b2)
        s3 = solve(eqn3, a3,b3)
        \end{sagesilent}
        Solutions of Beta Binomial Parameters:
        \begin{itemize}
        \item $\sage{s1[0]}$
        \item $\sage{s2[0]}$
        \item $\sage{s3[0]}$
        \end{itemize}
        \end{frame}

    Everything compiles just fine, and on that slide I am able to see the solutions to the three equations' respective parameters in the itemized list (for example, the first item in the itemized list from that beamer slide is output as [a1=(328/667), b1=(65272/667)]; I am not able to post an image of the beamer slide, but I hope you get the idea). I would like to save the parameter values a1, b1, a2, b2, a3, b3 into R objects so that I can use them in simulations. I cannot find any documentation in the sagetex package on how to save output from sage commands into variables for use with other programs (in this case R). Any suggestions on how to get these values into R?
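
    One hedged workaround, assuming \begin{sagesilent} will run ordinary Sage/Python file I/O and that the document is compiled in the R session's working directory: have Sage dump the solved values to a plain text file, then read that file back in a later Sweave chunk. The file name and variable handling below are made up for illustration:

        \begin{sagesilent}
        # after s1, s2, s3 have been computed
        f = open('bb_params.txt', 'w')
        for s in (s1[0], s2[0], s3[0]):
            f.write(' '.join(str(eq.rhs().n()) for eq in s) + '\n')
        f.close()
        \end{sagesilent}

        <<echo=false,results=hide>>=
        params <- read.table("bb_params.txt", col.names = c("a", "b"))  # one row per equation
        @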

  • looking to streamline my RSS feed mashup

    - by Mark Cejas
    Hello crafty developers, I have aggregated RSS feeds from various sources with RSSOwl, fetching directly from the Social Mention API. The RSS feeds are categorized into the following major categories: blogs, news, Twitter, Q&A and social networking sites. Each major category is nested with a common group of RSS feeds that represent a particular client/brand ontology. Merging these feeds into the RSSOwl reader application allows me to conduct and save refined search queries (from the aggregated data) into a single file that I can then tag and further segment for analysis. This scheme is utilized for my own research needs and has helped me considerably.

    However, I find this RSS mashup scheme kind of clumsy: it requires quite a bit of time to initially organize all of the feeds, and I would like to be able to do further natural language processing on the data, as well as eventually be able to rank the collected list of URLs into some order of media prominence. Right now I don't want to pay the ridiculous Radian6 web analytics fees, when my intuition is telling me that with a bit of 'elbow grease' I can maybe leverage some available resources online to develop a functional low-scale web mining application and get some good intelligence from it. I am now starting to learn a little about computer science - my background is in physical science/statistics - so is my thinking on the right track?

    So, I guess I am imagining an application that allows me to query in a refined manner: a manner that allows me to search for keyword combinations, applying AND/OR operators, selectively focus my queries on particular sources - like a collection of blogs, or Twitter, or social networking communities - then save the results of my queries into a structured format that can then be manipulated and explored. Am I dreaming? I just had to get all of this out. Any bit of advice and insight would be hugely appreciated. My best, Mark

  • gpg error - connection already closed?

    - by OopsForgotMyOtherUserName
    omg... hope someone can help me because I am so lost as to what to try next... I don't know what is causing the error to happen, and I don't see how to figure it out... I keep going between the pgloader.conf examples and what I have, and I don't understand why I keep getting the 'connection already closed' error. The first few lines of my fr.conf are at the very end... I'd really appreciate / love some guidance here... Been trying to get this thing going all morning, and am even getting stuck just on this part...

    Running this command at the command line:

        /usr/bin/pgloader -c /var/mybin/pgconfs/fr.conf

    yields this in pgloader.log (with the process just hanging):

        more /tmp/pgloader.log
        27-03-2010 12:22:53 pgloader INFO Logger initialized
        27-03-2010 12:22:53 pgloader INFO Reformat path is ['/usr/share/python-support/pgloader/reformat']
        27-03-2010 12:22:53 pgloader INFO Will consider following sections:
        27-03-2010 12:22:53 pgloader INFO fixed
        27-03-2010 12:22:54 fixed INFO fixed processing
        27-03-2010 12:22:54 pgloader INFO All threads are started, wait for them to terminate
        27-03-2010 12:22:57 fixed ERROR connection already closed
        27-03-2010 12:22:57 fixed INFO closing current database connection

    fr.conf:

        [pgsql]
        host = localhost
        port = 5432
        base = frdb
        user = username
        pass = password

        [fixed]
        table = fr
        format = fixed
        filename = /var/www/fr.txt
        ...

  • Event feed implementation - will it scale?

    - by SlappyTheFish
    Situation: I am currently designing a feed system for a social website whereby each user has a feed of their friends' activities. I have two possible methods of generating the feeds and I would like to ask which is better in terms of ability to scale. Events from all users are collected in one central database table, event_log. Users are paired as friends in the table friends. The RDBMS we are using is MySQL.

    Standard method: When a user requests their feed page, the system generates the feed by inner joining event_log with friends. The result is then cached and set to time out after 5 minutes. Scaling is achieved by varying this timeout.

    Hypothesised method: A task runs in the background and, for each new, unprocessed item in event_log, it creates entries in the database table user_feed, pairing that event with all of the users who are friends with the user who initiated the event. One table row pairs one event with one user.

    The problems with the standard method are well known: what if a lot of people's caches expire at the same time? The solution also does not scale well; the brief is for feeds to update as close to real-time as possible.

    The hypothesised solution in my eyes seems much better: all processing is done offline, so no user waits for a page to generate, and there are no joins, so database tables can be sharded across physical machines. However, if a user has 100,000 friends and creates 20 events in one session, then that results in inserting 2,000,000 rows into the database.

    Question: The question boils down to two points: Is the worst-case scenario mentioned above problematic, i.e. does table size have an impact on MySQL performance, and are there any issues with this mass inserting of data for each event? Is there anything else I have missed?
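
    For what it's worth, the fan-out in the hypothesised method can be expressed as one set-based statement instead of millions of individual INSERTs, which MySQL handles far more efficiently. A minimal sketch, assuming hypothetical columns friends(user_id, friend_id) and event_log(id, user_id, processed); wrap the pair in a transaction (or select the unprocessed ids first) to avoid racing with newly arriving events:

        -- Sketch only: column names are placeholders for the real schema.
        INSERT INTO user_feed (user_id, event_id)
        SELECT f.friend_id, e.id
        FROM event_log AS e
        JOIN friends   AS f ON f.user_id = e.user_id
        WHERE e.processed = 0;

        UPDATE event_log SET processed = 1 WHERE processed = 0;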

  • Pre-populate iPhone Safari SQLite DB

    - by Matt Rogish
    I'm working with a PhoneGap app that uses Safari local storage (a SQLite DB) via Javascript: http://developer.apple.com/safari/library/documentation/iPhone/Conceptual/SafariJSDatabaseGuide/UsingtheJavascriptDatabase/UsingtheJavascriptDatabase.html

    On first load, the app creates the database and tables, and populates the data via a series of INSERT statements. If the user closes the app while this processing is happening, then my app database is left in an inconsistent state.

    What I'd prefer to do is deploy the SQLite DB as part of my iTunes App packaging so nothing must be populated at app cold start. However, I'm not sure if that is possible - all of the Google hits for this topic that I can find refer to the Core Data-provided SQLite, which is not what we're using... If it's not possible, could I wrap the entire thing in a transaction and keep re-trying it when the app is restarted? Failing that, I guess I can create a simple table with one boolean column "is_app_db_loaded?" and set it to true after I've processed all my inserts. But that's really gross... Ideas? Thanks!!
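
    On the transaction idea: the JavaScript database API in the linked guide (Web SQL) runs executeSql calls inside a transaction callback and rolls the whole transaction back if any statement fails, so the seed either completes or leaves nothing behind. A minimal sketch, where seedRows and the items table are made up for illustration:

        // Sketch only: seeds all rows atomically; a killed app leaves either a
        // fully populated table or an untouched one, so it is safe to retry.
        var db = openDatabase('appdb', '1.0', 'App data', 5 * 1024 * 1024);

        db.transaction(function (tx) {
          tx.executeSql('CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)');
          seedRows.forEach(function (row) {
            tx.executeSql('INSERT INTO items (id, name) VALUES (?, ?)', [row.id, row.name]);
          });
        }, function (err) {
          // Transaction failed: nothing was committed; retry on next launch.
          console.log('seed failed, will retry: ' + err.message);
        });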

  • Work around for MessageNotReadableException in Java

    - by Hari
    Hi, I am building a small API around the JMS API for a project of mine. Essentially, we are building code that will handle the connection logic, and will simplify publishing messages by providing a method like Client.send(String message). One of the ideas being discussed right now is that we provide a means for the users to attach interceptors to this client. We will apply the interceptors after preparing the JMS message and before publishing it. For example, if we wanted to timestamp a message and wrote an interceptor for that, then this is how we would apply it:

        ...some code...
        Message message = session.createMessage()
        // ..do all the current processing on the message and set the body
        for (interceptor : listOfInterceptors) {
            interceptor.apply(message)
        }

    One of the interceptors we thought of was to compress the message body. But when we try to read the body of the message in the interceptor, we are getting a MessageNotReadableException. In the past, I normally compressed the content before setting it as the body of the message - so I never had to worry about this exception. Is there any way of getting around this exception?
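
    For what it's worth, MessageNotReadableException is what JMS throws when a BytesMessage or StreamMessage is still in write-only mode; calling reset() switches the body to read-only mode and rewinds it, and clearBody() switches it back to write-only so new content can be written. A sketch of a compressing interceptor built on that, assuming the client produces a BytesMessage (the interceptor shape is the hypothetical one from the question):

        import java.util.zip.Deflater;
        import javax.jms.BytesMessage;
        import javax.jms.Message;

        // Sketch only: error handling omitted, and the output buffer is sized
        // naively - real code should loop until deflater.finished().
        class CompressionInterceptor {
            void apply(Message message) throws Exception {
                BytesMessage msg = (BytesMessage) message;
                msg.reset();                                  // write-only -> read-only, rewound
                byte[] body = new byte[(int) msg.getBodyLength()];
                msg.readBytes(body);

                Deflater deflater = new Deflater();
                deflater.setInput(body);
                deflater.finish();
                byte[] out = new byte[body.length];
                int n = deflater.deflate(out);

                msg.clearBody();                              // back to write-only mode
                msg.writeBytes(out, 0, n);                    // replace body with compressed bytes
            }
        }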

  • Passing params in rails... Breakthrough in understanding params, aka, the "aha" moment

    - by Brian McDonough
    I have a simple has_many / belongs_to relationship and I want to include the parent object in the views of the belongs_to model. I have had some success, but I want it to work better.

        class Submission < ActiveRecord::Base
          belongs_to :user
          belongs_to :contest
        end

        class Contest < ActiveRecord::Base
          belongs_to :user
          has_many :submissions, :dependent => :destroy
        end

    In the case that works, I pass contest_id to submissions by placing it in the url:

        <%= link_to 'Submit Contest Entry', new_submission_path(:contest_id => @contest.id), :class => 'btn btn-primary btn-large mleft10' %>

    That, combined with a hidden field:

        <%= f.hidden_field :contest_id %>

    and a find_contest method in the controller (called with a before_filter):

        def find_contest
          # the next line is giving the error (line 76)
          @contest = Contest.find(params[:submission][:contest_id])
        end

    makes it work for submissions/new. But how do I add a find to the controller that just works across more than that one page, like if I want to access it in show and index? Right now, I get an error:

        Started GET "/submissions?contest_id=5" for 127.0.0.1 at 2012-12-01 16:01:45 -0800
        Processing by SubmissionsController#index as HTML
          Parameters: {"contest_id"=>"5"}
          User Load (0.3ms)  SELECT "users".* FROM "users" WHERE "users"."id" = 2 ORDER BY users.created_at DESC LIMIT 1
        Completed 500 Internal Server Error in 37ms

        NoMethodError (undefined method `[]' for nil:NilClass):
          app/controllers/submissions_controller.rb:76:in `find_contest'

    [edited] Adding the show action for submissions:

        before_filter :find_contest, :except => [:index, :show, :edit, :update, :destroy]

        def find_contest
          @contest = Contest.find(params[:submission][:contest_id])
        end

        def show
          contest_id = @submission.contest_id
          @submission = @commentable = Submission.find(params[:id])
          @comments = @commentable.comments.order(:created_at).reverse
          respond_to do |format|
            format.html # show.html.erb
            format.json { render :json => @submission }
          end
        end
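
    The log above shows why line 76 blows up on the index request: the id arrives as a top-level parameter ("contest_id"=>"5"), not nested under "submission", so params[:submission] is nil. A hedged sketch of a find_contest that copes with both shapes:

        # Sketch only: prefers the top-level param, falls back to the nested one.
        def find_contest
          contest_id = params[:contest_id] ||
                       (params[:submission] && params[:submission][:contest_id])
          @contest = Contest.find(contest_id) if contest_id
        end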

  • How to use R's ellipsis feature when writing your own function?

    - by Ryan Thompson
    The R language has a nifty feature for defining functions that can take a variable number of arguments. For example, the function data.frame takes any number of arguments, and each argument becomes the data for a column in the resulting data table. Example usage:

        > data.frame(letters=c("a", "b", "c"), numbers=c(1,2,3), notes=c("do", "re", "mi"))
          letters numbers notes
        1       a       1    do
        2       b       2    re
        3       c       3    mi

    The function's signature includes an ellipsis, like this:

        function (..., row.names = NULL, check.rows = FALSE, check.names = TRUE,
                  stringsAsFactors = default.stringsAsFactors())
        {
            [FUNCTION DEFINITION HERE]
        }

    I would like to write a function that does something similar, taking multiple values and consolidating them into a single return value (as well as doing some other processing). In order to do this, I need to figure out how to "unpack" the ... from the function's arguments within the function. I don't know how to do this. The relevant line in the function definition of data.frame is object <- as.list(substitute(list(...)))[-1L], which I can't make any sense of.

    So how can I convert the ellipsis from the function's signature into, for example, a list? To be more specific, how can I write get_list_from_ellipsis in the code below?

        my_ellipsis_function <- function(...) {
          input_list <- get_list_from_ellipsis(...)
          output_list <- lapply(X=input_list, FUN=do_something_interesting)
          return(output_list)
        }
        my_ellipsis_function(a=1:10, b=11:20, c=21:30)
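
    In plain R, list(...) evaluates the arguments in ... and collects them, names included, into a list; the substitute() incantation inside data.frame is only there to capture the unevaluated expressions. A minimal sketch (sum stands in for do_something_interesting):

        get_list_from_ellipsis <- function(...) {
          list(...)   # a named list: $a, $b, $c in the example call below
        }

        my_ellipsis_function <- function(...) {
          input_list <- get_list_from_ellipsis(...)
          lapply(input_list, sum)
        }

        my_ellipsis_function(a = 1:10, b = 11:20, c = 21:30)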

  • How to implement message queuing and handling in AWS with NServiceBus

    - by Pete Lunenfeld
    I am creating a new ASP MVC order application in the Amazon (AWS) cloud with the persistence layer at my local datacenter. I will be using the CQRS pattern. The goal of the project is high availability, using queue(s) to store and forward writes (commands/events) that can be picked up and handled asynchronously at my local datacenter. Then, if the WAN or my local datacenter fails, my cloud MVC app can still take orders and just queue them up until processing can resume.

    My first thought was to use AWS SQS for the queuing and create my own queue consumer/dispatcher/handler in my own C# application to process the incoming messages/events:

        MVC (@ Amazon) -- Event/POCO -- SQS -- QueueReader (@ my datacenter) -- DB

    Then I found NServiceBus. NSB seems to handle lots of details very nicely: message handling, retries, error handling, etc. I hate to reinvent the wheel, and NServiceBus seems like a full-featured and mature product that would be perfect for me. But on further research, it does NOT look like NServiceBus is really meant to be used over the WAN in physically separated environments (cloud to my datacenter). Google and SO don't really paint a good picture of using NServiceBus across the WAN like I need. How can I use NServiceBus across the WAN? Or is there a better solution to handle queuing and message handling between Amazon and my local datacenter?

  • collection_select not working as expected

    - by kgb
    First time I've come to use collection_select in a project and I've hit a wall with it. A Profile has_one Team, Team has_many Profiles. In my view for editing profiles I have this:

        <td><%= f.collection_select(:team_id, @team, :id, :title) %></td>

    which populates the drop-down with the titles of teams as expected. The couple of examples I have read seem to use it in a very similar way. I can't figure out why, when the profile is saved, it isn't populating the team_id field in my DB. In the development log the team_id is being passed:

        Processing ProfilesController#update (for 127.0.0.1 at 2010-03-28 22:49:16) [PUT]
          Parameters: {"commit"=>"Update", "profile"=>{"dob(1i)"=>"2010", "second_name"=>"",
          "dob(2i)"=>"3", "role"=>"", "dob(3i)"=>"28", "project"=>"", "specialties"=>"",
          "about"=>"", "team_id"=>"1", "status"=>"", "first_name"=>""},
          "authenticity_token"=>"sdTiFPGj9JCO3OEge5EGNGxZbQSsq9ME5LP342EBjyc=", "id"=>"3"}

    The update controller action is the standard scaffold one; this has worked fine for all other additions to the profile model I'd made previously. Am I missing something obvious?
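
    Since team_id clearly reaches the controller but never reaches the row, one guess (offered as an assumption, not a diagnosis) is Rails mass-assignment protection: if the Profile model whitelists attributes, update_attributes silently drops anything not listed. Note too that a team_id column on profiles normally goes with belongs_to :team. A sketch of what the model would need in that case:

        # Sketch only - relevant if Profile already uses attr_accessible.
        class Profile < ActiveRecord::Base
          belongs_to :team
          attr_accessible :first_name, :second_name, :role, :project,
                          :specialties, :about, :status, :dob,
                          :team_id   # easy to forget when the column is added later
        end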

  • Parameters lost on login form submitted with post in JBoss

    - by Supowski
    My login page contains two forms: one for logging in and one for signing up. Shortened, it looks like this:

        <form name="LoginForm" id="LoginForm" method="post" action="j_security_check" >
          <input type="text" name="j_username" />
          <input type="text" name="j_password" />
          <input type="submit" value="Sing In" />
        <form>

        <form name="SignUpForm" id="SignUpForm" method="post" action="/homepage" >
          <input type="text" name="loginName" />
          <input type="password" name="password1" />
          <input type="password" name="password2" />
          <input type="submit" value="Sing Up" />
        <form>

    If I submit the sign-up form, the values of the inputs are lost during processing and not passed into my Servlet. It works fine with GET, but then the password appears in the URL, so that's not the right solution :) A similar problem has been posted to community.jboss.org, but with no response: http://community.jboss.org/message/7150#7150. I'm using JBoss 4.3eap. Any help?

  • The CLR has been unable to transition from COM context [...] for 60 seconds

    - by BlueRaja The Green Unicorn
    I am getting this error on code that used to work. I have not changed the code. Here is the full error:

        The CLR has been unable to transition from COM context 0x3322d98 to COM context 0x3322f08 for 60 seconds.
        The thread that owns the destination context/apartment is most likely either doing a non pumping wait or
        processing a very long running operation without pumping Windows messages. This situation generally has a
        negative performance impact and may even lead to the application becoming non responsive or memory usage
        accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads
        should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during
        long running operations.

    And here is the code that caused it:

        var openFileDialog1 = new System.Windows.Forms.OpenFileDialog();
        openFileDialog1.DefaultExt = "mdb";
        openFileDialog1.Filter = "Management Database (manage.mdb)|manage.mdb";

        // Stalls indefinitely on the following line, then gives the CLR error
        // one minute later. The dialog never opens.
        if (openFileDialog1.ShowDialog() == DialogResult.OK)
        {
            ....
        }

    Yes, I am sure the dialog is not open in the background, and no, I don't have any explicit COM code or unmanaged marshalling or multithreading. I have no idea why the OpenFileDialog won't open - any ideas?
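
    One guess, offered as an assumption rather than a diagnosis: the common dialogs are COM-based and need to be shown from a single-threaded apartment (STA) thread that pumps messages. If this call has ended up on an MTA or non-pumping worker thread (the error text above is the classic symptom), forcing the dialog onto a dedicated STA thread is a quick way to test that theory. A minimal sketch:

        using System;
        using System.Threading;
        using System.Windows.Forms;

        // Sketch only: shows the OpenFileDialog on its own STA thread.
        static string PromptForDatabase()
        {
            string chosenPath = null;
            var t = new Thread(() =>
            {
                using (var dlg = new OpenFileDialog())
                {
                    dlg.DefaultExt = "mdb";
                    dlg.Filter = "Management Database (manage.mdb)|manage.mdb";
                    if (dlg.ShowDialog() == DialogResult.OK)
                        chosenPath = dlg.FileName;
                }
            });
            t.SetApartmentState(ApartmentState.STA);   // common dialogs require STA
            t.Start();
            t.Join();
            return chosenPath;
        }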

  • SPRING: How do you programmatically instantiate classes based on information passed from Flex UI

    - by babyangel86
    Imagine the UI passes back an XML node such as this:

        <properties>
          <type> Source </type>
          <name> Blooper </name>
          <delay>
            <type> Deterministic </type>
            <parameters>
              <param> 4 </param>
            </parameters>
          </delay>
          <batch>
            <type> Erlang </type>
            <parameters>
              <param> 4 </param>
              <param> 6 </param>
            </parameters>
          </batch>
        </properties>

    and behind the scenes what it is asking is that you instantiate a class as such:

        new Source("blooper", new Exp(4), new Erlang(4,6));

    The problem lies in the fact that you don't know in advance what class you will need to process, and you will be sent a list of these class definitions with instructions on how they can be linked to each other. I've heard that using a BeanFactoryPostProcessor might be helpful, or a property editor/convertor. However, I am at a loss as to how best to use them to solve my problem. Any help you can provide will be much appreciated.

  • Object Oriented Perl interface to read from and write to a socket

    - by user654967
    I need a Perl client-server implementation as a wrapper for a server in C#. A Perl script passes the server address, port number and an input string to a module; this module has to create the socket and send the input string to the server. The data sent has to follow ISO-8859-1 encoding. On receiving the information, the client has to first receive 3 bytes, then the next 8 bytes; these hold the length of the data that has to be received next, so based on that length the client has to read the next data. Each piece of data that is read has to be stored in a variable and sent to another module for further processing. Currently this is what my Perl client looks like, which I'm sure isn't right. Could someone tell me how to do this and set me on the right direction?

        sub WriteInfo {
          my ($addr, $port, $Input) = @_;
          $socket = IO::Socket::INET->new(
            Proto    => "tcp",
            PeerAddr => $addr,
            PeerPort => $port,
          );
          unless ($socket) { die "cannot connect to remote" }
          while (1) {
            $socket->send($Input);
          }
        }

        sub ReadData {
          while (1) {
            my $ExecutionResult = $socket->recv( $recv_data, 3);
            my $DataLength = $socket->recv( $recv_data, 8);
            $DataLength =~ s/^0+// ;
            my $decval = hex($DataLength);
            my $Data = $socket->recv( $recv_data, $decval);
            return($Data);
          }
        }

    thanks a lot.. Archer
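
    One detail worth flagging: recv() puts the received bytes into the buffer you pass ($recv_data here) and returns the peer address, so $ExecutionResult, $DataLength and $Data above never hold the payload. A hedged sketch of the read side that reads exact byte counts and treats the 8-byte length field as the hex string the hex() call implies:

        use strict;
        use warnings;

        # Sketch only: loop until exactly $count bytes have arrived.
        sub read_exact {
            my ($socket, $count) = @_;
            my $buf = '';
            while (length($buf) < $count) {
                my $n = $socket->read($buf, $count - length($buf), length($buf));
                die "connection closed while reading" unless defined $n && $n > 0;
            }
            return $buf;
        }

        sub ReadData {
            my ($socket) = @_;
            my $result    = read_exact($socket, 3);   # 3-byte execution result
            my $len_field = read_exact($socket, 8);   # 8-byte length, e.g. "0000001a"
            my $length    = hex($len_field);
            my $data      = read_exact($socket, $length);
            return ($result, $data);
        }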

  • protocol parsing in c

    - by nomad.alien
    I have been playing around with trying to implement some protocol decoders, but each time I run into a "simple" problem and I feel the way I am solving it is not optimal and there must be a better way to do things. I'm using C. Currently I'm using some canned data and reading it in as a file, but later on it would be via TCP or UDP.

    Here's the problem. I'm currently playing with a binary protocol at work. All fields are 8 bits long. The first field (8 bits) is the packet type. So I read in the first 8 bits and, using a switch/case, I call a function to read in the rest of the packet, as I then know its size/structure. BUT... some of these packets have nested packets inside them, so when I encounter that specific packet I then have to read another 8-16 bytes, have another switch/case to see what the next packet type is, and on and on. (Luckily the packets are only nested 2 or 3 deep.) Only once I have the whole packet decoded can I hand it over to my state machine for processing.

    I guess this can be a more general question as well: how much data do you have to read at a time from the socket? As much as possible? As much as what is "similar" in the protocol headers?

    So even though this protocol is fairly basic, my code is a whole bunch of switch/case statements and I do a lot of reading from the file/socket, which I feel is not optimal. My main aim is to make this decoder as fast as possible. To the more experienced people out there, is this the way to go or is there a better way which I just haven't figured out yet? Any elegant solution to this problem?
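
    One common shape for this kind of decoder, sketched under assumptions about the format (one type byte, one handler per type): read the stream into a buffer in large chunks, then let a table of function pointers replace the switch/case; each handler consumes its own bytes from the buffer and can call the dispatcher again for nested packets. All the names below are made up for illustration:

        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Sketch only: a cursor over a buffer filled in large chunks by fread()/recv(). */
        typedef struct {
            const uint8_t *data;
            size_t         len;
            size_t         pos;
        } cursor_t;

        typedef int (*packet_handler)(cursor_t *cur);

        /* Hypothetical handlers for two made-up packet types. */
        static int handle_type_a(cursor_t *cur) { cur->pos += 4; return 0; }  /* pretend 4-byte body */
        static int handle_type_b(cursor_t *cur) { cur->pos += 8; return 0; }  /* pretend 8-byte body */

        /* One table lookup replaces the outer switch/case; a handler may call
         * decode_one_packet() again on the same cursor for nested packets.   */
        static packet_handler handlers[256] = {
            [0x01] = handle_type_a,
            [0x02] = handle_type_b,
        };

        int decode_one_packet(cursor_t *cur)
        {
            if (cur->pos >= cur->len)
                return -1;                      /* out of buffered data; refill from the socket */
            uint8_t type = cur->data[cur->pos++];
            packet_handler h = handlers[type];
            if (h == NULL) {
                fprintf(stderr, "unknown packet type 0x%02x\n", type);
                return -1;
            }
            return h(cur);
        }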

  • MS Access (Jet) transactions, workspaces & scope

    - by Eric G
    I am having trouble with committing a transaction (using Access 2003 DAO). It's acting as if I never had called BeginTrans - I get error 3034 on CommitTrans, "You tried to commit or rollback a transaction without first beginning a transaction", and the changes are written to the database (presumably because they were never wrapped in a transaction). However, BeginTrans is run, if you step through it.

    I am running it within the Access environment using the DBEngine(0) workspace. The tables I'm updating are all opened via a Jet database connection (to the same database) and updated using DAO.Recordset.Update. The connection is opened before BeginTrans is called. I'm not doing anything weird in the middle of the transaction like closing/opening connections or multiple workspaces, etc. There is one nested transaction level (basically it's wrapping multiple transacted updates in an outer transaction, so if any fail they all fail). The inner transactions run without errors; it's the outer transaction that doesn't work.

    Here are a few things I've looked into and ruled out:

    - The transaction is spread across several methods, and BeginTrans, CommitTrans and Rollback are all in different places. But when I tried a simple test of running a transaction this way, it doesn't seem like this should matter.
    - I thought maybe the database connection gets closed when it goes out of local scope, even though I have another 'global' reference to it (I'm never sure what DAO does with database connections, to be honest). But this seems not to be the case - right before the commit, the connection and its recordsets are alive (I can check their properties, EOF = False, etc.).
    - My CommitTrans and Rollback are done within event callbacks. (Very basically, a parser program is throwing an 'onLoad' or 'onLoadFail' event at the end of parsing, which I am handling by either committing or rolling back the inserts I made during processing.) However, again, trying a simple test, it doesn't seem like this should matter.

    Any ideas why this isn't working for me? Thanks.

  • How to cope with null results in SQL Tasks that return single rows in SSIS 2005?

    - by JSacksteder
    In a Data Flow task, I can slip a Row Count into the processing flow and place the count into a variable. I can later use that variable to conditionally perform some other work if the row count was 0. This works well for me, but I have no corresponding strategy for SQL tasks expected to return a single row. In that event, I'm returning those values into variables. If the lookup produces no rows, the SQL task fails when assigning values into those variables.

    I can branch on that component failing, but there's a side effect of that: if I'm running the job as a SQL Server Agent job step, the step returns DTSER_FAILURE, causing the step to fail. I can tell the SQL Agent to disregard the step failure, but then I won't know if I have a legitimate error in that step. This seems harder than it should be.

    The only strategy I can think of is to run the same query with a count(*) aggregate, test if that returns a number > 0, and if so run the query again without the count. That's ugly because I have the same query in two places that I need to keep in sync. Is there a better way?
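
    A hedged alternative that avoids keeping two copies of the query in sync: rewrite the single-row lookup so it always returns exactly one row, supplying defaults when there is no match, so the Execute SQL Task's variable assignment never fails. The table and column names below are placeholders:

        -- Sketch only: an aggregate query with no GROUP BY always returns one row,
        -- and COALESCE fills in a default when the lookup finds nothing.
        SELECT
            COALESCE(MAX(t.SomeValue), 0) AS SomeValue,
            COUNT(*)                      AS MatchCount   -- branch on 0 downstream
        FROM dbo.SomeLookupTable AS t
        WHERE t.SomeKey = ?;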

  • How to properly load HTML data from third party website using MVC+AJAX?

    - by Dmitry
    I'm building an ASP.NET MVC2 website that lets users store and analyze data about goods found on various online trade sites. When a user is filling in a form to create or edit an item, he should have a button "Import data" that automatically fills some fields based on data from a third-party website. The question is: what should this button do under the hood? I see at least 2 possible solutions.

    First: do the import on the client side using AJAX + jQuery's load method. I tried it in IE8 and received a browser warning popup about insecure script actions. Of course, that is completely unacceptable.

    Second: add a method ImportData(string URL) to the ItemController class. It is called via AJAX, does the import and data processing server-side, and returns the result as JSON to the client. I tried it and received a server exception - (503) Server unavailable - when loading the HTML data into an XMLDocument. Also I have a feeling that dealing with not-well-formed HTML (missing closing tags, etc.) will be a huge pain.

    Any ideas how to parse such HTML documents?
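
    A sketch of the second option, under two assumptions: the 503 often comes from the remote site rejecting requests that carry no User-Agent header (or from proxy settings), and third-party HTML is rarely valid XML, so XmlDocument will keep choking - a tolerant HTML parser (HtmlAgilityPack is the usual choice) or targeted string extraction is normally used instead. All names here are illustrative:

        using System.Net;
        using System.Web.Mvc;

        public class ItemController : Controller
        {
            [HttpPost]
            public JsonResult ImportData(string url)
            {
                using (var client = new WebClient())
                {
                    // Some sites return 403/503 to clients that send no User-Agent.
                    client.Headers[HttpRequestHeader.UserAgent] = "Mozilla/5.0 (import bot)";
                    string html = client.DownloadString(url);

                    // Placeholder extraction: in real code, parse 'html' with a
                    // tolerant HTML parser rather than XmlDocument, then map the
                    // pieces you need onto the object returned as JSON.
                    var imported = new { title = ExtractTitle(html) };
                    return Json(imported);
                }
            }

            // Hypothetical helper: naive <title> extraction stands in for real parsing.
            private static string ExtractTitle(string html)
            {
                var match = System.Text.RegularExpressions.Regex.Match(
                    html, @"<title[^>]*>(.*?)</title>",
                    System.Text.RegularExpressions.RegexOptions.IgnoreCase |
                    System.Text.RegularExpressions.RegexOptions.Singleline);
                return match.Success ? match.Groups[1].Value.Trim() : null;
            }
        }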

  • Spring Integration 1.0 RC2: Streaming file content?

    - by gdm
    I've been trying to find information on this, but due to the immaturity of the Spring Integration framework I haven't had much luck. Here is my desired work flow:

    1. New files are placed in an 'Incoming' directory.
    2. Files are picked up using a file:inbound-channel-adapter.
    3. The file content is streamed, N lines at a time, to a 'Stage 1' channel, which parses the line into an intermediary (shared) representation.
    4. This parsed line is routed to multiple 'Stage 2' channels.
    5. Each 'Stage 2' channel does its own processing on the N available lines to convert them to a final representation. This channel must have a queue which ensures no Stage 2 channel is overwhelmed in the event that one channel processes significantly slower than the others.
    6. The final representation of the N lines is written to a file. There will be as many output files as there were routing destinations in step 4.

    *'N' above stands for any reasonable number of lines to read at a time, from [1, whatever I can fit into memory reasonably], but is guaranteed to always be less than the number of lines in the full file.

    How can I accomplish streaming (steps 3, 4, 5) in Spring Integration? It's fairly easy to do without streaming the files, but my files are large enough that I cannot read the entire file into memory. As a side note, I have a working implementation of this work flow without Spring Integration, but since we're using Spring Integration in other places in our project, I'd like to try it here to see how it performs and how the resulting code compares for length and clarity.

  • MMS2R and Multiple Images Rails

    - by Maletor
    Here's my code:

        require 'mms2r'

        class IncomingMailHandler < ActionMailer::Base
          ##
          # Receives email(s) from MMS-Email or regular email and
          # uploads that content the user's photos.
          # TODO: Use beanstalkd for background queueing and processing.
          def receive(email)
            begin
              mms = MMS2R::Media.new(email)
              ##
              # Ok to find user by email as long as activate upon registration.
              # Remember to make UI option that users can opt out of registration
              # and either not send emails or send them to a [email protected]
              # type address.
              ##
              # Remember to get SpamAssasin
              if (@user = User.find_by_email(email.from) && email.has_attachments?)
                mms.media.each do |key, value|
                  if key.include?('image')
                    value.each do |file|
                      @user.photos.push Photo.create!(:uploaded_data => File.open(file),
                        :title => email.subject.empty? ? "Untitled" : email.subject)
                    end
                  end
                end
              end
            ensure
              mms.purge
            end
          end
        end

    And here's my error:

        /usr/local/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/commands/runner.rb:48: undefined method `photos' for true:TrueClass (NoMethodError)
          from /usr/home/xxx/app/models/incoming_mail_handler.rb:23:in `each'
          from /usr/home/xxx/app/models/incoming_mail_handler.rb:23:in `receive'
          from /usr/home/xxx/app/models/incoming_mail_handler.rb:21:in `each'
          from /usr/home/xxx/app/models/incoming_mail_handler.rb:21:in `receive'
          from /usr/local/lib/ruby/gems/1.8/gems/actionmailer-2.3.4/lib/action_mailer/base.rb:419:in `receive'
          from (eval):1
          from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `eval'
          from /usr/local/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/commands/runner.rb:48
          from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
          from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
          from /home/xxx/script/runner:3

    I sent an email to the server with two image attachments. Upon receiving the email, the server runs:

        | ruby /xxx/script/runner 'IncomingMailHandler.receive STDIN.read'

    What is going on? What am I doing wrong? MMS2R docs are here: http://mms2r.rubyforge.org/mms2r/
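
    The "undefined method `photos' for true:TrueClass" hints at operator precedence: in the if line, && binds more tightly than the assignment, so @user is assigned the boolean result of the whole expression rather than the User record. A sketch of the separated version, with the rest of the handler left as-is:

        # Sketch: assign first, then test, so @user holds the User (or nil), not a boolean.
        @user = User.find_by_email(email.from)
        if @user && email.has_attachments?
          mms.media.each do |key, value|
            # ... unchanged from the original handler ...
          end
        end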

  • Android listing design problem with cursors

    - by Priyank
    Hi. I have the following situation in my Android app. The app fetches messages from inbox, sent items and drafts based on search keywords. I used to accomplish this by fetching cursors for each manually, based on the user's selection, populating them into a custom data holder object, filtering those results based on the given keywords, and then manually rendering the view with the respective data.

    Someone suggested that I should use a custom CursorAdapter to bind the view and my cursor data, so I tried doing that. Now what I am doing is this: fetch individual cursors for inbox, sent items and drafts, merge them into one using MergeCursor, and then pass that back to my CursorAdapter implementation.

    Now where or how do I filter my cursor data based on keywords? Binding means the rows are rendered directly to the views in the list. Also, some post-fetching operations, like fetching the sender's contact pic, are things that I do not want to move to the adapter. If I do all this processing in the adapter, it'll be heavy and ugly. How could I design this better, so that it performs well and the responsibilities are shared and distributed? Any ideas will be helpful.

  • Framework or design pattern for mailing all users of a webapp

    - by Todd Owen
    My app takes care of user registration (with the option to receive email announcements), and can easily handle the actual template-based rendering of email for a given user. JavaMail provides the mail transport layer. But how should I design the application layer between the business objects (e.g. User) and the mail transport?

    The straightforward approach would be a simple, synchronous loop: iterate through the users, queue the emails, and be done with it. "Queue" might mean sending them straight to the MTA (mail server), or to an in-memory queue to be consumed by another thread. However, I also plan to implement features like throttling the rate of emails, processing bounced emails (NDRs), and maintaining status across application restarts. My intuition is that a good design would decouple this from both the business layer and the mail transport layer as much as possible.

    I wondered if others had solved this problem before, but after much searching I haven't found any Java libraries which seem to fit this problem. Standalone mail apps such as James or list servers are too large in scope; packages like Spring's MailSender or Commons Email are too small in scope (being basically drop-in replacements for JavaMail). For other languages I haven't found anything appropriate either. I'm curious about how other developers have gone about adding bulk mailing to their applications.

  • XMLHttpRequest returns null on Chrome

    - by BoltBait
    I have the following code that works fine in IE:

        <HTML>
        <BODY>
        <script language="JavaScript">
          text = "";
          req = new XMLHttpRequest();
          if (req) {
            req.onreadystatechange = processStateChange;
            req.open("GET", "http://www.boltbait.com", true);
            req.send();
          }

          function processStateChange() {
            // is the data ready for use?
            if (req.readyState == 4) {
              // process my data
              alert(req.status);
              alert(req.responseText);
            }
          }
        </script>
        </BODY>
        </HTML>

    In IE, the first alert returns 200 and the second returns the web page. However, in Chrome the first alert returns 0 and the second returns the empty string. My intent is to grab a web page into a string for processing. If I'm not doing this right, how should I be doing this? Thanks.
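
    A status of 0 with an empty responseText is what Chrome reports when the same-origin policy blocks a cross-domain XMLHttpRequest (the page making the request is not served from www.boltbait.com); IE's behaviour here has historically been more permissive. The usual workaround when you control a server is to fetch the remote page server-side and let the browser request it from your own origin. A sketch, with the /proxy path made up for illustration:

        // Sketch only: /proxy?url=... is a hypothetical same-origin endpoint that
        // fetches the remote page server-side and echoes it back.
        var req = new XMLHttpRequest();
        req.onreadystatechange = function () {
          if (req.readyState == 4 && req.status == 200) {
            var pageText = req.responseText;   // the remote page as a string
            // ...process pageText here...
          }
        };
        req.open("GET", "/proxy?url=" + encodeURIComponent("http://www.boltbait.com"), true);
        req.send();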
