Search Results

Search found 7077 results on 284 pages for 'concurrent processing'.


  • Can't parse a 1904 date in ARPA format (email date)

    - by Ramon
    I'm processing an IMAP mailbox and running into trouble parsing the dates using the mxDateTime package. In particular, an early date like "Fri, 1 Jan 1904 00:43:25 -0400" causes trouble:

        >>> import mx.DateTime
        >>> import mx.DateTime.ARPA
        >>> mx.DateTime.ARPA.ParseDateTimeUTC("Fri, 1 Jan 1904 00:43:25 -0400").gmtoffset()
        Traceback (most recent call last):
          File "<interactive input>", line 1, in <module>
        Error: cannot convert value to a time value
        >>> mx.DateTime.ARPA.ParseDateTimeUTC("Thu, 1 Jan 2009 00:43:25 -0400").gmtoffset()
        <mx.DateTime.DateTimeDelta object for '-08:00:00.00' at 1497b60>
        >>>

    Note that an almost identical date from 2009 works fine. I can't find any description of date limitations in mxDateTime itself. Any ideas why this might be? Thanks, Ramon

    Read the article

  • Computer Vision application(+web interface) for face detection and recognition from database

    - by Kush
    My project is a computer vision Java application which should implement the following: a web interface through which form entries plus images (for example, student data) are stored in a MySQL database, with the images saved to a directory shared with my Java application. The data and images can then be retrieved by my Java GUI application, where I want to perform image-processing operations with OpenCV. Specifically, I want to run face detection on the retrieved images and discard the false entries (no proper face). The application user/admin should also be able to search for an image either by text (by ID) or by a reference image, using face recognition. I am familiar with Java, but I need guidance on how to organise all of this in a stepwise manner (links appreciated). OpenCV, PHP and MySQL together get messy. I know doing the OpenCV work inside Java adds real overhead, but I really want to do it that way. If there is a better approach, though, please point me to it. Any kind of help is a ray of hope for me. Thanks.
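    For illustration, a minimal sketch of the detect-and-discard step, assuming the OpenCV 3.x Java bindings (in 2.4 the imread call lives in Highgui instead of Imgcodecs); the class name and cascade path here are placeholders, not part of the original question:

        import org.opencv.core.Core;
        import org.opencv.core.Mat;
        import org.opencv.core.MatOfRect;
        import org.opencv.imgcodecs.Imgcodecs;
        import org.opencv.objdetect.CascadeClassifier;

        // Returns true if at least one face is found, so obvious false entries can be discarded.
        public class FaceCheck {
            static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); } // native OpenCV library must be on java.library.path

            public static boolean hasFace(String imagePath, String cascadePath) {
                Mat image = Imgcodecs.imread(imagePath);
                if (image.empty()) return false;                              // unreadable file counts as a false entry
                CascadeClassifier detector = new CascadeClassifier(cascadePath); // e.g. haarcascade_frontalface_default.xml
                MatOfRect faces = new MatOfRect();
                detector.detectMultiScale(image, faces);                      // fill 'faces' with detected face rectangles
                return faces.toArray().length > 0;
            }
        }

    A recognition step for the search-by-reference-image feature would be layered on separately; this sketch covers only the detection pass over retrieved images.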

    Read the article

  • A copy of ApplicationController has been removed from the module tree but is still active

    - by Matchu
    Whenever two concurrent HTTP requests go to my Rails app, the second always returns the following error: "A copy of ApplicationController has been removed from the module tree but is still active!" From there it gives an unhelpful stack trace to the effect of "we went through the standard server stuff, ran your first before_filter on ApplicationController" (and I checked; it's just whichever filter runs first), then offers the following:

        /home/matchu/rails/torch/vendor/rails/activesupport/lib/active_support/dependencies.rb:414:in `load_missing_constant'
        /home/matchu/rails/torch/vendor/rails/activesupport/lib/active_support/dependencies.rb:96:in `const_missing'

    which I'm assuming is a generic response and doesn't really say much. Google seems to tell me that people developing Rails Engines encounter this, but I don't do that. All I've done is upgrade my Rails app from 2.2 (2.1?) to 2.3. What are some possible causes of this error, and how can I go about tracking down what's really going on? I know this question is vague, so would any other information be helpful? More importantly: I tried a test run in a "production" environment just now, and the error doesn't seem to occur there. Does this only affect development, then, and need I not worry too much?

    Read the article

  • Problem creating different types of columns in a WinForms gridview

    - by Royson
    My Windows Forms application has a grid view control with Filename as a default column. The user should be able to create a column of any of the following types: Text, Number, Currency, Combo Box, Check Box, Radio Button, Date Time (should display a DateTimePicker control) and Hyperlink. After that I want to pass all rows to the next screen for further processing. We can create columns of these types in a grid view, but how can I store them in a DataTable so that I can pass it to the next screen? Or should I create the columns in a DataTable and then bind it with gridview.DataSource = dt;? But can we even create these kinds of columns in a DataTable?

    Read the article

  • Performance Tricks for C# Logging

    - by Charles
    I am looking into C# logging and I do not want my log messages to spend any time being processed if the message is below the logging threshold. The best I can see log4net doing is a threshold check AFTER evaluating the log parameters. Example:

        _logger.Debug("My complicated log message " + thisFunctionTakesALongTime() + " will take a long time");

    Even if the threshold is above Debug, thisFunctionTakesALongTime will still be evaluated. In log4net you are supposed to use _logger.IsDebugEnabled, so you end up with:

        if (_logger.IsDebugEnabled)
            _logger.Debug("Much faster");

    I want to know if there is a better solution for .NET logging that does not involve a check each time I want to log. In C++ I am allowed to do:

        LOG_DEBUG("My complicated log message " + thisFunctionTakesALongTime() + " will take no time");

    since my LOG_DEBUG macro does the log level check itself. This frees me to have one-line log messages throughout my app, which I greatly prefer. Does anyone know of a way to replicate this behavior in C#?

    Read the article

  • Multiple video overlay - need advice

    - by Marvin
    Hi, I'm working on a project and I need some advice. Just some background: I'm not a programmer, though at times I do some fiddling and I am generally comfortable with the more specific terms. Now for the actual issue: I have a folder with 10 small videos (4-7 seconds max each) and I would like to display them full screen, looping and overlaid. I'm not too sure what I should be looking at. I thought maybe Processing, but my most serious issue is that I can't even ask for help properly, since I don't know what I need. Thank you for your time.

    Read the article

  • How to pre-process CSV data for FasterCSV?

    - by Katherine Chalmers
    We're having a significant number of problems creating a bulk upload function for our little app. We're using the FasterCSV gem to upload data to a MySQL database, but FasterCSV is so twitchy and precise in its requirements that it constantly breaks with malformed CSV errors and timeout errors. The CSV files are generally created by users pasting text from their web sites or from Microsoft Word docs, so it is not reasonable to expect that there will never be odd characters like smart quotes or accents in the data. Also, users aren't going to be readily able to identify whether their data is perfect enough for FasterCSV or not. We need to find a way to fix it for them automatically. Is there a good way, or a reliable tool, for pre-processing CSV data to fix any nits in the data before having the FasterCSV gem process it?

    Read the article

  • Verify my form workflow

    - by Shackrock
    I have a form with some sensitive info (CC numbers). My workflow is:

    1. One page takes all the form items.
    2. Upon submission, the values are validated. If all is well, all data is stored in a session variable, and the page reloads and displays this info from the session variable.
    3. If everything is OK on the review page, the user clicks submit and the session variable is sent to another form for processing (sending payment).
    4. Upon success, the session is destroyed.
    5. Upon failure (bad CC number, for example), the user is sent back to the form with all of the fields filled in just like before, so that they can check for errors and try again (the session is NOT destroyed).

    Does anyone see anything wrong with this, from a security or best-practices standpoint?

    UPDATE: I'm thinking I can get rid of a step and avoid ever storing the info in a session. Just have a one-page checkout, no review page... makes sense.

    Read the article

  • What are the pros and cons of using an in-memory DB rather than a ThreadLocal?

    - by Pangea
    We have been using ThreadLocal so far to carry some data, so as not to clutter the API. However, below are some of the issues with using a ThreadLocal that I don't like:

    1. Over the years, the number of data items being carried in the ThreadLocal has increased.
    2. Since we started using threads (for some lightweight processing), we have also been migrating this data to the threads in the pool and copying it back again.

    I am thinking of using an in-memory DB for these instead (we don't want to add this to the API). I am wondering if this approach is good. What are the pros and cons? Thanks in advance.
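    For context, a minimal sketch of the pattern described above, with a hypothetical RequestContext holder (not from the original post), showing why ThreadLocal values have to be copied into pooled worker threads and cleaned up afterwards:

        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        // Hypothetical context holder carried in a ThreadLocal instead of being passed through the API.
        public class RequestContext {
            private static final ThreadLocal<RequestContext> CURRENT = new ThreadLocal<>();
            public String userId;                                  // over time more fields like this tend to accumulate

            public static void set(RequestContext ctx) { CURRENT.set(ctx); }
            public static RequestContext get()         { return CURRENT.get(); }
            public static void clear()                 { CURRENT.remove(); }

            // Wrap a task so the submitting thread's context is copied into the pooled thread and cleared afterwards.
            public static <T> Callable<T> propagate(Callable<T> task) {
                RequestContext snapshot = get();                   // captured on the submitting thread
                return () -> {
                    set(snapshot);                                 // install on the worker thread
                    try { return task.call(); }
                    finally { clear(); }                           // pooled threads are reused, so always clean up
                };
            }

            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(2);
                RequestContext ctx = new RequestContext();
                ctx.userId = "user-42";
                set(ctx);
                String seen = pool.submit(propagate(() -> get().userId)).get();
                System.out.println(seen);                          // prints user-42
                pool.shutdown();
            }
        }

    An in-memory DB keyed by a request or task id removes this copy-in/copy-out step, at the cost of passing that id around and managing the rows' lifecycle yourself.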

    Read the article

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database, and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            (
                [ContainerId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            ))

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted based on the primary key. The %Disk Time performance counter spends a lot of time at 100%, so it is clear that disk I/O is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I:

    1. Drop the primary key while I am doing the inserting and recreate it later?
    2. Do inserts into a temporary table with the same schema and periodically transfer them into the main table, to keep the size of the table where insertions are happening small?
    3. Anything else?

    Based on the responses I have gotten, let me clarify a little bit:

    Portman: I'm using a clustered index because when the data is all imported I will need to access it sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for the import?

    Chopeen: The data is being generated remotely on many other machines (my SQL Server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output.

    Jason: I am not doing any concurrent queries against the table during the import process. I will try dropping the primary key and see if that helps. ~ Andrew

    Read the article

  • Tarballing without git metadata

    - by zaf
    My source tree contains several directories which are under git source control, and I need to tarball the whole tree, excluding any references to the git metadata or custom log files. I thought I'd have a go using a combination of find/egrep/xargs/tar, but somehow the tar file still contains the .git directories and the *.log files. This is what I have:

        find -type f . | egrep -v '\.git|\.log' | xargs tar rvf ~/app.tar

    Can someone explain my misunderstanding here? Why is tar processing the files that find and egrep are filtering out? I'm open to other techniques as well.

    Read the article

  • Stop applet execution on load, pause/resume using javascript?

    - by Zane
    I'm making something of a Java applet gallery for my website (Processing applets, if you're interested) and I'd like to keep the applets from running when the site first loads. Then, when the appropriate button is clicked, a piece of JavaScript would tell the applet to continue execution until another button is pressed to stop it. I know that I can use appletName.start() and appletName.stop(), but it doesn't seem to work on load, at least not well. I'm using element.getElementsById("applet") to get the applets to call the start and stop methods on. It slows Firefox to a crawl for some reason.
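    As an illustration of the applet side only (this is not the poster's code), a Processing sketch from the applet era (pre-3.0 PApplet) can start paused and expose public pause/resume methods for the page's JavaScript to call; the method names here are hypothetical:

        import processing.core.PApplet;

        // Hypothetical sketch that loads in a paused state; page JavaScript can call
        // pauseSketch()/resumeSketch() on the applet element once it has finished loading.
        public class GallerySketch extends PApplet {
            public void setup() {
                size(320, 240);
                noLoop();                      // start with the draw() loop stopped
            }

            public void draw() {
                background(frameCount % 255);  // placeholder animation
            }

            public void pauseSketch() {
                noLoop();                      // freeze rendering until resumed
            }

            public void resumeSketch() {
                loop();                        // restart the draw() loop
            }
        }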

    Read the article

  • How to properly close a socket after an exception is caught?

    - by marco
    Hello, in my last project I had the problem that the client was expecting an object from the server, but while processing the client's input an exception was caught that forces the server to close the socket for security reasons. This caused the client to terminate in a very unpleasant way. The way I decided to deal with it was to send the client an input-status message after each received input, so that it knows whether its input was processed properly or whether it needs to throw an exception. So my question: is there a better/cleaner way to close the socket after an exception is caught? Thanks.
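    A minimal sketch of the usual server-side pattern, assuming Java 7+ and plain object streams (the names here are illustrative): close the socket in a try-with-resources block so that, even when an exception forces the connection shut, the client sees an orderly end-of-stream rather than an abrupt reset, and write a status message before the real reply on the success path:

        import java.io.ObjectInputStream;
        import java.io.ObjectOutputStream;
        import java.net.Socket;

        // Sketch: handle one client, acknowledge its input, and always close the socket cleanly.
        public class ClientHandler implements Runnable {
            private final Socket socket;

            public ClientHandler(Socket socket) { this.socket = socket; }

            @Override
            public void run() {
                try (Socket s = socket;                                         // closed automatically, even on exceptions
                     ObjectOutputStream out = new ObjectOutputStream(s.getOutputStream());
                     ObjectInputStream in = new ObjectInputStream(s.getInputStream())) {
                    Object request = in.readObject();
                    out.writeObject("OK");                                      // status message: input accepted
                    out.writeObject(process(request));                          // then the real reply
                } catch (Exception e) {
                    // Security-related or parsing failures land here; the streams and socket are
                    // still closed by try-with-resources, so the client gets a clean end-of-stream.
                    System.err.println("Closing connection after error: " + e.getMessage());
                }
            }

            private Object process(Object request) { return request; }          // placeholder for the real work
        }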

    Read the article

  • How do you use scripting languages (PHP, Python, etc.) to improve your productivity?

    - by Edwin
    Hi, I'm a Delphi developer on the Windows platform. I recently read the PHP tutorial at W3Schools, and it looks interesting. We all know scripting languages are very good for web site development, but I also want to use one to improve my productivity or to get tedious tasks done quickly, maybe some quick-and-dirty string/file processing. What do you usually do with scripting languages apart from software development? And do we need a responsive, decent IDE/editor in order to be productive when writing scripts for this purpose? Thanks in advance!

    Read the article

  • Minimum Hardware requirements for Android development

    - by vishwanath
    I need information about the minimum hardware requirements for a decent Android development experience. My current configuration is as follows: P4 3.0 GHz, 512 MB of RAM. I started with Hello Android development on my machine and the experience was sluggish; I was using Eclipse Helios for development. The emulator took a long time to start, and so did running the program. Do I need to upgrade my machine for development purposes, or is there something else I am missing on my machine (like heavy processing by some other application I might have installed)? And if I do need to upgrade, do I need to upgrade my processor too (which amounts to a new machine, really, which I am not in favor of), or will upgrading the RAM suffice?

    Read the article

  • Is Oracle AQ/Streams of any use in my situation?

    - by RenderIn
    I'm writing a workflow system that is driven entirely at each step by explicit human interaction. That is, a task is assigned to a person, that person selects from a few limited options {approve, reject, forward}, and then it is either sent along to the next person or terminated. I'm just curious whether Oracle Streams/AQ has anything to offer over flat tables managed by regular web application code. The amount of processing after each action is fairly limited and the volume is not terribly high, so there's not really a need to throttle things by pushing them into a queue. What are some of the benefits of introducing a queue structure, or is it overkill for my situation?

    Read the article

  • Detecting an image 404 in JavaScript

    - by xal
    After a user uploads a file we have to do some additional processing on the images, such as resizing and uploading to S3. This can take up to 10 extra seconds. Obviously we do this in the background. However, we want to show the user the result page immediately and simply show spinners in place until the images arrive in their permanent home on S3. I'm looking for a way to detect that a certain image failed to load correctly (404), in a cross-browser way. If that happens, we want to use JS to show a spinner in its place and reload the image every few seconds until it can be successfully loaded from S3.

    Read the article

  • Problem updating a database field from my controller

    - by ben
    I have an update method in my users controller that I call from an HTTPService in Flex 4. The update method is as follows:

        def updateName
          @user = User.find_by_email(params[:email])
          @user.name = params[:nameNew]
          render :nothing => true
        end

    This is the console output:

        Processing UsersController#updateName (for 127.0.0.1 at 2010-05-24 14:12:49) [POST]
          Parameters: {"action"=>"updateName", "nameNew"=>"ben", "controller"=>"users", "email"=>"[email protected]"}
          User Load (0.6ms)  SELECT * FROM "users" WHERE ("users"."email" = '[email protected]') LIMIT 1
        Completed in 20ms (View: 1, DB: 1) | 200 OK [http://localhost/users/updateName]

    But when I check my database, the name field is never updated. What am I doing wrong? Thanks for reading.

    Read the article

  • Cannot easy_install readline for Python 2.7.3 on Mac OS X Lion

    - by user11170
    I am trying to install readline for Python 2.7.3, installed via Homebrew. If I type easy_install readline I get:

        Downloading http://pypi.python.org/packages/source/r/readline/readline-6.2.2.tar.gz#md5=ad9d4a5a3af37d31daf36ea917b08c77
        Processing readline-6.2.2.tar.gz
        Writing /var/folders/44/dhrdb5sx53s243j4w03063vh0000gn/T/easy_install-64FbG8/readline-6.2.2/setup.cfg
        Running readline-6.2.2/setup.py -q bdist_egg --dist-dir /var/folders/44/dhrdb5sx53s243j4w03063vh0000gn/T/easy_install-64FbG8/readline-6.2.2/egg-dist-tmp-NOmStB
        clang: error: no such file or directory: 'readline/libreadline.a'
        clang: error: no such file or directory: 'readline/libhistory.a'
        error: Setup script exited with error: command '/usr/bin/clang' failed with exit status 1

    Any ideas about how I could fix this? Thanks

    Read the article

  • Converting raw bytes into audio sound

    - by Afro Genius
    In my application I inherit from the JavaStreamingAudio class in the FreeTTS package and override the write method, which normally sends an array of bytes to the SourceDataLine for audio processing. Instead of writing to the data line, I write this and subsequent byte arrays into a buffer, which I then bring into my class and try to process into sound. My application processes sound as arrays of floats, so I convert the bytes to floats and try to process them, but I always get static back. I am sure this is the way to go but am missing something along the way. I know that sound is processed as frames and each frame is a group of bytes, so in my application I have to somehow process the bytes into frames. Am I looking at this the right way? Thanks in advance for any help.
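    The static usually points at a mismatch with the AudioFormat (sample size, signedness or byte order). A minimal sketch of the byte-to-float step, assuming 16-bit signed little-endian mono PCM; check the format actually reported for the SourceDataLine and adjust the reconstruction accordingly:

        // Convert a buffer of 16-bit signed little-endian PCM samples into floats in [-1.0, 1.0).
        // If the real format is big-endian, 8-bit or stereo, this exact reconstruction will sound
        // like static, so verify the AudioFormat before converting.
        public final class PcmToFloat {

            public static float[] toFloats(byte[] pcm, int length) {
                float[] samples = new float[length / 2];             // two bytes per 16-bit sample
                for (int i = 0; i < samples.length; i++) {
                    int lo = pcm[2 * i] & 0xFF;                      // low byte, treated as unsigned
                    int hi = pcm[2 * i + 1];                         // high byte carries the sign
                    int sample = (hi << 8) | lo;                     // reassemble the signed 16-bit sample
                    samples[i] = sample / 32768f;                    // scale to [-1, 1)
                }
                return samples;
            }
        }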

    Read the article

  • Sql Server Maintenance Plan Tasks & Completion

    - by Ben
    Hi all, I have a maintenance plan that looks like this:

        Client 1: Import Data (Success) -> Process Data (Success) -> Post Process (Completion) -> Next Client
        Client 2: Import Data (Success) -> Process Data (Success) -> Post Process (Completion) -> Next Client
        Client N: ...

    Import Data and Process Data call jobs, and Post Process is an Execute SQL task. If Import Data or Process Data fails, the plan moves on to the next client's Import Data. Both Import Data and Process Data are jobs containing SSIS packages that use the built-in SQL logging provider. My expectation with the configuration as it stands is:

        Client 1 Import Data runs: on failure -> Client 2 Import Data | on success -> Process Data
        Process Data runs: on failure -> Client 2 Import Data | on success -> Post Process
        Post Process runs: on completion (success or failure) -> next client's Import Data

    This isn't what I'm seeing in my logs, though... I see several Client Import Data SSIS log entries, then several Post Process log entries, then back to Client Import Data! Arg!! What am I doing wrong? I didn't think the "success" branch of Client 1 Import Data would kick off until it... well... succeeded, aka finished! The logs seem to indicate otherwise, though. I really need these tasks to run consecutively, not concurrently. Is this possible? Thanks!

    Read the article

  • How to commit the 'commit log' itself in the same svn revision?

    - by understack
    It might sound unnecessary, but let me explain my problem first; then perhaps it will make sense. A few artists keep updating images based on clients' change requests. An artist makes the changes accordingly and commits with proper commit messages. Just before the actual commit, I want to create a text file with the image properties, like size, and all the commit messages, and then have this file committed as part of the same revision. So basically some sort of pre-commit processing is required. Even though most of the artists are not very comfortable with svn, they can always see what changes were made to the image last time via a simple text file. The artists only do update and commit with svn. How could this be done? Are there any better alternatives?

    Read the article

  • paperclip callbacks or simple processor?

    - by holden
    I wanted to run the after_post_process callback, but it doesn't seem to work in Rails 3.0.1 using Paperclip 2.3.8. It gives an error:

        undefined method `_post_process_callbacks' for #<Class:0x102d55ea0>

    I want to call the Panda API after the file has been uploaded. I would have created my own processor for this, but since Panda handles the processing, can upload the files as well, and queues itself for an undetermined duration, I thought a callback would do fine. But the callbacks don't seem to work in Rails 3.

        after_post_process :panda_create

        def panda_create
          video = Panda::Video.create(:source_url => mp3.url.gsub(/[?]\d*/,''),
                                      :profiles => "f4475446032025d7216226ad8987f8e9",
                                      :path_format => "blah/1234")
        end

    I tried require and include for Paperclip in my model, but it didn't seem to matter. Any ideas?

    Read the article

  • Current state of client-side XSLT

    - by Casey
    Last I heard, Blizzard was one of the few companies to put client-side XSLT into practice (2008). Is this still the case in 2011, or are more people now exploring this technique in production? It seems that modern browsers (IE9, FF4, Chrome) and client processing power are primed to exploit this standard for tangible savings in server CPU power and bandwidth on large-scale properties. Am I missing something?

    The negative aspects I'm aware of include:

    * additional rendering time
    * additional assets required on an uncached page load
    * an additional layer of complexity
    * noticeably less developer experience than with server-side template techniques

    The benefits I perceive include:

    * distributed template composition (offloaded to the client)
    * caching of common template fragments offloaded to the client
    * logical separation of document structure and data
    * a well-documented web standard supported by all modern browsers

    Finally, although I know it's impossible to predict the future, I am curious to hear opinions on whether or not client-side XSLT's day will come. With interest in HTML5 driving users to upgrade their browsers and developers to explore new techniques, I would say yes. How about you? Thanks in advance, Casey

    Read the article

  • How do I do an AJAX post to a URL within a class library, but not the same IIS web application?

    - by Mark Adesina
    I have been working with AJAX and there have been no problems so far. Below is what my AJAX post code looks like:

        $.ajax({
            type: "POST",
            url: '<%=ResolveUrl("TodoService.asmx/CreateNewToDo")%>',
            data: jsonData,
            contentType: "application/json; charset=utf-8",
            datatype: "json",
            success: function () {
                //if (msg.d) {
                $('#ContentPlaceHolder1_useridHiddenField').val("");
                $('#ContentPlaceHolder1_titleTextBox').val("");
                $('#ContentPlaceHolder1_destTextBox').val("");
                $('#ContentPlaceHolder1_duedateTextBox').val("");
                alert('Your todo has been saved');
                // }
            },
            error: function (msg) {
                alert('There was an error processing your request');
            }
        });

    However, the problem came up when I tried to get the URL of a web service that is located in a class library within the same solution.

    Read the article
