Search Results

Search found 7077 results on 284 pages for 'concurrent processing'.


  • How do you use scripting languages (PHP, Python, etc.) to improve your productivity?

    - by Edwin
    Hi, I'm a Delphi developer on the Windows platform. I recently read the PHP tutorial at W3Schools, and it looks interesting. We all know scripting languages are very good for web site development, but I also want to use one to improve my productivity or to get tedious tasks done quickly, maybe some quick-and-dirty string/file processing? What do you usually do with scripting languages apart from software development? And do we need a responsive, decent IDE/editor to be productive when writing scripts for this purpose? Thanks in advance!
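
    As a concrete example of the quick-and-dirty string/file processing in question, here is the sort of throwaway Python script this usually means (the directory and strings are placeholders for whatever the task is):

        import pathlib

        # Throwaway job: fix a renamed identifier across a folder of reports.
        for path in pathlib.Path("reports").glob("*.txt"):
            text = path.read_text(encoding="utf-8")
            updated = text.replace("OldProductName", "NewProductName")
            if updated != text:
                path.write_text(updated, encoding="utf-8")
                print("updated", path)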

    Read the article

  • Does a multithreaded crawler in Python really speed things up?

    - by beagleguy
    I was looking to write a little web crawler in Python. I started to investigate writing it as a multithreaded script, with one pool of threads downloading and one pool processing results. Given the GIL, would it actually do simultaneous downloading? How does the GIL affect a web crawler? Would each thread pick some data off the socket, then move on to the next thread, letting it pick some data off the socket, and so on? Basically I'm asking: is a multithreaded crawler in Python really going to buy me much performance over a single-threaded one? Thanks!
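
    For context: CPython's GIL is released while a thread blocks on socket I/O, so threads do overlap for the downloading half; only the CPU-bound processing half is serialized. A minimal sketch of the two-pool idea (modern-Python stdlib, placeholder URLs):

        import queue
        import threading
        import urllib.request

        urls = ["http://example.com/page%d" % i for i in range(20)]  # placeholders
        results = queue.Queue()

        def download(url):
            # The GIL is released while this thread is blocked on the socket,
            # so several downloads really do proceed concurrently.
            with urllib.request.urlopen(url) as resp:
                results.put((url, resp.read()))

        def process():
            while True:
                url, body = results.get()
                print(url, len(body))   # CPU-bound parsing would go here,
                results.task_done()     # and it runs one thread at a time

        threading.Thread(target=process, daemon=True).start()
        threads = [threading.Thread(target=download, args=(u,)) for u in urls]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        results.join()                  # wait until everything is processed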

    Read the article

  • How to commit the 'commit log' itself in the same svn revision?

    - by understack
    It might sound unnecessary, but let me explain my problem first; then it will probably make sense. A few artists keep updating images based on clients' change requests. An artist makes changes accordingly and commits with a proper commit message. Just before the actual commit, I want to create a text file with image properties such as size, plus all the commit messages, and then commit this file itself. So basically some sort of pre-commit processing is required. Even though most of the artists are not very comfortable with svn, they can always see what changes were made last time to the image via a simple text file. The artists themselves only do update and commit with svn. How could this be done? Are there any better alternatives?
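
    Server-side pre-commit hooks cannot modify the incoming transaction, so one practical route is a small wrapper script the artists run instead of calling svn commit directly. A rough Python sketch; the file names and the 'properties' gathered (here just file size) are placeholders:

        import os
        import subprocess
        import sys

        def commit_with_log(image_path, message, history="CHANGES.txt"):
            """Record the message and image properties, then commit both files."""
            size = os.path.getsize(image_path)
            with open(history, "a", encoding="utf-8") as f:
                f.write("%s (%d bytes): %s\n" % (image_path, size, message))
            # Ensure the history file is versioned (no-op if it already is),
            # then commit it together with the image in one revision.
            subprocess.run(["svn", "add", "--force", history], check=False)
            subprocess.run(["svn", "commit", "-m", message, image_path, history],
                           check=True)

        if __name__ == "__main__":
            commit_with_log(sys.argv[1], sys.argv[2])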

    Read the article

  • Problem using a WPF DLL assembly in ASP.NET

    - by liimur
    Hello, I have a C++ project that compiles as a DLL assembly under .NET 3.5 SP1. The project does image rendering using WPF (it loads 2 images from a local folder, applies one image to the other and saves the output file in the same folder). I want to use that project as a reference in an ASP.NET project to do the rendering on the website. So I created a simple ASP.NET C# web project that uses the C++ project as a reference. Everything works great in the ASP.NET Web Development Server (the built-in web server in VS2008). But once I publish the project to IIS on the same machine, or use IIS for debugging instead of the built-in web server, the image rendering no longer works. I'm not getting any exceptions or error messages; the output image is just not processed as it is supposed to be. If anyone knows what could cause that, I would really appreciate your insight!

    Read the article

  • Which metadata should I save when downloading web pages?

    - by Vojtech R.
    Hi, I'm going to download (for future language-processing purposes) some thousands of web pages. Now I'm wondering which metadata I should save. I've come up with this list, but I do not want to neglect something important:

        <title>
        <link>
        <publish_date>
        <date_downloaded>
        <source>           // link to this page
        <keyword>          // for Solr indexing
        <text>             // cleaned body of page

    Is there something important that I could miss in the future?
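
    For what it's worth, the same record sketched as a Python dataclass, with field names simply mirroring the list above; the trailing comment lists common extras, not requirements:

        from dataclasses import dataclass
        from datetime import datetime

        @dataclass
        class PageRecord:
            title: str
            link: str
            publish_date: datetime
            date_downloaded: datetime
            source: str        # link to this page
            keywords: list     # for Solr indexing
            text: str          # cleaned body of the page
            # Commonly added later: HTTP status, declared charset/content-type,
            # detected language, ETag/Last-Modified (for re-crawling), and a
            # content hash for de-duplication.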

    Read the article

  • Indy10 Deadlock at TCPServer

    - by user1769184
    To write information about the processing state to the GUI inside a TIdTCPServer.OnExecute(...) handler, I used the following command sequence:

        ExecuteDUMMYCommand(Global_Send_Record);
        BitMap_PaintImageProcess;
        TThread.Synchronize(nil, BitMap_PaintImageProcess);

    The code works well on some machines, but on a few it fails: execution stops at the TThread.Synchronize call. I guess on these machines the call is trapped in a deadlock. Any chance to figure out the real problem behind this? In the procedure BitMap_PaintImageProcess I create a bitmap and do a lot of painting work, but it seems that this code is never executed.

    Read the article

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database and I'm looking for ways to speed up the process.

    I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED (
                [ContainerId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            ))

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0..n, and the values are pre-sorted based on the primary key. The %Disk Time performance counter spends a lot of time at 100%, so it is clear that disk I/O is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I:

        - Drop the primary key while I am doing the inserting and recreate it later?
        - Do inserts into a temporary table with the same schema and periodically transfer them into the main table, to keep the size of the table where insertions are happening small?
        - Anything else?

    Based on the responses I have gotten, let me clarify a little bit:

    Portman: I'm using a clustered index because when the data is all imported I will need to access the data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for the import?

    Chopeen: The data is being generated remotely on many other machines (my SQL Server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output.

    Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps. ~ Andrew

    Read the article

  • What are the pros and cons of using an in-memory DB rather than a ThreadLocal

    - by Pangea
    We have been using ThreadLocal so far to carry some data, so as to not clutter the API. However, below are some issues with using thread-locals that I don't like:

        1) Over the years, the number of data items being carried in the thread-local has increased.
        2) Since we started using threads (for some lightweight processing), we have also been migrating these data to the threads in the pool and copying them back again.

    I am thinking of using an in-memory DB for these (we don't want to add this to the API). I'm wondering if this approach is good. What are the pros and cons? Thanks in advance.

    Read the article

  • Multiple Socket Connections

    - by BSchlinker
    I need to write a server which accepts connections from multiple client machines, keeps track of connected clients and sends individual clients data as necessary. Sometimes all clients may be contacted at once with the same message; other times it may be one individual client or a group of clients. Since I need confirmation that the clients received the information and don't want to build an ACK structure on top of UDP, I decided to use TCP streams. However, I've been struggling to understand how to maintain multiple connections and keep them idle. I seem to have three options: fork a separate child process for each incoming connection, use pthread_create to create an entire new thread for each connection, or use select() to wait on all open socket descriptors for activity. Recommendations as to how to attack this? I've begun working with pthreads, but since performance will likely not be an issue, multicore processing is not necessary and perhaps there is a simpler way.
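
    Of the three options, the select() style avoids per-client processes and threads entirely: one loop owns every socket. The same pattern, sketched with Python's selectors module rather than raw C select(); the port and the do-nothing request handling are placeholders:

        import selectors
        import socket

        sel = selectors.DefaultSelector()
        clients = set()                  # all currently connected client sockets

        def accept(server):
            conn, addr = server.accept()
            conn.setblocking(False)
            clients.add(conn)
            sel.register(conn, selectors.EVENT_READ, handle)

        def handle(conn):
            data = conn.recv(4096)
            if not data:                 # client disconnected
                sel.unregister(conn)
                clients.discard(conn)
                conn.close()
            # otherwise: parse the client's request here

        def broadcast(message):
            for c in clients:            # "contact all clients at once" case
                c.sendall(message)

        server = socket.socket()
        server.bind(("0.0.0.0", 9000))
        server.listen()
        server.setblocking(False)
        sel.register(server, selectors.EVENT_READ, accept)

        while True:
            for key, _ in sel.select():
                key.data(key.fileobj)    # dispatch to accept() or handle()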

    Read the article

  • Problem updating a database field from my controller

    - by ben
    I have an update method in my users controller that I call from an HTTPService in Flex 4. The update method is as follows:

        def updateName
          @user = User.find_by_email(params[:email])
          @user.name = params[:nameNew]
          render :nothing => true
        end

    This is the console output:

        Processing UsersController#updateName (for 127.0.0.1 at 2010-05-24 14:12:49) [POST]
          Parameters: {"action"=>"updateName", "nameNew"=>"ben", "controller"=>"users", "email"=>"[email protected]"}
          User Load (0.6ms)  SELECT * FROM "users" WHERE ("users"."email" = '[email protected]') LIMIT 1
        Completed in 20ms (View: 1, DB: 1) | 200 OK [http://localhost/users/updateName]

    But when I check my database, the name field is never updated. What am I doing wrong? Thanks for reading.

    Read the article

  • Performance Tricks for C# Logging

    - by Charles
    I am looking into C# logging, and I do not want my log messages to spend any time being processed if the message is below the logging threshold. The best I can see log4net doing is a threshold check AFTER evaluating the log parameters. Example:

        _logger.Debug( "My complicated log message " + thisFunctionTakesALongTime() + " will take a long time" )

    Even if the threshold is above Debug, thisFunctionTakesALongTime will still be evaluated. In log4net you are supposed to use _logger.IsDebugEnabled, so you end up with:

        if( _logger.IsDebugEnabled )
            _logger.Debug( "Much faster" )

    I want to know if there is a better solution for .NET logging that does not involve a check each time I want to log. In C++ I am allowed to do:

        LOG_DEBUG( "My complicated log message " + thisFunctionTakesALongTime() + " will take no time" )

    since my LOG_DEBUG macro does the log-level check itself. This frees me to have one-line log messages throughout my app, which I greatly prefer. Does anyone know of a way to replicate this behavior in C#?
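
    The C++ macro works because evaluation is deferred until after the level check. The same deferral can be had without macros by handing the logger something that only computes the message when actually formatted; a sketch of the idea in Python's stdlib logging (not log4net), with an invented stand-in for the expensive call:

        import logging

        class Lazy:
            """Defers an expensive computation until the record is formatted."""
            def __init__(self, fn):
                self.fn = fn
            def __str__(self):
                return str(self.fn())

        def this_function_takes_a_long_time():
            return sum(i * i for i in range(1_000_000))

        logging.basicConfig(level=logging.INFO)
        log = logging.getLogger(__name__)

        # DEBUG is disabled here, so the logger returns before formatting and
        # Lazy.__str__ (hence the expensive call) never runs.
        log.debug("My complicated log message %s", Lazy(this_function_takes_a_long_time))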

    Read the article

  • Stop applet execution on load, pause/resume using javascript?

    - by Zane
    I'm making something of a Java applet gallery for my website (Processing applets, if you're interested), and I'd like to keep the applets from running when the site first loads. Then, when the appropriate button is clicked, a piece of JavaScript would tell the applet to continue execution until another button is pressed to stop it. I know that I can use appletName.start() and appletName.stop(), but it doesn't seem to work on load, at least not well. I'm using element.getElementsById("applet") to get the applets to call the start and stop methods on. It slows Firefox to a crawl for some reason.

    Read the article

  • How to properly close a socket after an exception is caught?

    - by marco
    Hello, after my last project I had the problem that the client was expecting an object from the server, but while processing the client's input an exception was caught that forced the server to close the socket for security reasons. This caused the client to terminate in a very unpleasant way. The way I decided to deal with it was to send the client an input-status message after each received input, so that it knows whether its input was processed properly or whether it needs to throw an exception. So my question: is there a better/cleaner way to close the socket after an exception is caught? Thanks.
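
    A minimal sketch of the status-reply idea in Python; the OK/ERR protocol bytes and the process() helper are invented for illustration:

        import socket

        def process(data):
            """Stand-in for the real request handling; may raise on bad input."""
            if not data:
                raise ValueError("empty request")

        def serve_once(conn):
            try:
                request = conn.recv(4096)
                process(request)
                conn.sendall(b"OK\n")       # input accepted
            except Exception:
                try:
                    conn.sendall(b"ERR\n")  # tell the client to raise on its side
                except OSError:
                    pass                    # peer already gone
            finally:
                try:
                    conn.shutdown(socket.SHUT_RDWR)
                except OSError:
                    pass
                conn.close()                # always release the socket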

    Read the article

  • Cannot easy_install readline for Python 2.7.3 on Mac OS X Lion

    - by user11170
    I am trying to install readline for Python 2.7.3, installed via Homebrew. If I type

        easy_install readline

    I get:

        Downloading http://pypi.python.org/packages/source/r/readline/readline-6.2.2.tar.gz#md5=ad9d4a5a3af37d31daf36ea917b08c77
        Processing readline-6.2.2.tar.gz
        Writing /var/folders/44/dhrdb5sx53s243j4w03063vh0000gn/T/easy_install-64FbG8/readline-6.2.2/setup.cfg
        Running readline-6.2.2/setup.py -q bdist_egg --dist-dir /var/folders/44/dhrdb5sx53s243j4w03063vh0000gn/T/easy_install-64FbG8/readline-6.2.2/egg-dist-tmp-NOmStB
        clang: error: no such file or directory: 'readline/libreadline.a'
        clang: error: no such file or directory: 'readline/libhistory.a'
        error: Setup script exited with error: command '/usr/bin/clang' failed with exit status 1

    Any ideas about how I could fix this? Thanks.

    Read the article

  • How to pre-process CSV data for FasterCSV?

    - by Katherine Chalmers
    We're having a significant number of problems creating a bulk upload function for our little app. We're using the FasterCSV gem to upload data to a MySQL database, but FasterCSV is so twitchy and precise in its requirements that it constantly breaks with malformed-CSV errors and timeout errors. The CSV files are generally created by users pasting text from their web sites or from Microsoft Word docs, so it is not reasonable to expect that there will never be odd characters like smart quotes or accents in the data. Also, users aren't going to be readily able to identify whether their data is perfect enough for FasterCSV or not. We need to find a way to fix it for them automatically. Is there a good way, or a reliable tool, for pre-processing CSV data to fix any nits in the data before having the FasterCSV gem process it?
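
    The kind of pre-processing meant here, sketched in Python rather than Ruby; the substitution table covers the usual Word-isms and is deliberately not exhaustive:

        import unicodedata

        # Common smart-punctuation characters mapped to plain ASCII equivalents.
        SUBSTITUTIONS = {
            "\u2018": "'", "\u2019": "'",   # curly single quotes
            "\u201c": '"', "\u201d": '"',   # curly double quotes
            "\u2013": "-", "\u2014": "-",   # en/em dashes
            "\u00a0": " ",                  # non-breaking space
        }

        def preclean(raw_bytes):
            # Decode permissively: bad byte sequences become U+FFFD, not a crash.
            text = raw_bytes.decode("utf-8", errors="replace")
            text = unicodedata.normalize("NFC", text)
            for bad, good in SUBSTITUTIONS.items():
                text = text.replace(bad, good)
            # Normalize line endings so the parser sees one row-terminator style.
            return text.replace("\r\n", "\n").replace("\r", "\n")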

    Read the article

  • Design Decision - Scaling out a web-based application's architecture

    - by Vadi
    This question is about a design decision. I am currently working on a web project that will have 40K users to start with and is expected to grow to 50M users within a couple of months (not concurrent users, though). I would like an architecture that can be scaled out easily without much effort.

    To explain, let me use a trivial scenario. Let's say User entities and services such as CreateUser, AuthenticateUser, etc. are simple method calls for the page controllers. But once the traffic increases, for example, authenticating users (or similar services related to User entities) has to be moved out to a different internal server to spread the load. At the same time, using RPC calls over the network when the user count is only 40K would be overkill.

    My proposal was to use IPC initially, and when we need to scale out, to switch internally to TCP-based RPC calls so that it can easily scale out. For example, I am referring to System.IO.Pipes.NamedPipeServerStream to start with, moving on to a TcpListener later on. If we have a proper design that encapsulates the above approach, it would be easy for us to scale services out onto multiple network servers, while at the same time avoiding network calls when the user count is small. Is this the best approach? Any suggestions would be great.

    Note: database scaling is definitely a second-phase optimization, so we have already put an architectural design in place to easily partition data when traffic increases. The primary bottleneck over time will be the application servers.
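
    The encapsulation being proposed can be kept honest with an interface that hides the transport completely; a language-agnostic sketch (in Python for brevity, with invented names; the real project would express the same shape in C# against NamedPipeServerStream/TcpListener):

        from abc import ABC, abstractmethod

        def check_credentials(email, password):
            return True        # stand-in for the real local lookup

        class UserService(ABC):
            """What page controllers code against; the transport is invisible."""
            @abstractmethod
            def authenticate(self, email: str, password: str) -> bool: ...

        class InProcessUserService(UserService):
            # Used while the user count is small: a plain method call.
            def authenticate(self, email, password):
                return check_credentials(email, password)

        class RemoteUserService(UserService):
            # Swapped in when load grows: same interface, RPC underneath.
            def __init__(self, rpc_client):
                self.rpc = rpc_client      # pipe- or TCP-based client
            def authenticate(self, email, password):
                return self.rpc.call("authenticate", email, password)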

    Read the article

  • Converting raw bytes into audio sound

    - by Afro Genius
    In my application I inherit a JavaStreamingAudio class from the FreeTTS package, then bypass the write method which sends an array of bytes to the SourceDataLine for audio processing. Instead of writing to the data line, I write this and subsequent byte arrays into a buffer, which I then bring into my class and try to process into sound. My application processes sound as arrays of floats, so I convert to float and try to process it, but I always get static back. I am sure this is the way to go, but I am missing something along the way. I know that sound is processed as frames and each frame is a group of bytes, so in my application I have to process the bytes into frames somehow. Am I looking at this the right way? Thanks in advance for any help.
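
    If the bytes are standard PCM, the byte-to-float conversion is mechanical; static usually means the sample width or byte order does not match the SourceDataLine's AudioFormat. A sketch assuming 16-bit signed little-endian mono PCM:

        import struct

        def pcm16_to_floats(raw: bytes):
            """16-bit signed little-endian PCM bytes -> floats in [-1.0, 1.0]."""
            n = len(raw) // 2                     # two bytes per mono sample
            samples = struct.unpack("<%dh" % n, raw[: n * 2])
            return [s / 32768.0 for s in samples]

        # If the stream is big-endian (common on the Java side), use ">%dh".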

    Read the article

  • Paperclip callbacks or a simple processor?

    - by holden
    I wanted to run the after_post_process callback, but it doesn't seem to work in Rails 3.0.1 using Paperclip 2.3.8. It gives an error:

        undefined method `_post_process_callbacks' for #<Class:0x102d55ea0>

    I want to call the Panda API after the file has been uploaded. I would have created my own processor for this, but as Panda handles the processing, can upload the files as well, and queues itself for an undetermined duration, I thought a callback would do fine. But the callbacks don't seem to work in Rails 3.

        after_post_process :panda_create

        def panda_create
          video = Panda::Video.create(:source_url => mp3.url.gsub(/[?]\d*/,''),
                                      :profiles => "f4475446032025d7216226ad8987f8e9",
                                      :path_format => "blah/1234")
        end

    I tried require and include for Paperclip in my model, but it didn't seem to matter. Any ideas?

    Read the article

  • Tarballing without git metadata

    - by zaf
    My source tree contains several directories which are under git source control, and I need to tarball the whole tree, excluding any references to the git metadata or custom log files. I thought I'd have a go using a combination of find/egrep/xargs/tar, but somehow the tar file still contains the .git directories and the *.log files. This is what I have:

        find . -type f | egrep -v '\.git|\.log' | xargs tar rvf ~/app.tar

    Can someone explain my misunderstanding here? Why is tar processing the files that find and egrep are filtering out? I'm open to other techniques as well.

    Read the article

  • SQL Server Maintenance Plan Tasks & Completion

    - by Ben
    Hi all, I have a maintenance plan that looks like this:

        Client 1: Import Data (Success) -> Process Data (Success) -> Post Process (Completion) -> Next Client
        Client 2: Import Data (Success) -> Process Data (Success) -> Post Process (Completion) -> Next Client
        Client N: ...

    Import Data and Process Data call jobs, and Post Process is an Execute SQL task. If Import Data or Process Data fails, the plan moves on to the next client's Import Data. Both Import Data and Process Data are jobs containing SSIS packages that use the built-in SQL logging provider. My expectation with the configuration as it stands is:

        Client 1 Import Data runs: Failure -> Client 2 Import Data | Success -> Process Data
        Process Data runs: Failure -> Client 2 Import Data | Success -> Post Process
        Post Process runs: Completion (Success or Failure) -> next client's Import Data

    This isn't what I'm seeing in my logs, though. I see several Client Import Data SSIS log entries, then several Post Process log entries, then back to Client Import Data! Argh!! What am I doing wrong? I didn't think the "success" branch of Client 1 Import Data would kick off until it... well... succeeded, aka finished! The logs seem to indicate otherwise, though. I really need these tasks to run consecutively, not concurrently. Is this possible? Thanks!

    Read the article

  • Log4Net GetLogger creates rolling files even for unreferenced files

    - by ybastiand
    Hi, I have a C# solution that contains three executables, all sharing the same log4net configuration file. At startup, each executable retrieves a logger (one logger per executable, as per the configuration file). The problem: when one of the executables calls Log.GetLogger(), it creates all of the rolling files instead of only the one rolling file that is referenced as an appender-ref in that executable's logger configuration. For instance, when I start my sending-daemon executable, it performs Log.GetLogger("SendingDaemonLogger"), which creates the three files Log/RuleScheduler.txt, Log/NotificationGenerator.txt and Log/NotificationSender.txt instead of only the desired Log/NotificationSender.txt. Then, when I start another of the executables, for instance the rule-scheduler daemon, that process cannot write to Log/RuleScheduler.txt, because the file has already been created and locked by the sending-daemon process.

    I am guessing that there may be three different solutions to my problem:

        1) GetLogger should only create the rolling file appenders that are referenced in the config.
        2) I could have one config file per executable; that way each config file would list only one rolling file appender, and starting each executable would not create the rolling files of the other daemons. I am, however, reluctant to do this, because some of the configuration (SMTP appender, console appender) is shared between the daemons and I don't want duplicate copies to maintain. Unless there is a way for one config file to include another?
        3) Maybe there is a way to configure the rolling file so that concurrent access across processes is allowed? This solution still isn't perfect in my opinion, because any one daemon should not be creating the rolling files of the other daemons.

    Thanks in advance for your help! I have difficulty posting the config file properly here (this website interprets it as HTML); please see the following link for my log4net configuration file: log4net configuration file

    Read the article

  • Verify my form workflow

    - by Shackrock
    I have a form with some sensitive info (CC numbers). My workflow is:

        1) One page takes all the form items.
        2) Upon submission, the values are validated. If all is well, all data is stored in a session variable, and the page reloads and displays this info from the session variable.
        3) If everything is OK on the review page, the user clicks submit and the session variable is sent to another form for processing (sending payment).
        4) Upon success, the session is destroyed.
        5) Upon failure (a bad CC number, for example), the user is sent back to the form with all of the fields filled in just like before, so that they can check for errors and try again (the session is NOT destroyed).

    Does anyone see anything wrong with this, from a security or best-practices standpoint?

    UPDATE: I'm thinking I can get rid of a step by never storing the info in a session at all. Just have a one-page checkout, no review page... makes sense.

    Read the article

  • PostgreSQL insert on primary key failing with contention, even at serializable level

    - by Steven Schlansker
    I'm trying to insert or update data in a PostgreSQL db. The simplest case is a key-value pairing (the actual data is more complicated, but this is the smallest clear example). When you set a value, I'd like it to insert if the key is not there, and otherwise update. Sadly Postgres does not have an insert-or-update statement, so I have to emulate it myself.

    I've been working with the idea of basically SELECTing whether the key exists, and then running the appropriate INSERT or UPDATE. Clearly this needs to be in a transaction or all manner of bad things could happen. However, this is not working exactly how I'd like it to. I understand that there are limitations to serializable transactions, but I'm not sure how to work around this one. Here's the situation:

        a, b: => set transaction isolation level serializable;
        a:    => select count(1) from table where id=1;   --> 0
        b:    => select count(1) from table where id=1;   --> 0
        a:    => insert into table values(1);             --> 1
        b:    => insert into table values(1);             --> ERROR: duplicate key value violates unique constraint "serial_test_pkey"

    Now I would expect it to throw the usual "couldn't commit due to concurrent update", but I'm guessing that since the inserts are different "rows" this does not happen. Is there an easy way to work around this?
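
    Worth noting for later readers: PostgreSQL 9.5 (well after this question) added native upsert via INSERT ... ON CONFLICT, which sidesteps the race entirely because the conflict check and the update happen in one statement. A sketch using psycopg2; the table and connection string are assumptions:

        import psycopg2

        # Assumes a table like: CREATE TABLE kv (id int PRIMARY KEY, value text);
        conn = psycopg2.connect("dbname=test")
        with conn, conn.cursor() as cur:     # commits on success, rolls back on error
            cur.execute(
                """
                INSERT INTO kv (id, value)
                VALUES (%s, %s)
                ON CONFLICT (id) DO UPDATE SET value = EXCLUDED.value
                """,
                (1, "hello"),
            )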

    Read the article

  • Current state of client-side XSLT

    - by Casey
    Last I heard, Blizzard was one of the few companies to put client-side XSLT into practice (2008). Is this still the case in 2011, or are more people now exploring this technique in production? It seems that modern browsers (IE9, FF4, Chrome) and client processing power are primed to exploit this standard for tangible savings in server CPU power and bandwidth on large-scale properties. Am I missing something?

    The negative aspects I'm aware of include:

        * additional rendering time
        * additional assets required on an uncached page load
        * an additional layer of complexity
        * noticeably less developer experience than with server-side template techniques

    The benefits I perceive include:

        * distributed template composition (offloaded onto the client)
        * caching of common template fragments offloaded onto the client
        * logical separation of document structure and data
        * a well-documented web standard supported by all modern browsers

    Finally, although I know it's impossible to predict the future, I am curious to hear opinions on whether or not client-side XSLT's day will come. With interest in HTML5 driving users to upgrade their browsers and developers to explore new techniques, I would say yes. How about you? Thanks in advance, Casey

    Read the article

  • Object tree navigation language in Java

    - by lewap
    In the system which I'm currently developing I often have to navigate an object tree and take actions based on its state and values. In plain Java this results in tedious for loops, if statements, etc. Are there alternative ways to achieve tree navigation, similar to XPath for XML? I know there are JXPath and OGNL, but do you know of any other libraries for this purpose? Do you know of any libraries which generate bytecode for specific tree-navigation expressions, to make the processing as fast as native Java fors and ifs?

    Read the article
