Search Results

Search found 7077 results on 284 pages for 'concurrent processing'.

Page 185/284 | < Previous Page | 181 182 183 184 185 186 187 188 189 190 191 192  | Next Page >

  • Grails and PermGen issue with g:link and g:render

    - by Alexi Groove
    I've been running Grails for some time without any issues, but recently, after an upgrade to Grails 1.1.1, I've encountered the dreaded PermGen errors. Prior to the upgrade there was no such issue. The error seems to happen when the <g:link> and <g:render> tags are used in a GSP, although I'm not sure that indicates these tags are the problem -- it may simply be that the JVM ran out of space while they were being rendered. Typically, everyone who encounters PermGen errors recommends increasing the JVM memory options -- but what may be the source of the issue? Is it a Grails 1.1/Hibernate/Spring problem? The error:

        2010-04-20 05:37:03,962 INFO [STDOUT] 05:37:03,961 ERROR [GroovyPagesServlet] Error processing GSP: Error executing tag <g:render>: org.codehaus.groovy.grails.web.taglib.exceptions.GrailsTagException: Error executing tag <g:link>: java.lang.OutOfMemoryError: PermGen space
        org.codehaus.groovy.grails.web.taglib.exceptions.GrailsTagException: Error executing tag <g:render>: org.codehaus.groovy.grails.web.taglib.exceptions.GrailsTagException: Error executing tag <g:link>: java.lang.OutOfMemoryError: PermGen space
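    Before (or alongside) raising -XX:MaxPermSize, it can help to confirm that permanent-generation usage really is what grows as these pages render. The snippet below is my own diagnostic sketch, not part of the question; it uses only the standard java.lang.management API and could be run as a plain main class on the same JVM settings:

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryPoolMXBean;
        import java.lang.management.MemoryUsage;

        public class PermGenWatcher {
            public static void main(String[] args) {
                // On pre-Java-8 HotSpot the pool is named "Perm Gen", "PS Perm Gen" or "CMS Perm Gen".
                for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                    if (pool.getName().contains("Perm")) {
                        MemoryUsage usage = pool.getUsage();
                        System.out.printf("%s: %d of %d bytes used%n",
                                pool.getName(), usage.getUsed(), usage.getMax());
                    }
                }
            }
        }

    Watching that number climb across repeated requests without ever falling back is the usual sign of leaked classloaders, rather than a threshold that is simply set too low.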

    Read the article

  • AppDomain.CurrentDomain.DomainUnload not being raised in a console app

    - by Guy
    I have an assembly that, when accessed, spins up a single thread to process items placed on a queue. In that assembly I attach a handler to the DomainUnload event:

        AppDomain.CurrentDomain.DomainUnload += new EventHandler(CurrentDomain_DomainUnload);

    That handler joins the thread to the main thread so that all items on the queue can complete processing before the application terminates. The problem that I am experiencing is that the DomainUnload event is not getting fired when the console application terminates. Any ideas why this would be? Using .NET 3.5 and C#.
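    I do not know the .NET-specific reason the event is skipped here, but for comparison, the same drain-the-queue-before-exit idea looks like this on the JVM, where the teardown callback (a shutdown hook) does run on normal termination of a console process. Everything below is my own illustration, not taken from the question:

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.TimeUnit;

        public class QueueDrainOnExit {
            private static final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
            private static volatile boolean stopping = false;

            public static void main(String[] args) throws InterruptedException {
                Thread worker = new Thread(() -> {
                    try {
                        // Keep draining until shutdown has been requested AND the queue is empty.
                        while (!stopping || !queue.isEmpty()) {
                            String item = queue.poll(250, TimeUnit.MILLISECONDS);
                            if (item != null) {
                                System.out.println("processed " + item);
                            }
                        }
                    } catch (InterruptedException ignored) {
                    }
                });
                worker.setDaemon(true);   // daemon, so the process is allowed to start shutting down
                worker.start();

                // Teardown callback: waits for the drain to finish before the process exits.
                Runtime.getRuntime().addShutdownHook(new Thread(() -> {
                    stopping = true;
                    try {
                        worker.join();
                    } catch (InterruptedException ignored) {
                    }
                }));

                queue.put("one");
                queue.put("two");
                // main returns here; the hook makes sure both items are processed first
            }
        }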

    Read the article

  • Verify my form workflow

    - by Shackrock
    I have a form with some sensitive info (CC numbers). My workflow is:

    1. One page takes all the form items.
    2. Upon submission, the values are validated.
    3. If all is well, all data is stored in a session variable, and the page reloads and displays this info from the session variable.
    4. If everything is OK on the review page, the user clicks submit and the session variable is sent to another form for processing (sending payment).
    5. Upon success, the session is destroyed.
    6. Upon failure (a bad CC number, for example), the user is sent back to the form with all of the fields filled in just like before, so that they can check for errors and try again (the session is NOT destroyed).

    Does anyone see anything wrong with this, from a security or best-practices standpoint?

    UPDATE: I'm thinking I can get rid of a step -- storing the info in a session at all. Just have a one-page checkout, no review page... makes sense.

    Read the article

  • Symlinking (ln) faster than moving (mv)?

    - by Chad Johnson
    When we build web software releases, we prepare the release in a temporary directory and then replace the release directory with the temporary one just prepared:

        # Move and replace existing release directory.
        mv /path/to/httpdocs /path/to/httpdocs.before
        mv /path/to/$newReleaseName /path/to/httpdocs

    Under this scheme, in about 1 in every 15 releases a user happens to be using a file in the original release directory exactly when the commands above are run, and that user gets a fatal error. I am wondering if using a symlink as follows would be significantly faster in terms of processing time, thereby lessening the likelihood of this problem:

        # Remove and replace existing release symlink.
        ln -sf /path/to/$newReleaseName /path/to/httpdocs

    Read the article

  • Performance Tricks for C# Logging

    - by Charles
    I am looking into C# logging and I do not want my log messages to spend any time being processed if the message is below the logging threshold. The best I can see log4net doing is a threshold check AFTER evaluating the log parameters. Example:

        _logger.Debug( "My complicated log message " + thisFunctionTakesALongTime() + " will take a long time" )

    Even if the threshold is above Debug, thisFunctionTakesALongTime will still be evaluated. In log4net you are supposed to use _logger.IsDebugEnabled, so you end up with:

        if( _logger.IsDebugEnabled )
            _logger.Debug( "Much faster" )

    I want to know if there is a better solution for .NET logging that does not involve a check each time I want to log. In C++ I am allowed to do

        LOG_DEBUG( "My complicated log message " + thisFunctionTakesALongTime() + " will take no time" )

    since my LOG_DEBUG macro does the log level check itself. This frees me to have a one-line log message throughout my app, which I greatly prefer. Does anyone know of a way to replicate this behavior in C#?
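    The question is about .NET, but since the underlying trick is simply deferring the expensive string construction until after the level check, here is what that pattern looks like on the JVM using java.util.logging's Supplier overloads. This is my own illustration of the idea, not a log4net answer:

        import java.util.logging.Level;
        import java.util.logging.Logger;

        public class LazyLogging {
            private static final Logger LOGGER = Logger.getLogger(LazyLogging.class.getName());

            private static String thisFunctionTakesALongTime() {
                return "expensive detail";   // stand-in for a slow computation
            }

            public static void main(String[] args) {
                LOGGER.setLevel(Level.INFO);   // FINE is below the threshold here

                // The lambda is only invoked if FINE is enabled, so the slow call is
                // skipped without writing an explicit level check at the call site.
                LOGGER.fine(() -> "My complicated log message " + thisFunctionTakesALongTime());
            }
        }

    In C# the closest equivalents are wrapping the message in a delegate (for example, a helper that takes a Func<string>), but the exact log4net-friendly shape of that helper would be an assumption on my part, so it is not shown here.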

    Read the article

  • Can't parse a 1904 date in ARPA format (email date)

    - by Ramon
    I'm processing an IMAP mailbox and running into trouble parsing the dates using the mxDateTime package. In particular, an early date like "Fri, 1 Jan 1904 00:43:25 -0400" is causing trouble:

        >>> import mx.DateTime
        >>> import mx.DateTime.ARPA
        >>> mx.DateTime.ARPA.ParseDateTimeUTC("Fri, 1 Jan 1904 00:43:25 -0400").gmtoffset()
        Traceback (most recent call last):
          File "<interactive input>", line 1, in <module>
        Error: cannot convert value to a time value
        >>> mx.DateTime.ARPA.ParseDateTimeUTC("Thu, 1 Jan 2009 00:43:25 -0400").gmtoffset()
        <mx.DateTime.DateTimeDelta object for '-08:00:00.00' at 1497b60>
        >>>

    Note that an almost identical date from 2009 works fine. I can't find any description of date limitations in mxDateTime itself. Any ideas why this might be? Thanks, Ramon

    Read the article

  • How to properly close a socket after an exception is caught?

    - by marco
    Hello, in my last project I had the problem that the client was expecting an object from the server, but while processing the client's input an exception was caught that forces the server to close the socket for security reasons. This caused the client to terminate in a very unpleasant way. The way I decided to deal with it was to send the client an input-status message after each received input, so that it knows whether its input was processed properly or whether it needs to throw an exception itself. So my question: is there a better/cleaner way to close the socket after an exception is caught? Thanks.
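    The question does not name the language, but the object-over-socket wording suggests Java serialization, so here is a sketch under that assumption: send a final status object to the client and let try-with-resources close the socket, so the peer sees a well-defined end of the conversation instead of an abrupt reset. The ErrorStatus type is hypothetical, purely for illustration:

        import java.io.IOException;
        import java.io.ObjectInputStream;
        import java.io.ObjectOutputStream;
        import java.io.Serializable;
        import java.net.Socket;

        public class ClientHandler implements Runnable {
            private final Socket socket;

            public ClientHandler(Socket socket) {
                this.socket = socket;
            }

            @Override
            public void run() {
                // try-with-resources closes the streams and socket even if we bail out early.
                try (Socket s = socket;
                     ObjectOutputStream out = new ObjectOutputStream(s.getOutputStream());
                     ObjectInputStream in = new ObjectInputStream(s.getInputStream())) {
                    try {
                        Object request = in.readObject();
                        out.writeObject(process(request));   // normal reply
                    } catch (Exception e) {
                        // Tell the client why the conversation is ending before the socket goes away.
                        out.writeObject(new ErrorStatus(e.getMessage()));
                    }
                    out.flush();
                } catch (IOException e) {
                    // The socket itself is broken; there is nothing left to send.
                }
            }

            private Object process(Object request) {
                return "ok";                                 // placeholder for the real work
            }

            // Hypothetical status type the client checks for before expecting a payload.
            public static class ErrorStatus implements Serializable {
                public final String reason;
                public ErrorStatus(String reason) { this.reason = reason; }
            }
        }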

    Read the article

  • What are the pros and cons of using an in-memory DB rather than a ThreadLocal

    - by Pangea
    We have been using ThreadLocal so far to carry some data so as not to clutter the API. However, there are some issues with this use of thread locals that I don't like:

    1) Over the years, the number of data items carried in the thread local has grown.
    2) Since we started using threads (for some lightweight processing), we have also been migrating this data onto the pool threads and copying it back again.

    I am thinking of using an in-memory DB for this instead (we don't want to add it to the API). I am wondering whether this approach is good. What are the pros and cons? Thanks in advance.
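    As a point of comparison for item 2, the usual lightweight alternative to an in-memory DB is to capture the thread-local values on the submitting thread and restore them inside the pooled task. The sketch below is my own illustration of that capture-and-restore pattern (the context holder is made up, standing in for whatever the ThreadLocals currently carry):

        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class ContextPropagation {
            // Hypothetical holder standing in for the data currently kept in ThreadLocals.
            private static final ThreadLocal<String> REQUEST_CONTEXT = new ThreadLocal<>();

            // Wrap a task so it runs with the submitter's context and cleans up after itself.
            static <T> Callable<T> withCallerContext(Callable<T> task) {
                String captured = REQUEST_CONTEXT.get();   // captured on the submitting thread
                return () -> {
                    REQUEST_CONTEXT.set(captured);
                    try {
                        return task.call();
                    } finally {
                        REQUEST_CONTEXT.remove();          // don't leak context into reused pool threads
                    }
                };
            }

            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(2);
                REQUEST_CONTEXT.set("user-42");

                Future<String> result = pool.submit(withCallerContext(
                        () -> "processed on " + Thread.currentThread().getName()
                                + " for " + REQUEST_CONTEXT.get()));

                System.out.println(result.get());
                pool.shutdown();
            }
        }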

    Read the article

  • Problem in creating different types of columns in a Winforms gridview

    - by Royson
    My Windows Forms application has a grid view control with a filename as the default column. The user should be able to create columns of the following types: Text, Number, Currency, Combo Box, Check Box, Radio Button, DateTime (should display a DateTimePicker control), and HyperLink. After that, I want to pass all rows to the next screen for further processing. We can create columns of these types in a grid view, but how can I store them in a data table so that I can pass it to the next screen? Or should I create the columns in a data table and then assign the data table to the grid view with gridview.DataSource = dt? But can we create these types of columns in a data table?

    Read the article

  • paperclip callbacks or simple processor?

    - by holden
    I wanted to run the after_post_process callback, but it doesn't seem to work in Rails 3.0.1 using Paperclip 2.3.8. It gives an error:

        undefined method `_post_process_callbacks' for #<Class:0x102d55ea0>

    I want to call the Panda API after the file has been uploaded. I would have created my own processor for this, but since Panda handles the processing, can upload the files as well, and may stay queued for an undetermined duration, I thought a callback would do fine. But the callbacks don't seem to work in Rails 3.

        after_post_process :panda_create

        def panda_create
          video = Panda::Video.create(:source_url => mp3.url.gsub(/[?]\d*/,''),
                                      :profiles => "f4475446032025d7216226ad8987f8e9",
                                      :path_format => "blah/1234")
        end

    I tried require and include for Paperclip in my model, but it didn't seem to matter. Any ideas?

    Read the article

  • Cannot easy_install readline for Python 2.7.3 on Mac Os Lion

    - by user11170
    I am trying to install readline for Python 2.7.3, installed via Homebrew. If I type easy_install readline I get:

        Downloading http://pypi.python.org/packages/source/r/readline/readline-6.2.2.tar.gz#md5=ad9d4a5a3af37d31daf36ea917b08c77
        Processing readline-6.2.2.tar.gz
        Writing /var/folders/44/dhrdb5sx53s243j4w03063vh0000gn/T/easy_install-64FbG8/readline-6.2.2/setup.cfg
        Running readline-6.2.2/setup.py -q bdist_egg --dist-dir /var/folders/44/dhrdb5sx53s243j4w03063vh0000gn/T/easy_install-64FbG8/readline-6.2.2/egg-dist-tmp-NOmStB
        clang: error: no such file or directory: 'readline/libreadline.a'
        clang: error: no such file or directory: 'readline/libhistory.a'
        error: Setup script exited with error: command '/usr/bin/clang' failed with exit status 1

    Any ideas about how I could fix this? Thanks

    Read the article

  • Problem updating a database field from my controller

    - by ben
    I have an update method in my users controller that I call from an HTTPService in Flex 4. The update method is as follows:

        def updateName
          @user = User.find_by_email(params[:email])
          @user.name = params[:nameNew]
          render :nothing => true
        end

    This is the console output:

        Processing UsersController#updateName (for 127.0.0.1 at 2010-05-24 14:12:49) [POST]
          Parameters: {"action"=>"updateName", "nameNew"=>"ben", "controller"=>"users", "email"=>"[email protected]"}
          User Load (0.6ms)  SELECT * FROM "users" WHERE ("users"."email" = '[email protected]') LIMIT 1
        Completed in 20ms (View: 1, DB: 1) | 200 OK [http://localhost/users/updateName]

    But when I check my database, the name field is never updated. What am I doing wrong? Thanks for reading.

    Read the article

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database, and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            (
                [ContainerId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            ))

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted based on the primary key. The %Disk Time performance counter spends a lot of time at 100%, so it is clear that disk I/O is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I:

    * Drop the primary key while I am doing the inserting and recreate it later?
    * Do inserts into a temporary table with the same schema and periodically transfer them into the main table, to keep the size of the table where insertions are happening small?
    * Anything else?

    Based on the responses I have gotten, let me clarify a little bit:

    Portman: I'm using a clustered index because when the data is all imported I will need to access it sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for the import?

    Chopeen: The data is being generated remotely on many other machines (my SQL Server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output.

    Jason: I am not doing any concurrent queries against the table during the import process. I will try dropping the primary key and see if that helps.

    ~ Andrew

    Read the article

  • How do I do an AJAX post to a url within a class library but not the same IIS Web Application?

    - by Mark Adesina
    I have been working with AJAX and there have been no problems; below is what my AJAX post code looks like:

        $.ajax({
            type: "POST",
            url: '<%=ResolveUrl("TodoService.asmx/CreateNewToDo")%>',
            data: jsonData,
            contentType: "application/json; charset=utf-8",
            datatype: "json",
            success: function () {
                //if (msg.d) {
                $('#ContentPlaceHolder1_useridHiddenField').val("");
                $('#ContentPlaceHolder1_titleTextBox').val("");
                $('#ContentPlaceHolder1_destTextBox').val("");
                $('#ContentPlaceHolder1_duedateTextBox').val("");
                alert('Your todo has been saved');
                // }
            },
            error: function (msg) {
                alert('There was an error processing your request');
            }
        });

    However, the problem came up when I tried to get the URL of a web service that is located in a class library within the same solution.

    Read the article

  • Multiple video overlay - need advice

    - by Marvin
    Hi, I'm working on a project and I need some advice. Just some background: I'm not a programmer, though at times I do some fiddling, and I am generally comfortable with more specific terms. Now for the actual issue: I have a folder with 10 small videos (4-7 seconds max each) and I would like to display them full screen, looping and overlaid. I'm not too sure what I should be looking at; I thought maybe Processing, but my most serious issue is that I can't even ask for help properly, since I don't know what I need. Thank you for your time.

    Read the article

  • Stop applet execution on load, pause/resume using javascript?

    - by Zane
    I'm making something of a Java applet gallery for my website (Processing applets, if you're interested) and I'd like to keep the applets from running when the site first loads. Then, when the appropriate button is clicked, a piece of JavaScript would tell the applet to continue execution until another button is pressed to stop it. I know that I can use appletName.start() and appletName.stop(), but it doesn't seem to work on load, at least not well. I'm using element.getElementsById( "applet" ) to get the applets to call the start and stop methods on. It slows Firefox to a crawl for some reason.
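    For the JavaScript calls to have any visible effect, the applet itself has to suspend its work in stop() and resume it in start(), since the browser calls those same methods on load and unload. The skeleton below is my own sketch of that pattern with a plain java.applet.Applet; a Processing sketch manages this lifecycle and its animation thread for you, so treat this as an illustration of the idea rather than gallery-ready code:

        import java.applet.Applet;

        public class PausableApplet extends Applet {
            private volatile boolean running = false;
            private Thread worker;

            @Override
            public void start() {            // invoked by the browser, or from JavaScript via appletName.start()
                running = true;
                worker = new Thread(() -> {
                    while (running) {
                        repaint();           // stand-in for the applet's real per-frame work
                        try {
                            Thread.sleep(40);
                        } catch (InterruptedException e) {
                            return;
                        }
                    }
                });
                worker.start();
            }

            @Override
            public void stop() {             // invoked on page unload, or from JavaScript via appletName.stop()
                running = false;
                if (worker != null) {
                    worker.interrupt();
                }
            }
        }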

    Read the article

  • Tarballing without git metadata

    - by zaf
    My source tree contains several directories which are under git source control, and I need to tarball the whole tree, excluding any references to the git metadata or custom log files. I thought I'd have a go using a combination of find/egrep/xargs/tar, but somehow the tar file still contains the .git directories and the *.log files. This is what I have:

        find -type f . | egrep -v '\.git|\.log' | xargs tar rvf ~/app.tar

    Can someone explain my misunderstanding here? Why is tar processing the files that find and egrep are filtering out? I'm open to other techniques as well.

    Read the article

  • Minimum Hardware requirements for Android development

    - by vishwanath
    I need information about the minimum hardware requirements for a good experience developing Android applications. My current configuration is as follows: P4 3.0 GHz, 512 MB of RAM. I started with "Hello Android" development on my machine and the experience was sluggish; I was using Eclipse Helios for development. The emulator took a long time to start, and so did running the program. Do I need to upgrade my machine for development purposes, or is there something else I am missing on my machine (like heavy processing by some other application I might have installed)? And if I do need to upgrade, do I need to upgrade my processor too (which amounts to a new machine, something I am not in favor of), or will upgrading the RAM alone suffice?

    Read the article

  • Is Oracle AQ/Streams of any use in my situation?

    - by RenderIn
    I'm writing a workflow system that is driven entirely at each step by explicit human interaction. That is, a task is assigned to a person, that person selects from a few limited options {approve, reject, forward}, and then it is either sent along to the next person or terminated. I'm just curious whether Oracle Streams/AQ has anything to offer over flat tables managed by regular web application code. The amount of processing after each action is fairly limited and the volume is not terribly high, so there's not really a need to throttle things by putting them into a queue. What are some of the benefits of introducing a queue structure, or is it overkill for my situation?

    Read the article

  • Multiple merchant accounts with the ActiveMerchant gem

    - by sosborn
    I am developing a Rails site that will allow a group of merchants (5-10) to accept credit card orders online. I plan on using the ActiveMerchant gem to handle the processing. In this case, each merchant will have their own merchant account to handle the payments. Storing banking information like that is not something I am a fan of. This could be solved by queuing orders and allowing each merchant to log in to the site, input their credentials, and process the order. However, if I go that route, then it seems to me that I would have to store the customers' credit card information temporarily until the merchant has the opportunity to log in and process the order, which to me is the greater evil. Has anyone dealt with this situation? If so, what options are available and what pitfalls should I look out for? In my mind, securing customer credit card information is priority number one, with the merchant account information a close second.

    Read the article

  • Current state of client-side XSLT

    - by Casey
    Last I heard, Blizzard was one of the few companies to put client-side XSLT into practice (2008). Is this still the case in 2011, or are more people now exploring this technique in production? It seems that modern browsers (IE9, FF4, Chrome) and client processing power are primed to exploit this standard for tangible savings in server CPU power and bandwidth on large-scale properties. Am I missing something?

    The negative aspects I'm aware of include:

    * additional rendering time
    * additional assets required on an uncached page load
    * an additional layer of complexity
    * noticeably less developer experience than server-side template techniques

    The benefits I perceive include:

    * distributed template composition (offloaded to the client)
    * caching of common template fragments, offloaded to the client
    * logical separation of document structure and data
    * a well-documented web standard supported by all modern browsers

    Finally, although I know it's impossible to predict the future, I am curious to hear opinions on whether or not client-side XSLT's day will come. With interest in HTML5 driving users to upgrade their browsers and developers to explore new techniques, I would say yes. How about you? Thanks in advance, Casey

    Read the article

  • Detecting an image 404 in JavaScript

    - by xal
    After a user uploads a file, we have to do some additional processing with the images, such as resizing and uploading to S3. This can take up to 10 extra seconds. Obviously, we do this in the background. However, we want to show the user the result page immediately and simply show spinners in place until the images arrive in their permanent home on S3. I'm looking for a cross-browser way to detect that a certain image failed to load correctly (404). If that happens, we want to use JS to show a spinner in its place and reload the image every few seconds until it can be successfully loaded from S3.

    Read the article

  • How do you use scripting languages (PHP, Python, etc.) to improve your productivity?

    - by Edwin
    Hi, I'm a Delphi developer on the Windows platform. I recently read the PHP tutorial at W3Schools, and it looks interesting. We all know scripting languages are very good for web site development, but I also want to use them to improve my productivity or get tedious tasks done quickly -- maybe some quick-and-dirty string/file processing? What do you usually do with scripting languages apart from software development? And do we need a responsive, decent IDE/editor to be productive when writing scripts for this purpose? Thanks in advance!

    Read the article

  • Sql Server Maintenance Plan Tasks & Completion

    - by Ben
    Hi all, I have a maintenance plan that looks like this:

        Client 1: Import Data (Success) -> Process Data (Success) -> Post Process (Completion) -> Next Client
        Client 2: Import Data (Success) -> Process Data (Success) -> Post Process (Completion) -> Next Client
        Client N: ...

    Import Data and Process Data call jobs, and Post Process is an Execute SQL task. If Import Data or Process Data fail, the plan goes to the next client's Import Data. Both Import Data and Process Data are jobs that contain SSIS packages using the built-in SQL logging provider. My expectation with the configuration as it stands is:

        Client 1 Import Data runs:  Failure -> Client 2 Import Data | Success -> Process Data
        Process Data runs:          Failure -> Client 2 Import Data | Success -> Post Process
        Post Process runs:          Completion (Success or Failure) -> Next Client Import Data

    This isn't what I'm seeing in my logs, though... I see several Client Import Data SSIS log entries, then several Post Process log entries, then back to Client Import Data! Argh! What am I doing wrong? I didn't think the "success" piece of Client 1 Import Data would kick off until it... well... succeeded, aka finished! The logs seem to indicate otherwise, though. I really need these tasks to run consecutively, not concurrently. Is this possible? Thanks!

    Read the article

  • Can't start the portable version of NetBeans 6.9.1 IDE

    - by Coder
    I downloaded the portable version of NetBeans, netbeans-6.9.1-201007282301-ml.zip, from the NetBeans site and changed the config file etc/netbeans.conf as indicated on the NetBeans site. The file contents are below:

        # ${HOME} will be replaced by JVM user.home system property
        #netbeans_default_userdir="${HOME}/.netbeans/6.9"
        netbeans_default_userdir=".netbeans/6.9"

        # Options used by NetBeans launcher by default, can be overridden by explicit
        # command line switches:
        netbeans_default_options="-J-client -J-Xss2m -J-Xms32m -J-XX:PermSize=32m -J-XX:MaxPermSize=200m -J-Dapple.laf.useScreenMenuBar=true -J-Dapple.awt.graphics.UseQuartz=true -J-Dsun.java2d.noddraw=true"
        # Note that a default -Xmx is selected for you automatically.
        # You can find this value in var/log/messages.log file in your userdir.
        # The automatically selected value can be overridden by specifying -J-Xmx here
        # or on the command line.
        # If you specify the heap size (-Xmx) explicitely, you may also want to enable
        # Concurrent Mark & Sweep garbage collector. In such case add the following
        # options to the netbeans_default_options:
        # -J-XX:+UseConcMarkSweepGC -J-XX:+CMSClassUnloadingEnabled -J-XX:+CMSPermGenSweepingEnabled
        # (see http://wiki.netbeans.org/wiki/view/FaqGCPauses)

        # Default location of JDK, can be overridden by using --jdkhome <dir>:
        #netbeans_jdkhome="/path/to/jdk"
        netbeans_jdkhome="C:\Program Files\Java\jdk1.6.0_24\"

        # Additional module clusters, using ${path.separator} (';' on Windows or ':' on Unix):
        #netbeans_extraclusters="/absolute/path/to/cluster1:/absolute/path/to/cluster2"

        # If you have some problems with detect of proxy settings, you may want to enable
        # detect the proxy settings provided by JDK5 or higher.
        # In such case add -J-Djava.net.useSystemProxies=true to the netbeans_default_options.

    But it refuses to start when I try to run it. If I change the JDK path to something incorrect, it complains that it can't find the JDK, so I think the JDK path is correct. It also creates a .netbeans directory when I try to start it. I don't see any errors; it just doesn't do anything else observable. Does anybody know how to set up this version of NetBeans? Thanks.

    Read the article

< Previous Page | 181 182 183 184 185 186 187 188 189 190 191 192  | Next Page >