Search Results

Search found 2635 results on 106 pages for 'cheers and hth alf'.

Page 84/106 | < Previous Page | 80 81 82 83 84 85 86 87 88 89 90 91  | Next Page >

  • Jquery Form ajaxSubmit not submitting

    - by Gazler
    Hi, I am using the JQuery Form extension to submit a form with AJAX. I have the following code:

        var options = {
            beforeSubmit: showRequest,  // pre-submit callback
            success: showResponse,      // post-submit callback
            // other available options:
            //url: url          // override for form's 'action' attribute
            //type: 'post',     // 'get' or 'post', override for form's 'method' attribute
            //dataType: null    // 'xml', 'script', or 'json' (expected server response type)
            clearForm: true,    // clear all form fields after successful submit
            //resetForm: true   // reset the form after successful submit
            // $.ajax options can be used here too, for example:
            timeout: 3000
        };

        $('#composeForm').submit(function() {
            // inside event callbacks 'this' is the DOM element so we first
            // wrap it in a jQuery object and then invoke ajaxSubmit
            $(this).find(':disabled').removeAttr('disabled');
            $(this).ajaxSubmit(options);
            // !!! Important !!!
            // always return false to prevent standard browser submit and page navigation
            return false;
        });

    The problem is that the form doesn't appear to be submitting, or at least the success function is not being called. If I remove the return false, then the submission works, but the page navigates away. Is there a problem in my code that could be causing this? Cheers, Gazler.
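    A hedged sketch of one way to narrow this down: the jQuery Form plugin passes extra options straight through to $.ajax, so wiring up an error callback makes a failed, timed-out, or unparseable response visible instead of silently skipping success. showRequest/showResponse are the names from the question; everything else is standard plugin/$.ajax options.

        var options = {
            beforeSubmit: showRequest,
            success: showResponse,
            // assumption: an $.ajax-style error handler fires if the request fails,
            // exceeds the (quite tight) 3 second timeout, or the response cannot be
            // parsed as the expected dataType
            error: function(xhr, status, err) {
                console.log('ajaxSubmit failed: ' + status, err);
            },
            clearForm: true,
            timeout: 3000
        };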

    Read the article

  • DataTableReader is invalid for current DataTable 'TempTable'

    - by Sk93
    Hi, I'm getting the following error whenever my code creates a DataTableReader from a valid DataTable object: "DataTableReader is invalid for current DataTable 'TempTable'." The thing is, if I reboot my machine, it works fine for an undetermined amount of time, then dies with the above. The code that throws this error could have been working fine for hours and then - bang - you get this error. It's not limited to one line either; it's every single location where a DataTableReader is used. Also, this error does NOT occur on the production web server - ever. This is an example of one of the lines where it falls over: if I step over that line I get the error, yet if I run the same line in the immediate window I get no problems. Same goes if I actually use that line in the code. This has been driving me nuts for the best part of a week, and I've failed to find anything on Google that could help (as I'm pretty positive this isn't a coding issue). Some technical info:
    DEV Box: Vista 32bit (with all current Windows updates), Visual Studio 2008 v9.0.30729.1 SP, .NET Framework 3.5 SP1
    SQL Server: Microsoft SQL Server 2005 Standard Edition - 9.00.4035.00 (X64), Windows 2003 64bit (with all current Windows updates)
    Web Server: Windows 2003 64bit (with all current Windows updates)
    Any help, ideas, or advice would be greatly appreciated! Cheers, Ian
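    A minimal sketch of a defensive test, assuming (and this is only an assumption) that the root cause is the DataTable being modified - by another thread or a re-fill - while a DataTableReader is walking it. Reading from a private copy sidesteps that; it does not explain the machine-specific behaviour, but it can confirm or rule out the theory. tempTable stands in for the 'TempTable' instance from the question.

        using System.Data;

        DataTable snapshot = tempTable.Copy();              // private, stable copy
        using (DataTableReader reader = snapshot.CreateDataReader())
        {
            while (reader.Read())
            {
                // ... read columns exactly as before ...
            }
        }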

    Read the article

  • Which CSS combining technique?

    - by DotnetShadow
    Hi there, Which of the following would you say is the best way to go when combining files for CSS?

    Say I have a master.css file that is used across all pages on my website (page1.aspx, page2.aspx).
    Page1.aspx - a specific page that has some unique CSS that is only ever used on that page, so I create a page1.css; it also uses another stylesheet, grids.css.
    Page2.aspx - another specific page that is different from all other pages on the site and different from page1.aspx; I make a page2.css for it, and this one doesn't use grids.css.

    So would you combine the scripts as:

    Option 1: Combine per page - csshandler.axd?d=master.css,page1.css,grids.css when visiting page1, and csshandler.axd?d=master.css,page2.css when visiting page2.
    Benefits: page specific, rendering is quicker since only selectors for that page need to be matched up; no unused selectors.
    Drawback: multiple combinations of master.css + page-specific files, hence master.css has to be downloaded for each page.

    Option 2: Combine all scripts whether a page needs them or not - csshandler.axd?d=master.css,page1.css,page2.css,grids.css (master, page1 and page2) - that way it gets cached as one. The problem is that rendering may be slower since the browser will have to try to match EVERY selector in the CSS with selectors on the page, even the missing ones; so in the case of page2.aspx, which doesn't use grids.css, the selectors in grids.css will still need to be parsed to see if they are in page2, which means rendering will be slow.
    Benefits: one file is ever downloaded and cached, no matter what page you visit.
    Drawback: unused selectors will need to be parsed by the browser; slower rendering.

    Option 3: Leave the master file on its own and only combine the other scripts (the benefit being that because master is used across all pages there is a good chance it is already cached, so it doesn't keep being downloaded): csshandler.axd?d=Master.css and csshandler.axd?d=page1.css,grids.css.
    Benefits: master.css can be cached no matter what page you visit; not many unused selectors as the page-specific CSS is applied.
    Drawback: initially a minimum of 2 HTTP requests will have to be made.

    What do you guys think? Cheers DotnetShadow

    Read the article

  • Rails 3 - Category Filter Using Select - Load Partial Via Ajax

    - by fourfour
    Hey. I am trying to filter Client Comments by using a select and rendering the result in a partial. Right now the partial loads @client.comments. I have a Category model with a Categorizations join. This all works; I just need to know how to get the select to call the filter action and load the partial with Ajax (a sketch follows below). Thanks for your help.

    Categories controller:

        def filter_category
          @categories = Category.all
          respond_to do |format|
            format.js # filter.rjs
          end
        end

    filter.js.erb:

        page.replace_html 'client-not-inner', :partial => 'comments', :locals => { :com => Category.first.comments }

    show.html.erb (clients):

        <% form_tag(filter_category_path(:id), :method => :put, :class => 'categories', :remote => true, :controller => 'categoires', :action => 'filter') do %>
          <label>Categories</label>
          <%= select_tag(:category, options_for_select(Category.all.map {|category| [category.name, category.id]}, @category_id)) %>
        <% end %>

        <div class="client-note-inner">
          <%= render :partial => 'comments', :locals => { :com => @comments } %>
        </div><!--end client-note-inner-->

    Hope that makes sense. Cheers.
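    A hedged sketch of the missing wiring, assuming the form submits remotely via Rails UJS (:remote => true) and that filter.js.erb should use the chosen category rather than Category.first. The selector names match the markup in the question; adjust them if your generated ids differ.

        // application.js (or an inline script on show.html.erb)
        $(function () {
          $('#category').change(function () {
            // Submitting the surrounding form fires the UJS-driven AJAX request
            // to filter_category_path, which responds with filter.js.erb.
            $(this).closest('form.categories').submit();
          });
        });

    Inside filter.js.erb the selected id then arrives as params[:category], so the partial can be rendered with Category.find(params[:category]).comments instead of Category.first.comments.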

    Read the article

  • Trying to save file from Flash to PHP using $GLOBALS["HTTP_RAW_POST_DATA"]

    - by jolyonruss
    Let me start by saying PHP isn't my forte; I'm usually reluctant to try working with it because of problems exactly like this. The code works fine on my local machine under MAMP and on my server, but doesn't on the client's server :'( So what am I trying to do? Well - save an image from Flash onto the server, simple right?! I'm using the method described on this site here: http://designreviver.com/tutorials/actionscript-3-jpeg-encoder-revealed-saving-images-from-flash/ but have made a small alteration so that instead of echoing the jpg (causing the browser to download it locally), I do an fwrite and an fclose to save it to the server. Here is my PHP:

        $imageFile = '../images/' . $_GET['name'];
        $imageHandle = fopen($imageFile, "w");
        fwrite($imageHandle, $jpg);
        fclose($imageHandle);
        }
        ?>

    I've done a phpinfo() on my client's server and it's running 5.2.2; my host is running 5.2.11. I don't know if much can have changed in those 9 minor revisions? I've also read another question on here which suggests making sure always_populate_raw_post_data is set to ON, but it's set to OFF on all of the server environments I've been testing in. I'm doing some XML saving using file_get_contents('php://input'), which I've tried but failed to get working with images. Any help would be gratefully received; I'm happy to post the AS3 as well but it's EXACTLY the same as the example I've linked above and works locally. As far as I can tell the problem lies with the PHP. Cheers.
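    A hedged sketch of an alternative, assuming the raw JPEG bytes arrive in the POST body as in the linked tutorial: reading php://input works for binary data too and does not depend on always_populate_raw_post_data, so it behaves the same whether that setting is On or Off. The basename() call is an extra assumption, added only to keep $_GET['name'] from escaping the images directory.

        <?php
        // Read the raw request body instead of $GLOBALS["HTTP_RAW_POST_DATA"].
        $jpg = file_get_contents('php://input');

        if ($jpg !== false && strlen($jpg) > 0) {
            $imageFile = '../images/' . basename($_GET['name']);
            file_put_contents($imageFile, $jpg);   // equivalent to fopen/fwrite/fclose
        }
        ?>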

    Read the article

  • Remote JMS connection still using localhost

    - by James
    I have created a JMS Connection Factory on a remote GlassFish server and want to use that server from a Java client app on my local machine. I have the following configuration to get the context and connection factory:

        Properties env = new Properties();
        env.setProperty("java.naming.factory.initial", "com.sun.enterprise.naming.SerialInitContextFactory");
        env.setProperty("java.naming.factory.url.pkgs", "com.sun.enterprise.naming");
        env.setProperty("java.naming.factory.state", "com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl");
        env.setProperty("org.omg.CORBA.ORBInitialHost", JMS_SERVER_NAME);
        env.setProperty("org.omg.CORBA.ORBInitialPort", "3700");
        initialContext = new InitialContext(env);
        TopicConnectionFactory topicConnectionFactory = (TopicConnectionFactory) initialContext.lookup("jms/MyConnectionFactory");
        topicConnection = topicConnectionFactory.createTopicConnection();
        topicConnection.start();

    This seems to work, and when I delete the ConnectionFactory from the GlassFish server I get an exception indicating that it can't find jms/MyConnectionFactory, as expected. However, when I subsequently use my topicConnection to get a topic it tries to connect to localhost:7676 (this fails as I am not running GlassFish locally). If I dynamically create a topic:

        TopicSession pubSession = topicConnection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = pubSession.createTopic(topicName);
        TopicPublisher publisher = pubSession.createPublisher(topic);
        Message mapMessage = pubSession.createTextMessage(message);
        publisher.publish(mapMessage);

    and the GlassFish server is not running locally, I get the same connection refused; however, if I start my local GlassFish server the topics are created locally and I can see them in the GlassFish admin console. In case you ask, I do not have jms/MyConnectionFactory on my local GlassFish instance; it is only available on the remote server. I can't see what I am doing wrong here and why it is trying to use localhost at all. Any ideas? Cheers, James

    Read the article

  • javascript - catch SyntaxError and run alternate function

    - by ludicco
    Hello there, I'm trying to build something in JavaScript where the input can be anything: a string, XML, JavaScript, or a non-JavaScript string without quotes, as follows:

        //strings
        eval("'hello I am a string'"); /* note the following proper quote marks */
        //xml
        eval(<p>Hello I am a XML doc</p>);
        //javascript
        eval("var hello = 2+2;");

    These first 3 work well since they are simple JavaScript-native formats, but when I try to use this inside JavaScript:

        //plain-text without quotes
        eval("hello I am a plain text without quotes");
        //--SyntaxError: missing ; before statement:--//

    Obviously JavaScript interprets this as JavaScript and throws a SyntaxError. So what I would like to do is catch this error and perform the adjustment method when it occurs. I've already tried try/catch but it doesn't work; it keeps returning the SyntaxError as soon as it tries to execute the code. Any help would be much appreciated. Cheers :) Additional information: imagine an external file that JavaScript would read, using SpiderMonkey, so it's non-browser stuff (I can't use HttpRequest, DOM, etc...). Not sure if this matters, but there it is. :)
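    For what it's worth, a SyntaxError raised while eval() parses its string argument is normally catchable by a try/catch around the eval call itself; it is only uncatchable when the syntax error is in the enclosing script. A minimal sketch, with evalOrPlainText as a hypothetical name for the adjustment wrapper:

        function evalOrPlainText(input) {
          try {
            return eval(input);
          } catch (e) {
            if (e instanceof SyntaxError) {
              // The input was not valid JavaScript: fall back to treating it
              // as a bare, unquoted string.
              return String(input);
            }
            throw e; // a genuine runtime error in valid code: re-raise it
          }
        }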

    Read the article

  • Login to website using PHP and get text from page

    - by Anthony Garand
    I am trying to login to a website and grab content from a page you must be authenticated to see. I have done some research and have seen some examples using both cURL and stream_context_create but I cannot get either way to work. I have the url for the page to login to, and the page that contains the data I need to get. Your help is much appreciated! Here's what I'm working with:

        <?php
        $pages = array('home'  => 'https://www.53.com/wps/portal/personal',
                       'login' => 'https://www.53.com/wps/portal/personal',
                       'data'  => 'https://www.53.com/servlet/efsonline/index.html?Messages.SortedBy=DATE,REVERSE');

        $ch = curl_init();

        //Set options for curl session
        $options = array(CURLOPT_USERAGENT => 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)',
                         CURLOPT_SSL_VERIFYPEER => FALSE,
                         CURLOPT_SSL_VERIFYHOST => 2,
                         CURLOPT_HEADER => TRUE,
                         //CURLOPT_RETURNTRANSFER => TRUE,
                         CURLOPT_COOKIEFILE => 'cookie.txt',
                         CURLOPT_COOKIEJAR => 'cookies.txt');

        //Hit home page for session cookie
        $options[CURLOPT_URL] = $pages['home'];
        curl_setopt_array($ch, $options);
        curl_exec($ch);

        //Login
        $options[CURLOPT_URL] = $pages['login'];
        $options[CURLOPT_POST] = TRUE;
        $options[CURLOPT_POSTFIELDS] = 'uid-input=xxx&pw=xxx';
        $options[CURLOPT_FOLLOWLOCATION] = FALSE;
        curl_setopt_array($ch, $options);
        curl_exec($ch);

        //Hit data page
        $options[CURLOPT_URL] = $pages['data'];
        curl_setopt_array($ch, $options);
        $data = curl_exec($ch);

        //Output data
        echo $data;

        //Close curl session
        curl_close($ch);
        ?>

    Cheers, Anthony
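    A hedged guess at why $data may come back empty or incomplete: CURLOPT_RETURNTRANSFER is commented out (so curl_exec() echoes rather than returns), and a successful login on many sites answers with a redirect that is never followed. A small sketch of the adjustment; whether the login POST itself should follow redirects depends on the site:

        // Before the login/data requests:
        $options[CURLOPT_RETURNTRANSFER] = TRUE;   // make curl_exec() return the body as a string
        $options[CURLOPT_FOLLOWLOCATION] = TRUE;   // follow the post-login redirect (keep FALSE if you need to inspect the Set-Cookie response first)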

    Read the article

  • How to create and track multiple "View-ViewModel" pairs?

    - by Gianluca Colucci
    Hi! I am building an application that is based on MVVM-Light. I am in the need of creating multiple instances of the same View, and each one should bind to its own ViewModel. The default ViewModelLocator implements ViewModels as singletons, therefore different instances of the same View will bind to the same ViewModel. I could create the ViewModel in the VMLocator as a non-static object (as simple as returning new VM()...), but that would only partially help me. In fact, I still need to keep track of the opened windows. Nevertheless, each window might open several other windows (of a different kind, though). In this situation I might need to execute some operation on the parent View and all its children. For example, before closing the View P, I might want to close all its children (view C1, view C2, etc.). Hence, is there any simple and easy way to achieve this? Or is there any best practice you would advise me to follow? Thanks in advance for your precious help. Cheers, Gianluca.
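    A minimal sketch of one way to do it (plain C#, not MVVM-Light specific; all names here are hypothetical): skip the singleton locator for these views, create a fresh ViewModel per window, and record parent/child links so that closing a parent can first close its children.

        using System.Collections.Generic;

        public class ViewModelTracker
        {
            // parent ViewModel -> its child ViewModels
            private readonly Dictionary<object, List<object>> _children =
                new Dictionary<object, List<object>>();

            // Pass null as parent for a top-level window.
            public TViewModel Create<TViewModel>(object parent) where TViewModel : new()
            {
                var vm = new TViewModel();               // one ViewModel per View instance
                if (parent != null)
                {
                    if (!_children.ContainsKey(parent))
                        _children[parent] = new List<object>();
                    _children[parent].Add(vm);
                }
                return vm;
            }

            public IList<object> ChildrenOf(object parent)
            {
                return _children.ContainsKey(parent) ? _children[parent] : new List<object>();
            }
        }

    Before closing View P's ViewModel you would walk ChildrenOf(P) and ask each child to close first; how the close request reaches the View (Messenger, event, attached behaviour) is a separate choice.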

    Read the article

  • SQL Server 2008 Running trigger after Insert, Update locks original table

    - by Polity
    Hi Folks, I have a serious performance problem. I have a database with (related to this problem) 2 tables. One table contains strings with some global information. The second table contains the strings stripped down to each individual word, so each string is effectively indexed in the second table, word by word. The validity of the data in the second table is less important than the validity of the data in the first table. Since the first table can grow towards 1*10^6 records, and the second table, having an average of 10 words per string, can grow towards 1*10^7 records, I use a nolock in order to read the second table; this leaves me free to insert new records without locking it (expect many reads on both tables). I have a script which keeps adding and updating rows in the first table in a MERGE statement. On average, the data being merged is about 20 strings at a time, and the script runs about once every 5 seconds. On the first table, I have a trigger which is invoked on an Insert or Update, which takes the newly inserted or updated data and calls a stored procedure on it which makes sure the data is indexed in the second table (this takes some significant time). The problem is that with the trigger disabled, reading the first table takes a few ms. However, when the trigger is enabled and you're unlucky enough to try to read the first table while it is being updated, our webserver gives you a timeout after 10 seconds (which is way too long anyway). I can guess from this that when running the trigger, the first table is kept (partially) locked until the trigger has completed. What do you think - if I'm right, is there an easy way around this? Thanks in advance! Cheers, Koen
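    One commonly suggested mitigation, under the assumption that readers currently run at the default READ COMMITTED level and are blocked by the MERGE + trigger transaction, is row versioning, so SELECTs see the last committed version instead of waiting on the writer's locks. A sketch (YourDb is a placeholder; the statement needs a moment of exclusive access to the database, so test it off production first):

        -- Let readers of the first table see the last committed row versions
        -- instead of blocking behind the open MERGE/trigger transaction.
        ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON;

    The other common direction is to make the trigger cheap: have it only queue the changed keys into a small staging table and let a background job do the word-splitting, so the lock on the first table is held only briefly.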

    Read the article

  • ntpd on Fedora Core 6 with high negative time reset values

    - by Mark White
    The basic problem is we have an FC6 server instance running on a virtual machine, and the system time seems to have been slowly drifting until it is now causing a problem. The server runs 24/7 and has been up for 155 days. It has been changed to show GMT, and reports the time as (for example) 00:15:15 GMT whereas the actual time is 00:00:00 GMT. This is an offset of 915 seconds. selinux has been changed to 'setenforce 0' for testing and I am running as root. I stop the ntpd service and change the time in System|Administration|Date & Time. The time still shows the same with 'date' in bash. There are no error logs. I change the date with 'date --set' in bash. The response confirms the changed date. I run 'date' and the incorrect date is shown. There are no error logs. I start the ntpd service and /var/log/messages shows success with 'time reset -915.720139s'. The date remains unchanged. ntpq -p shows three time servers, all with offsets of around -915 seconds. I stop the ntpd service and try 'ntpd -gqx' and get the same result as above - success, but a large negative time reset. I've tried varying combinations of the above, and a few more settings in System|Administration|Date & Time - no change. I just need to reset the system time to GMT. No offset. But I can't wait for ntpd to slew the time over the next few weeks. Any advice is welcome, cheers! Surely this shouldn't be this difficult... Mark...
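    For what it's worth, a sketch of the usual way to force a one-off step on an old Red Hat-style box, assuming an NTP server is reachable (0.pool.ntp.org below is just a stand-in for one of your three configured servers). The point is to step the clock while ntpd is stopped, then let ntpd keep it in sync afterwards:

        service ntpd stop
        ntpdate -b -u 0.pool.ntp.org   # -b steps the clock instead of slewing it
        hwclock --systohc              # copy the corrected system time to the hardware clock
        service ntpd start

    If 'date' still reports the old time even after ntpdate claims success, the guest clock is probably being overwritten by the virtualisation host, and the fix belongs in the VM's time-sync settings rather than in ntpd.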

    Read the article

  • Enable MEX in a Web.Config

    - by Conor
    Hi Folks, How do I enable/create a MEX endpoint in the below web config so I can view the service from my browser? I have tried a few variations from googling but VS always complains about them (not a valid child element etc...).

        <configuration>
          <system.web>
            <compilation debug="true" targetFramework="4.0" />
          </system.web>
          <system.serviceModel>
            <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>
            <services>
              <service name="MyApp.MyService" behaviorConfiguration="WebServiceBehavior">
                <endpoint address="" binding="webHttpBinding" contract="MyApp.IMyService" behaviorConfiguration="JsonBehavior">
                  <identity>
                    <dns value="localhost"/>
                  </identity>
                </endpoint>
              </service>
            </services>
            <behaviors>
              <endpointBehaviors>
                <behavior name="JsonBehavior">
                  <webHttp/>
                </behavior>
              </endpointBehaviors>
            </behaviors>
          </system.serviceModel>
        </configuration>

    Cheers, Conor
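    A hedged sketch of the usual shape: metadata needs a serviceBehaviors entry (the config above only defines endpointBehaviors, yet the service references a behaviorConfiguration named WebServiceBehavior), plus a mex endpoint inside the <service> element. Element placement is what VS tends to complain about, so the comments note where each piece goes:

        <behaviors>
          <serviceBehaviors>
            <behavior name="WebServiceBehavior">
              <serviceMetadata httpGetEnabled="true" />   <!-- enables ?wsdl in the browser -->
            </behavior>
          </serviceBehaviors>
          <endpointBehaviors>
            <behavior name="JsonBehavior">
              <webHttp/>
            </behavior>
          </endpointBehaviors>
        </behaviors>

        <!-- inside <service name="MyApp.MyService">, next to the existing endpoint -->
        <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />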

    Read the article

  • Hibernate - query caching/second level cache does not work for a value object containing subitems

    - by Zoltan Hamori
    Hi! I have been struggling with the following problem: I have a value object containing different panels. Each panel has a list of fields. Mapping:

        <class name="com.aviseurope.core.application.RACountryPanels" table="CTRY" schema="DBDEV1A" where="PEARL_CTRY='Y'" lazy="join">
          <cache usage="read-only"/>
          <id name="ctryCode">
            <column name="CTRY_CD_ID" sql-type="VARCHAR2(2)" not-null="true"/>
          </id>
          <bag name="panelPE" table="RA_COUNTRY_MAPPING" fetch="join" where="MANDATORY_FLAG!='N'">
            <key column="COUNTRY_LOCATION_ID"/>
            <many-to-many class="com.aviseurope.core.application.RAFieldVO" column="RA_FIELD_MID" where="PANEL_ID='PE'"/>
          </bag>
        </class>

    I use the following criteria to get the value object:

        Session m_Session = HibernateUtil.currentSession();
        m_Criteria = m_Session.createCriteria(RACountryPanels.class);
        m_Criteria.add(Expression.eq("ctryCode", p_Country));
        m_Criteria.setCacheable(true);

    As I see it, the query cache contains only the main select, like select * from CTRY where ctry_cd_id=? Both RACountryPanels and RAFieldVO are second-level cached. If I check the second-level cache content I can see that it contains the RAFields and the RACountryPanels as well, and I can see the select .. from CTRY where ctry_cd_id=... in the query cache region as well. When I call the servlet it seems that it is using the cache, but the second time it does not. If I check the content of the cache using JMX, everything seems to be OK, but when I measure the object access time, it seems that it does not always use the cache. Cheers Zoltan

    Read the article

  • API Wrapper Architecture Best Practice

    - by Adam Taylor
    Hi, So I'm writing a Perl wrapper module around a REST webservice and I'm hoping for some advice on how best to architect the module. I've been looking at a couple of different Perl modules for inspiration. Flickr::Simple2 - this is basically one big file with methods wrapping around the different methods in the Flickr API, e.g. getPhotos() etc. Flickr::API - this is a subclass of another module (LWP) for making HTTP requests, so basically it just allows you to make calls through the module, using LWP, that go to the correct API method/URL without defining any wrapper methods itself. (That's explained pretty poorly, but basically it has a method that takes an argument - an API method name - and constructs the correct API call, e.g. request() / response().) An alternative design would be like the first described, but less monolithic, with separate classes for separate "areas" of the API. I'd like to follow modern/best-practice Perl methods, so I'm using Dist::Zilla to build the module and Moose for the OO stuff, but I'd appreciate some input on how to actually design/architect my wrapper. Guides/tutorials or pointers to other well designed modules would be appreciated. Cheers
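    A rough sketch of the middle ground (hedged: the package and method names here are invented for illustration): a thin client class that knows how to call the service, plus small per-area subclasses that expose friendly wrapper methods. This keeps the Flickr::API-style generic request() while still offering Flickr::Simple2-style convenience calls.

        package My::Service::Client;
        use Moose;
        use LWP::UserAgent;
        use JSON qw(decode_json);

        has base_url => (is => 'ro', isa => 'Str', required => 1);
        has ua       => (is => 'ro', isa => 'LWP::UserAgent',
                         lazy => 1, default => sub { LWP::UserAgent->new });

        # Generic call: every higher-level wrapper funnels through here.
        sub request {
            my ($self, $path) = @_;
            my $res = $self->ua->get($self->base_url . $path);
            die $res->status_line unless $res->is_success;
            return decode_json($res->decoded_content);
        }

        package My::Service::Client::Photos;
        use Moose;
        extends 'My::Service::Client';

        # One thin, named wrapper per API method for this "area".
        sub get_photos {
            my ($self, %args) = @_;
            return $self->request('/photos?user=' . ($args{user} // ''));
        }

        1;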

    Read the article

  • jQuery Autocomplete plugin (Jorn Zaefferer's) - how to dynamically change the list of displayed values

    - by Max Williams
    I'm using Jorn Zaefferer's Autocomplete plugin, http://bassistance.de/jquery-plugins/jquery-plugin-autocomplete/ I have options set so it shows all the values when you click in the empty text field, a bit like a select, and the option is also set so that the user can only choose from the list of values used by the autocomplete (so it's kind of like a select but with autocomplete functionality). I have two radio buttons below the text field, which determine whether the user chooses from a long list or a short list of possible values. I want to update the values used in the autocomplete when one of these radio buttons is clicked. Currently I'm doing this in a not very clever way by calling autocomplete again on the same text field, with the different array of values, but this creates a situation where both are active at once, and I can see the long list peeking out from behind the short list. What I need to do is either a) dynamically change the values used in the autocomplete or b) remove (unbind?) the autocomplete from the text field before re-initialising it. Either of these would do, tbh, though option a) is kind of nicer. Any ideas anyone? Here's my current code:

        function initSubjectLongShortList(field, short_values, long_values){
          $(".subject_short_long_list").change(function(){
            updateSubjectAutocomplete(field, short_values, long_values);
          });
          updateSubjectAutocomplete(field, short_values, long_values);
        }

        function updateSubjectAutocomplete(field, short_values, long_values){
          if($(".subject_short_long_list:checked").attr('id') == "subject_long_list"){
            initSubjectAutocomplete(field, long_values);
          } else {
            initSubjectAutocomplete(field, short_values);
          }
        }

        function initSubjectAutocomplete(field, values){
          jQuery(field).autocomplete(values, {
            minChars: 0, //make it appear as soon as we click in the field
            max: 2000,
            scrollHeight: 400,
            matchContains: true,
            selectFirst: false
          });
        }

    cheers, max
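    For what it's worth, a hedged sketch of option (b). Most copies of this plugin define an unautocomplete() method for tearing an instance down (and later releases also expose setOptions() and flushCache(), which would cover option (a)); check that your copy actually has them before relying on this.

        function initSubjectAutocomplete(field, values){
          // Drop the previous autocomplete instance, if any, before re-initialising.
          // unautocomplete() is an assumption about your plugin version.
          jQuery(field).unautocomplete();
          jQuery(field).autocomplete(values, {
            minChars: 0,
            max: 2000,
            scrollHeight: 400,
            matchContains: true,
            selectFirst: false
          });
        }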

    Read the article

  • Spell checker software

    - by Naren
    Hello guys, I have been assigned a task to find a decent spell checker (UK English), preferably a free one, for a project that we are doing. I have looked at the Google AJAX API for this. The project contains some young people's (kids less than 18 years old) data which shouldn't be exposed or stored outside the application boundaries. Google logs the data for research purposes, which means Google owns whatever data we send over the wire through the Google API. Is this right? I fired an email to Google regarding the privacy of data and storage but they haven't come back. If you have some knowledge regarding this please share it with me. At this point our servers might not have access to external entities, which means we might not be able to use a web API over the wire. But that may change in the future. That means I have to find some spell checker alternatives that can sit in our environment and do the job, or an external API. Would you mind sharing your findings and knowledge in this regard? I would prefer free services, but you never know - if you have some cracking spell checker for a few quid then I don't mind recommending it to the project board. Technology used: ASP.NET 3.5/4.0, MVC, jQuery, SQL Server 2008 etc. Cheers, Naren

    Read the article

  • Asynchronously populate datagridview in Windows Forms application

    - by dcryptd
    Howzit! I'm a web developer that has been recently requested to develop a Windows Forms application, so please bear with me (or don't laugh!) if my question is a bit elementary. After many sessions with my client, we eventually decided on an interface that contains a tab control with 5 tabs. Each tab has a DataGridView that may eventually hold up to 25,000 rows of data (with about 6 columns each). I have successfully managed to bind the grids when the tab page is loaded and it works fine for a few records, but the UI froze when I bound the grid with 20,000 dummy records. The "freeze" occurs when I click on the tab itself, and the UI only frees up (and the tab page is rendered) once the bind is complete. I communicated this to the client and mentioned the option of paging for each grid, but she is adamant about NOT wanting this. My only option then is to look for some asynchronous method of doing this in the background. I don't know much about threading in Windows Forms, but I know that I can use the BackgroundWorker control to achieve this. My only issue after reading up a bit on it is that it is ideally used for "long-running" tasks and I/O operations. My questions:
    1. How does one determine a long-running task?
    2. How does one NOT MISUSE the BackgroundWorker control, i.e. is there a general guideline to follow when using it? (I understand that opening/spawning multiple threads may be undesirable in certain instances.)
    3. Most importantly: how can I achieve (asynchronously) binding the DataGridView after the tab page - and all its child controls - loads?
    Thank you for reading this (ahem) lengthy query, and I highly appreciate any responses/thoughts/directions on this matter! Cheers!
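    A minimal sketch of the usual BackgroundWorker shape for exactly this case (LoadRows and dataGridView1 are placeholder names): the data is built off the UI thread, and the grid is only touched in RunWorkerCompleted, which runs back on the UI thread. Binding 20,000 ready-made rows is quick; it's building them that deserves the worker.

        using System;
        using System.ComponentModel;
        using System.Windows.Forms;

        private void tabPage1_Enter(object sender, EventArgs e)   // or wherever the load is triggered
        {
            var worker = new BackgroundWorker();

            // Runs on a thread-pool thread: build the data, don't touch controls here.
            worker.DoWork += (s, args) => args.Result = LoadRows();   // e.g. returns a DataTable

            // Runs back on the UI thread: safe to bind the grid.
            worker.RunWorkerCompleted += (s, args) =>
            {
                if (args.Error == null)
                    dataGridView1.DataSource = args.Result;
            };

            worker.RunWorkerAsync();
        }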

    Read the article

  • Java - SAX parser on an XHTML document

    - by Peter
    Hey, I'm trying to write a SAX parser for an XHTML document that I download from the web. At first I was having a problem with the doctype declaration (I found out from here that it was because W3C have intentionally blocked access to the DTD), but I fixed that with:

        XMLReader reader = parser.getXMLReader();
        reader.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);

    However, now I'm experiencing a second problem. The SAX parser throws an exception when it reaches some Javascript embedded in the XHTML document:

        <script type="text/javascript" language="JavaScript">
        function checkForm() {
            answer = true;
            if (siw && siw.selectingSomething)
                answer = false;
            return answer;
        }//
        </script>

    Specifically the parser throws an error once it reaches the &&'s, as it's expecting an entity reference. The exact exception is:

        org.xml.sax.SAXParseException: The entity name must immediately follow the '&' in the entity reference.
            at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:198)
            at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:177)
            at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:391)
            at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(XMLScanner.java:1390)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEntityReference(XMLDocumentFragmentScannerImpl.java:1814)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:3000)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:624)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:486)
            at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:810)
            at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:740)
            at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:110)
            at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1208)
            at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:525)
            at MLIAParser.readPage(MLIAParser.java:55)
            at MLIAParser.main(MLIAParser.java:75)

    I suspect (but I don't know) that if I hadn't disabled the DTD then I wouldn't get this error. So, how can I avoid the DTD error and avoid the entity reference error? Cheers, Pete
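    For what it's worth, a hedged sketch of an alternative to disallow-doctype-decl: keep the doctype but resolve it to an empty stream, so the parser never goes out to w3.org. Note, though, that a raw && inside <script> means the page is not well-formed XML in the first place, so the entity-reference error is strictly the document's fault; if you cannot change the source, an HTML-tolerant front-end feeding SAX events (TagSoup, NekoHTML or similar) is the usual escape hatch rather than plain Xerces.

        import java.io.StringReader;
        import org.xml.sax.EntityResolver;
        import org.xml.sax.InputSource;

        // reader is the XMLReader from the question
        reader.setEntityResolver(new EntityResolver() {
            public InputSource resolveEntity(String publicId, String systemId) {
                // Skip fetching the DTD without forbidding the doctype declaration.
                return new InputSource(new StringReader(""));
            }
        });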

    Read the article

  • Which CEP product to start with?

    - by Andreas
    Hi, I want to learn more about how to build CEP-based applications. So I looked around and found several products (overview found here: http://rulecore.com/CEPblog/?page_id=47). But as there are quite a few at the moment, I don't know which is the best to start with, and overall I would only consider the ones available for free - the rest are a bit too expensive for just private use ;) Esper is free, but without Esper Studio it seems quite tedious to develop a CEP app. StreamBase offers a free trial, but I couldn't find out how long you can use it (if only for a month, that's not that helpful for longer research). The Oracle CEP suite seems quite complete, but in the CEP scene - as far as I can see - it is the least recognized compared to Esper or StreamBase. So do you have any hints on what is the best way to start with CEP development? Is it worth spending time working through the Oracle documentation or is it better to start with Esper or StreamBase? Cheers, Andreas

    Read the article

  • Inheritance mapping with Fluent NHibernate

    - by Berryl
    Below is an example of how I currently use automapping overrides to set up my DB representation of inheritance. It gets the job done functionality-wise BUT by using some internal default values. For example, the discriminator column name winds up being the literal value 'discriminator' instead of "ActivityType", and the discriminator values are the fully qualified type of each class, instead of "ACCOUNT" and "PROJECT". I am guessing that this is a bug that doesn't get much attention now that conventions are preferred, and that the convention approach works correctly. I am looking for a sample of usage. Cheers, Berryl

        public class ActivityBaseMap : IAutoMappingOverride<ActivityBase>
        {
            public void Override(AutoMapping<ActivityBase> mapping)
            {
                ...
                mapping.DiscriminateSubClassesOnColumn("ActivityType");
            }
        }

        public class AccountingActivityMap : SubclassMap<AccountingActivity>
        {
            public AccountingActivityMap()
            {
                ...
                DiscriminatorValue("ACCOUNT");
            }
        }

        public class ProjectActivityMap : SubclassMap<ProjectActivity>
        {
            public ProjectActivityMap()
            {
                ...
                DiscriminatorValue("PROJECT");
            }
        }

    Read the article

  • iPad + OpenGL ES2. Why the Puzzling Virtual Memory Spike During Device Reorientation?

    - by dugla
    I've been spending the afternoon staring at the Xcode Instruments memory monitor trying to decipher the following memory issue. I have a fullscreen OpenGL ES2 app running on iPad. I am fanatical about memory issues so my retains/releases are all nicely balanced. I closely monitor memory leaks. My app is basically squeaky clean. Except occasionally when I reorient the device. Portrait to landscape, back and forth, I rock the device, stress-testing my discarding and rebuilding of the OpenGL framebuffer. The ambient memory footprint of my app is about 70MB real memory and 180MB virtual memory. Real memory hardly varies at all during device rotations. However the virtual memory reading sometimes briefly spikes up to 250MB and then recedes back to 180MB. No real pattern, but clearly related to discarding/rebuilding the framebuffer. I see random memory warnings in my NSLogs but the app just hums along, no worries.
    1) Since iPhone OS devices don't have VM, could someone explain to me what the VM reading actually means?
    2) My app is totally leak free and generally bulletproof despite the VM spikes. It never crashes - rock solid. Should I be concerned about this?
    3) There is clearly something happening in OpenGL framebuffer land that is causing this, but I am using the API in the proper way, paraphrasing:

        Discarding the framebuffer:
        glDeleteRenderbuffers(1, &m_colorbuffer);
        glDeleteFramebuffers(1, &m_framebuffer);

        Rebuilding the framebuffer:
        glGenFramebuffers(1, &m_framebuffer);
        glGenRenderbuffers(1, &m_colorbuffer);

    Is there some other memory flushing trick I have missed? Thanks for any insight. Cheers, Doug

    Read the article

  • ASP.NET MVC - Extending the Authorize Attribute

    - by Mad Halfling
    Hi folks, currently I use [Authorize(Roles = ".....")] to secure my controller actions on my ASP.NET MVC 1 app, and this works fine. However, certain search views need to have buttons that route to these actions that need to be enabled/disabled based on the record selected on the search list, and also the security privs of the user logged in. Therefore I think I need to have a class accessing a DB table which cross-references these target controller/actions with application roles to determine the state of these buttons. This will, obviously, make things messy as privs will need to be maintained in 2 places - in that class/DB table and also on the controller actions (plus, if I want to change the access to the action I will have to change the code and compile rather than just change a DB table entry). Ideally I would like to extend the [Authorize] functionality so that instead of having to specify the roles in the [Authorize] code, it will query the security class based on the user, controller and action and that will then return a boolean allowing or denying access. Are there any good articles on this - I can't imagine it's an unusual thing to want to do, but I seem to be struggling to find anything on how to do it (could be Monday-morning brain). I've started some code doing this, looking at article http://schotime.net/blog/index.php/2009/02/17/custom-authorization-with-aspnet-mvc/ , and it seems to be starting off ok but I can't find the "correct" way to get the calling controller and action values from the httpContext - I could possibly fudge a bit of code to extract them from the request url, but that doesn't seem right to me and I'd rather do it properly. Cheers MH
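    A hedged sketch of the route-data approach, assuming the goal is a single DB-backed check shared by the attribute and the search-screen button logic. SecurityService.IsAllowed is hypothetical - it stands for whatever class reads your cross-reference table - while the rest uses only stock MVC/Routing calls, so the controller and action come from routing rather than from picking apart the URL.

        using System.Web;
        using System.Web.Mvc;
        using System.Web.Routing;

        public class DbAuthorizeAttribute : AuthorizeAttribute
        {
            protected override bool AuthorizeCore(HttpContextBase httpContext)
            {
                // Resolve the target controller/action from the routing table.
                RouteData routeData = RouteTable.Routes.GetRouteData(httpContext);
                if (routeData == null)
                    return false;

                string controller = (string)routeData.Values["controller"];
                string action = (string)routeData.Values["action"];

                // Hypothetical DB-backed lookup of user -> controller/action privileges.
                return httpContext.User.Identity.IsAuthenticated
                       && SecurityService.IsAllowed(httpContext.User.Identity.Name, controller, action);
            }
        }

    The same IsAllowed call can then drive the enabled/disabled state of the search-screen buttons, so the privileges live in one place instead of two.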

    Read the article

  • log4net: Creating a logger dynamically, should I worry about anything?

    - by vtortola
    Hi, I need to create loggers dynamically, so with a post from here and the help of Reflector I have managed to do it, but I'd like to know if I should worry about anything else... I don't know what implications this might have.

        public static ILog GetDyamicLogger(Guid applicationId)
        {
            Hierarchy hierarchy = (Hierarchy)LogManager.GetRepository();

            RollingFileAppender roller = new RollingFileAppender();
            roller.LockingModel = new log4net.Appender.FileAppender.MinimalLock();
            roller.AppendToFile = true;
            roller.RollingStyle = RollingFileAppender.RollingMode.Composite;
            roller.MaxSizeRollBackups = 14;
            roller.MaximumFileSize = "15000KB";
            roller.DatePattern = "yyyyMMdd";
            roller.Layout = new log4net.Layout.PatternLayout();
            roller.File = "App_Data\\Logs\\" + applicationId.ToString() + "\\debug.log";
            roller.StaticLogFileName = true;

            PatternLayout patternLayout = new PatternLayout();
            patternLayout.ConversionPattern = "%date [%thread] %-5level %logger [%property{NDC}] - %message%newline";
            patternLayout.ActivateOptions();
            roller.Layout = patternLayout;
            roller.ActivateOptions();

            hierarchy.Root.AddAppender(roller);
            hierarchy.Root.Level = Level.All;
            hierarchy.Configured = true;

            DummyLogger dummyILogger = new DummyLogger(applicationId.ToString());
            dummyILogger.Hierarchy = hierarchy;
            dummyILogger.Level = log4net.Core.Level.All;
            dummyILogger.AddAppender(roller);

            return new LogImpl(dummyILogger);
        }

        internal sealed class DummyLogger : Logger
        {
            // Methods
            internal DummyLogger(string name) : base(name)
            {
            }
        }

    Cheers.

    Read the article

  • Problem with apostrophes and other special characters when using aspell in windows

    - by Loftx
    Hi there, We seem to be having a problem with the spell checker on our content management system where it marks the ve part of We’ve as a misspelling. The spellchecker uses Aspell, which is called from a script on the server that executes cmd.exe and uses it to pipe a file into Aspell (it's a long-winded way, I know, but our server-side programming language (ColdFusion) doesn't support writing to stdin for executables). Aspell is called by executing:

        c:\windows\system32\cmd.exe /c type d:\path_to_file\file.txt | "C:\Program Files\Aspell\bin\aspell" --lang=en -a

    where file.txt contains the text to be spell-checked, e.g. ^Oh have We’ve (the caret is added to prevent piping problems, I believe). Aspell then outputs:

        @(#) International Ispell Version 3.1.20 (but really Aspell 0.50.3)
        * * * & ve 62 12: vie, voe, V, v, veg, vet, Be, Ce, be, Ev, E, e, vex, VA, VI, Va, Vi, vi, we, VD, VF, VG, VJ, VP, VT, Vt, vb, vs, DE, De, Fe, GE, Ge, He, IE, Le, ME, Me, NE, Ne, OE, PE, Re, SE, Se, Te, Xe, he, me, re, ye, Ave, Eve, Ive, ave, eve, VAR, var, veer, vier, view, vow

    However, we have a dev site with the same version of Aspell, and when the same file is used it outputs with no misspellings. Both servers are running Aspell 0.50.3 on Windows Server 2003, but there could be other differences in configuration:

        @(#) International Ispell Version 3.1.20 (but really Aspell 0.50.3)

    I'm wondering if the problem is to do with the piping part of the process or something different in the Aspell configuration. Does anyone have any ideas? Cheers, Tom
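    A hedged guess at the cause: the curly apostrophe in We’ve is a multi-byte character in UTF-8, and if the text reaches Aspell in one encoding while Aspell assumes another (or cmd.exe's type re-encodes it on the way through the pipe), the apostrophe turns into junk and 've' gets checked as a word on its own. That would also explain why an identically versioned server with a different codepage behaves differently. Two things worth trying are Aspell's --encoding switch and feeding the file by redirection instead of type |, which removes one re-encoding step; both lines below are assumptions to verify against your Aspell build and the encoding the CMS actually writes:

        rem utf-8 is an assumption; use iso8859-1 if that is what file.txt really contains
        "C:\Program Files\Aspell\bin\aspell" --lang=en --encoding=utf-8 -a < d:\path_to_file\file.txt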

    Read the article

  • MonoRail: Testing, Route Extensions, Folder Structures

    - by Kezzer
    I've got a few questions related to the use of MonoRail.

    Testing: Does everyone tend to use NUnit for their testing? I haven't worked enough with testing to know if this is a good testing framework to use. I'm just looking to get into testing my applications a lot more than before and wanted to know if there are any general guidelines. Are you supposed to copy the controller over to a test area and just rename it with "test" in the name and re-run it? How do you ensure your test project and main project coincide with one another? Is it just a case of copying everything over again or are there tools available to do it for you?

    Route Extensions: MonoRail tends to use <action>.rails; can you omit the .rails part if you configure your routing correctly? Why does this seem to be the standard?

    Folder Structures: I haven't found anywhere which really points out a standard folder structure. Sure, you have Controllers, Models, and Views. But your Models folder should contain your data access objects as well. I've seen some have something like:

        Models
          -> DaoClasses
          -> Entities

    But what about custom structures used to get data out of views? And if you're using NHibernate, where's a good place to stick the mappings? I know it's entirely dependent on the developer, but I haven't really seen any standard approach. Cheers

    Read the article

< Previous Page | 80 81 82 83 84 85 86 87 88 89 90 91  | Next Page >