Search Results

Search found 20904 results on 837 pages for 'disk performance'.

  • Web-based clients vs thick/rich clients?

    - by rudolfv
    My company is a software solutions provider to a major telecommunications company. The environment is currently IBM WebSphere-based, with front-end IBM Portal servers talking to a cluster of back-end WebSphere Application Servers that provide EJB services. Some of the portlets use our own home-grown MVC pattern and some are written in JSF.

    Recently we did a proof-of-concept rich/thick-client application that communicates directly with the EJBs on the back-end servers. It was built on the NetBeans Platform and uses the WebSphere application client library to establish communication with the EJBs. The really painful bit was getting the client to use secure JAAS/SSL communications, but after that was resolved, we found that the rich client has a number of advantages over the web-based portal client applications we've become accustomed to:

    - Enormous performance advantage (CORBA vs. HTTP, cutting out the Portal Server middleman)
    - Development is simplified and faster thanks to NetBeans' visual designer and Swing's generally robust architecture
    - The debug cycle is shortened by not having to deploy the client application to a test server
    - No mishmash of technologies as with web-based development (Struts, JSF, jQuery, HTML, JSTL, etc.)

    After enduring the pain of web-based development (even JSF) for a while now, I've come to the following conclusion: rich clients aren't right for every situation, but when you're developing an in-house intranet-based solution, you'd be crazy not to consider the NetBeans Platform or Eclipse RCP. Any comments/experiences with rich clients vs. web clients?

  • Chart controls for ASP.NET

    - by tolism7
    I am looking for people's opinions on and experience with chart controls, primarily within an ASP.NET application (Web Forms or MVC) but also in any other kind of project. I am currently doing my research and I have a pretty big list of controls to evaluate. My list includes (in no particular order):

    ASP.NET controls:
    - DevExpress XtraCharts (http://demos.devexpress.com/XtraChartsDemos/)
    - Dundas Chart for .NET (http://www.dundas.com/)
    - Telerik RadChart for ASP.NET AJAX (http://www.telerik.com/)
    - ComponentArt Charting & Visualization for ASP.NET (http://www.componentart.com/)
    - Infragistics WebChart (http://www.infragistics.com/dotnet/netadvantage/aspnet.aspx#Overview)
    - .net Charting (http://www.dotnetcharting.com/)
    - Chart Control for .NET Framework (Microsoft's) (http://weblogs.asp.net/scottgu/archive/2008/11/24/new-asp-net-charting-control-lt-asp-chart-runat-quot-server-quot-gt.aspx)

    Flash controls:
    - FusionCharts v3 (http://www.fusioncharts.com/)
    - XML/SWF Charts (http://www.maani.us/xml%5Fcharts/index.php)
    - amCharts (http://www.amcharts.com/)
    - AnyChart (http://www.anychart.com/home/)

    JavaScript:
    - Flot (http://code.google.com/p/flot/)
    - Flotr (http://solutoire.com/flotr/)
    - jqPlot (http://www.jqplot.com/index.php)

    (If I missed any that are worth comparing against the above, please let me know.)

    What I am looking for is opinions on using any of the above, so I can form my own and help others do the same based on what is written here. I do not care which one is "better"; what I care about is why someone likes one of the above and what distinct advantages these controls offer. I am interested in developers' opinions, and I would like to find out which things are difficult to achieve with each of these controls and which are easy. AJAX compatibility (built into the controls, but also manual), ASP.NET compatibility, input capabilities, data-binding options, performance, and how much code one needs to write in order to create a chart are some of the things I would want to read about. I have already done my research on StackOverflow for relevant questions, but there is nothing at the level of detail I would want in order to make a responsible decision.

  • SQL Server 2008, join or no join?

    - by Patrick
    Just a small question regarding joins. I have a table with around 30 fields, and I was thinking about making a second table to store 10 of those fields, then joining them back to the main data. The 10 fields I am planning to store in the second table are never queried directly; they are just settings for the data in the first table. Something like:

        Table 1: Id, Data1, Data2, Data3, ...
        Table 2: Id (same id as Table 1), Settings1, Settings2, Settings3

    Is this a bad solution? Should I just use one table? How much performance impact does it have? Every entry in Table 1 would then also have an entry in Table 2.

    A small update is in order. Most of the Data fields are of type varchar, and two of them are of type text. How is indexing affected? My plan is to index two data fields: email (varchar(50)) and author (varchar(20)). And yes, all records in Table 1 will have a record in Table 2. Most of the settings fields are of the bit type, around 80%; the rest are a mix of int and varchar. The varchars can be null.
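
    For reference, a minimal SQL sketch of the split being described (table, column, and type choices here are illustrative, not from the original post):

        -- Main data, including the two fields to be indexed
        CREATE TABLE Table1 (
            Id     INT IDENTITY PRIMARY KEY,
            Email  VARCHAR(50),
            Author VARCHAR(20),
            Data1  VARCHAR(255),
            Body   TEXT
        );

        -- One-to-one settings row sharing the same key
        CREATE TABLE Table2 (
            Id        INT PRIMARY KEY REFERENCES Table1 (Id),
            Settings1 BIT,
            Settings2 BIT,
            Settings3 INT
        );

        -- Reassembling a full row is a single join on the shared primary key
        SELECT t1.*, t2.Settings1, t2.Settings2, t2.Settings3
        FROM Table1 t1
        JOIN Table2 t2 ON t2.Id = t1.Id;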

  • OutOfMemoryError: Java heap space when starting Solr

    - by Hamid
    Hi. I started indexing DB articles with Solr, but after adding about 58 million articles (about 113 GB on disk) I get the error message below in the Tomcat log.

    Note 1: I have already set the initial memory pool to 256 MB and the max memory pool to 1400 MB for the Tomcat server.
    Note 2: I can post or search articles, but I must wait over 3 minutes for a response.

        8-apr-2010 14:27:07 org.apache.solr.common.SolrException log
        SEVERE: java.lang.OutOfMemoryError: Java heap space
            at org.apache.lucene.util.PriorityQueue.initialize(PriorityQueue.java:89)
            at org.apache.lucene.search.HitQueue.<init>(HitQueue.java:67)
            at org.apache.lucene.search.TopScoreDocCollector.<init>(TopScoreDocCollector.java:113)
            at org.apache.lucene.search.TopScoreDocCollector.<init>(TopScoreDocCollector.java:37)
            at org.apache.lucene.search.TopScoreDocCollector$InOrderTopScoreDocCollector.<init>(TopScoreDocCollector.java:42)
            at org.apache.lucene.search.TopScoreDocCollector$InOrderTopScoreDocCollector.<init>(TopScoreDocCollector.java:40)
            at org.apache.lucene.search.TopScoreDocCollector.create(TopScoreDocCollector.java:100)
            at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:979)
            at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:884)
            at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:341)
            at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:182)
            at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:195)
            at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
            at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
            at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
            at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
            at org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:859)
            at org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:574)
            at org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1527)
            at java.lang.Thread.run(Unknown Source)

    What's the problem? Any suggestions? Thanks in advance.
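
    For context, the memory-pool settings above correspond to JVM heap flags, which Tomcat picks up from CATALINA_OPTS. A sketch (the values are illustrative, and an -Xmx around 1400 MB is already near the ceiling of a 32-bit JVM):

        # read by Tomcat's startup scripts (catalina.sh / catalina.bat)
        export CATALINA_OPTS="-Xms512m -Xmx2048m -XX:+HeapDumpOnOutOfMemoryError"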

  • How can I solve a classic map-reduce problem with OpenCL?

    - by liuliu
    I am trying to parallelize a classic map-reduce problem (which parallelizes well with MPI) with OpenCL, namely the AMD implementation, but the result bothers me. Let me describe the problem briefly first. There are two types of data that flow into the system: the feature set (30 parameters for each) and the sample set (9000+ dimensions for each). It is a classic map-reduce problem in the sense that I need to calculate the score of every feature on every sample (map), and then sum up the overall score for every feature (reduce). There are around 10k features and 30k samples.

    I tried different ways to solve the problem. First, I tried to decompose the problem by features. The problem is that the score calculation consists of random memory accesses (picking some of the 9000+ dimensions and doing plus/minus calculations). Since I cannot coalesce the memory accesses, this is costly. Then I tried to decompose the problem by samples. The problem there is that, to sum up the overall score, all threads compete for a few score variables. They keep overwriting the score, which turns out to be incorrect. (I cannot compute individual scores first and sum them up later because that would require 10k * 30k * 4 bytes.)

    The first method gives me the same performance as an i7 860 CPU running 8 threads. However, I don't think the problem is unsolvable: it is remarkably similar to ray tracing (where you run calculations of millions of rays against millions of triangles). Any ideas?
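
    To make the contention issue concrete: the standard remedy for the second decomposition is a two-stage reduction, where each work-group accumulates its own partial sum in local memory and writes a single value, so no two groups ever touch the same score variable. A minimal sketch for a single feature's score across the samples (names are illustrative, and the simple add stands in for the real score function):

        __kernel void score_one_feature(__global const float* samples,
                                        __global float* partial,   /* one slot per work-group */
                                        const int n_samples)
        {
            __local float acc[256];               /* sized to the work-group */
            int lid = get_local_id(0);

            /* map: each work-item scores a stripe of the samples */
            float s = 0.0f;
            for (int i = get_global_id(0); i < n_samples; i += get_global_size(0))
                s += samples[i];                  /* stand-in for the real score */

            acc[lid] = s;
            barrier(CLK_LOCAL_MEM_FENCE);

            /* reduce: halve the active work-items each pass */
            for (int stride = get_local_size(0) / 2; stride > 0; stride /= 2) {
                if (lid < stride)
                    acc[lid] += acc[lid + stride];
                barrier(CLK_LOCAL_MEM_FENCE);
            }
            if (lid == 0)
                partial[get_group_id(0)] = acc[0];
        }

    The handful of per-group partials can then be summed on the host (or by a tiny second kernel), which stays far below the 10k * 30k buffer that storing every per-sample score would need.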

  • Force float left with no line break no matter what

    - by Tesserex
    I'm guessing this isn't possible, but here goes. I have two tables, and I'm trying to get them to sit side by side so that they look like one table. The reason for this, instead of using one larger table, is that the data in the second table needs to be handled on a column basis, not a row basis, for performance reasons like caching and AJAX-fetching data. So rather than having to reload the whole table for a single column, I decided to break the column out into a separate table, but have it visually seem like a single table.

    I can't find a way to forcibly put the second table next to the first. I can float them, but when the first table is too wide, the second one breaks to the next line. Here's the kicker: the width of the first table is dynamic, so I can't just set a huge width on their container. Well, I could set a huge width, like 1000%, but then I have a huge, ugly horizontal scroll bar.

    So is there any way to tell the second table, "Stay on that same line, no matter what! And line up right next to the previous element, please!"?
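
    One direction worth sketching (selector names are made up for illustration): inline-level tables inside a wrapper that forbids wrapping behave like words in an unbreakable line, regardless of how wide the first table grows:

        /* the wrapper never wraps its inline children onto a new line */
        #pair { white-space: nowrap; }

        /* inline-table keeps each table inline-level while still a table */
        #pair table { display: inline-table; vertical-align: top; }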

  • What components and IDE add-ins do you install with Delphi?

    - by Mick
    After a clean install of Delphi, what components and IDE add-ins do you make certain that you install? What's your Delphi "rig"? Here's what I install after a clean installation of Delphi 2007:

    - JCL / JVCL - JEDI Code Library and JEDI Visual Code Library (600+ components)
    - JWA / JWSCL - JEDI API Library & Security Code Library
    - GExperts - a free set of tools built to increase the productivity of Delphi and C++Builder programmers by adding several features to the IDE
    - TWM's experimental GExperts code formatter - adds code formatting capabilities to Delphi
    - Virtual TreeView - a treeview control built from the ground up; more than 5 years of development have made it one of the most flexible and advanced tree controls available today
    - MustangPeak Components (EasyListview, Virtual ShellTools, etc.) - EasyListview is a control that has no dependence on the Microsoft listview control but has all the features of the latest version from Microsoft; also includes 'Explorer.exe'-like shell components
    - Synapse lightweight networking components - simple low-level non-visual objects for easy programming without problems (no multi-threaded synchronization required, no need for Windows message processing, ...); great for command-line utilities, visual projects, and NT services
    - EurekaLog - a complete bug-resolution tool for Delphi and C++Builder developers that gives your application the power to catch every exception and memory leak, directly on the end user's PC, generating a detailed log of the call stack (with file, class, method, and line number), optionally sending you a copy of each log entry via email or to a web bug tracker
    - DelphiSpeedUp - an IDE plugin for Delphi and C++Builder that improves the IDE's startup speed and increases the general speed of the whole IDE
    - DDevExtensions - extends the Delphi/C++Builder IDE by adding some new productivity features
    - IDE Fix Pack - a DLL expert that fixes a number of RAD Studio 2007 bugs at runtime; all changes are done in memory, and no file on disk is modified
    - TPerlRegex - a regular-expression library for Delphi

    How about other Delphi developers?

  • Tips on creating user interfaces and optimizing the user experience

    - by Saif Bechan
    I am currently working on a project where a lot of user interaction is going to take place. There is also a commercial side, as people can buy certain items and services. In my opinion, a good blend of user interface, speed, and security is essential for these types of websites. It is fairly easy to use AJAX and JavaScript nowadays to do almost everything, as there are a lot of libraries available, such as jQuery and others. But this can bring performance and incompatibility issues, which can lead to users simply going to the next website.

    The overall look of the website is important too: where to place certain buttons; where to place certain types of articles such as FAQ and support; where and how to display error messages so that the user sees them without being bothered by them. An overall color scheme matters as well.

    The basic question is: how do you create an interface that triggers a user to buy/use your services?

    I know psychology also plays a huge role in how users interact with your website. The color scheme, for example, is important: when the colors on a website are irritating, you just want to click away. I have not found any articles that explain these concepts. Does anyone have any tips and/or resources with articles that guide you in making the correct choices for your website?

  • Why use short-circuit code?

    - by Tim Lytle
    Related questions:
    - Benefits of using short-circuit evaluation
    - Why would a language NOT use short-circuit evaluation?
    - Can someone explain this line of code please? (Logic & Assignment operators)

    There are questions about the benefits of a language using short-circuit code, but I'm wondering what the benefits are for a programmer. Is it just that it can make code a little more concise? Or are there performance reasons?

    I'm not asking about situations where two entities need to be evaluated anyway, for example:

        if ($user->auth() AND $model->valid()) {
            $model->save();
        }

    To me the reasoning there is clear: since both need to be true, you can skip the more costly model validation if the user can't save the data. This also has a (to me) obvious purpose:

        if (is_string($userid) AND strlen($userid) > 10) {
            // do something
        }

    because it wouldn't be wise to call strlen() with a non-string value.

    What I'm wondering about is the use of short-circuit code when it doesn't affect any other statements. For example, from the Zend Application default index page:

        defined('APPLICATION_PATH')
            || define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));

    This could have been:

        if (!defined('APPLICATION_PATH')) {
            define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));
        }

    Or even as a single statement:

        if (!defined('APPLICATION_PATH'))
            define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));

    So why use the short-circuit code? Just for the 'coolness' factor of using logic operators in place of control structures? To consolidate nested if statements? Because it's faster?

  • Core Data migration of a to-one relationship to a to-many relationship

    - by westsider
    I have a deployed app that samples measurements from sensors (e.g., Temp °C, Pressure kPa). The user can create Experiments and collect samples. Each sample is stored as a Run, such that there is a one-to-many relationship from Experiment to Run. In the interest of performance, Run has a to-one relationship with the Data entity (which is where the actual raw data is stored); this allows some Run attributes to be loaded without necessarily loading lots of data.

    Most of our sensors have multiple measurements, so it would be nice to store all the data that is actually being sampled. But this means that the to-one relationship from Run to Data needs to become a to-many relationship. I am faced with trying to migrate data from the old to-one model to the new to-many model. Can this be done using mapping models? If so, does anyone have any pointers to examples? If not, does anyone have any pointers to examples of how else to do it? Thanks for any pointers or advice.

  • Versioning-friendly, extensible binary file format

    - by Bas Bossink
    In the project I'm currently working on, there is a need to save a sizeable data structure to disk. Being an optimist, I thought there must be a standard solution for such a problem; however, up to now I haven't found a solution that satisfies the following requirements:

    - .NET 2.0 support, preferably with a FOSS implementation
    - version-friendly (this should be interpreted as: reading an old version of the format should be relatively simple if the changes to the underlying data structure are simple, say adding/dropping fields)
    - the ability to do some form of random access, where part of the data can be extended after initial creation (think of this as extending intermediate results)
    - space- and time-efficient (XML has been excluded as an option given this requirement)

    Options considered so far:

    - Protocol Buffers: turned down by the verdict of the documentation about Large Data Sets, since that comment suggests adding another layer on top; this would call for additional complexity that I wish to have handled by the file format itself
    - HDF5, EXI: do not seem to have .NET implementations
    - SQLite: the data structure at hand would result in a pretty complex table structure that seems too heavyweight for the intended use
    - BSON: does not appear to support requirement 3
    - Fast Infoset: only seems to have commercial .NET implementations

    Any recommendations or pointers are greatly appreciated. Furthermore, if you believe any of the information above is not true, please provide pointers/examples to prove me wrong.

  • Extracting files from a merge module

    - by Mystagogue
    All I want is a command-line tool that can extract files from a merge module (.msm) onto disk. I'm trying msidb.exe and orca.exe. The documentation for Orca states:

        Many merge module options can be specified from the command line...

        Extracting Files from a Merge Module

        Orca supports three different methods for extracting files contained in a merge module. Orca can extract the individual CAB file, extract the files into a module tree and extract the files into a source image once it has been merged into a target database...

        Extracting Files

        To extract the individual files from a merge module, use the ... -x <path> ... option on the command line, where <path> is the desired path to the new directory tree. The specified path is used as the root path for the extracted files. All files are extracted from the CAB file embedded in the module and placed in the specified path. The directory layout for the extracted files is based on the directory tree of the merge module.

    It mostly sounds like exactly what I need. But when I try it, Orca simply opens up an editor (with info on the .msm I specified) and then does nothing. I've tried a variety of command lines, usually starting with this:

        orca -x theDirectory theModule.msm

    I use "theDirectory" as whatever empty folder I want. Like I said, it didn't do anything. Then I tried msidb, where a couple of my attempts looked like this:

        msidb -d theModule.msm -w {storage}
        msidb -d theModule.msm -x {stream}

    In both cases, I don't know what to insert for {storage} or {stream} to make it happy; I don't know what those represent. Can someone explain what I'm doing wrong with the command-line options? Is there any other tool that can do this?

  • Ruby on Rails export to CSV - maintain MySQL select statement order

    - by zekial
    Exporting some data from MySQL to a CSV file using FasterCSV. I'd like the columns in the output CSV to be in the same order as the select statement in my query. Example:

        rows = Data.find(:all, :select => 'name, age, height, weight')
        headers = rows[0].attributes.keys
        FasterCSV.generate do |csv|
          csv << headers
          rows.each do |r|
            csv << r.attributes.values
          end
        end

    CSV output:

        height,weight,name,age
        74,212,bob,23
        70,201,fred,24
        ...

    I want the CSV columns in the same order as my select statement. Obviously the attributes method is not going to work. Any ideas on the best way to ensure that the columns in my CSV file will be in the same order as the select statement? I've got a lot of data, and performance is an issue. The select statement is not static. I realize I could loop through the column names within the rows.each loop, but it seems kinda dirty.
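
    For what it's worth, a minimal sketch (assuming Rails-era ActiveRecord, as in the post) keeps the column order in a single array and uses it for both the select and the row values, which avoids relying on the unordered attributes hash:

        columns = %w[name age height weight]             # single source of truth
        rows = Data.find(:all, :select => columns.join(', '))

        FasterCSV.generate do |csv|
          csv << columns
          rows.each do |r|
            csv << columns.map { |c| r[c] }              # AR's [] reads one attribute
          end
        end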

  • WCF Service Throttling

    - by Mubashar Ahmad
    Dear all,

    I have a WCF service deployed in a console app with BasicHttpBinding, with SSL enabled on the port using the NetSH command, and with the following attribute set as well:

        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]

    I have also set the throttling behavior:

        <serviceThrottling maxConcurrentCalls="2147483647"
                           maxConcurrentSessions="2147483647"
                           maxConcurrentInstances="2147483647" />

    On the other hand, I have created a test client (for load testing) that initiates multiple clients simultaneously (multiple threads) and performs transactions on the server. Everything seems fine and working properly, but on the server the CPU utilization doesn't grow, so I added some logging to view the number of concurrent calls to the server and found that it never went over 6. I have reviewed the performance-counter logging code more than twice, and it seems fine to me. So I want to ask where the problem is in this situation. One more thing: I haven't specified any kind of ContextMode or ConcurrencyMode yet.

    After this post I noticed that whenever I start another instance of the test client, my concurrent server calls counter increases by 2: if I am running only one instance, the maximum concurrent received calls will be 2, and if there are two instances, the same value goes to 4, and so on. Is there a limit on the number of WCF calls from one process?

    Looking for help,
    Mubashar

    Added on 17 March: Today I ran another test with one test client (with 50 concurrent users) on the same machine on which the server is running. This time I got exactly the result I wanted: maximum concurrent calls received by the server = 50. But I need it to do the same on other machines as well. Can anybody help me with this?
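
    A detail worth noting here (an observation, not from the original post): for HTTP-based bindings, the .NET client side itself caps concurrent connections to a given host at 2 per process by default, which matches the per-instance counts described above. The cap is raised in the client process, e.g.:

        // a sketch: must run in the *client* process before the proxies open connections
        System.Net.ServicePointManager.DefaultConnectionLimit = 100;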

  • Manual drag-drop operations in Flex

    - by Yarin
    This is a two-part problem.

    A) I'm implementing several irregular drag-drop operations in Flex (e.g., a DataGrid ItemRenderer into a Tree). My preference was modifying DragManager operations to meet my needs, and in fact using DragManager allows me to do everything I need, but I'm having serious issues with performance. For example, dragging anything over a many-columned DataGrid, whether the drag was initiated with DragManager.doDrag or just using native ListBase drag-drop functionality, slows the drag movement to a crawl. This happens even if the DataGrid is disabled and not listening for any move/drag events. On the other hand, if the drag is initiated by calling .startDrag() on the Sprite, the drag is smooth and performs great over DataGrids and everything else. So part A would be: is there a reason why .startDrag() operations work so well, while drags initiated through DragManager.doDrag suffer so badly when over certain components?

    B) If indeed the solution is to handle drag-drops using .startDrag(), how would I go about determining what component the mouse is over when the drag is released? In my example, my dragged object is brought up to the top level of the display list, and so is being moved around in stage coordinates. mouseMove and mouseOver events don't fire on the components I'm dragging over because the mouse is constantly over the dragged component, so I would need some sort of stage-coordinate to visible-component-at-that-coordinate conversion. Any thoughts on this?

    Thanks a lot! -- Yarin
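
    For part B, one hit-testing approach worth sketching in ActionScript (dragProxy is an illustrative name for the sprite being dragged; assumes imports for flash.geom.Point, flash.display.DisplayObject, and mx.core.UIComponent):

        // everything rendered under the stage-coordinate point, topmost last
        var pt:Point = new Point(stage.mouseX, stage.mouseY);
        var under:Array = stage.getObjectsUnderPoint(pt);

        // skip the drag proxy itself, then walk up to the owning component
        var hit:DisplayObject = null;
        for (var i:int = under.length - 1; i >= 0; i--) {
            if (!dragProxy.contains(under[i])) { hit = under[i]; break; }
        }
        while (hit && !(hit is UIComponent))
            hit = hit.parent;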

  • Can't connect to local MySQL server through socket

    - by Martin
    I was trying to tune the performance of a running MySQL server by running this command:

        mysqld_safe --key_buffer_size=64M --table_cache=256 --sort_buffer_size=4M --read_buffer_size=1M &

    After this I'm unable to connect to MySQL from the server where MySQL is running. I get this error:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)

    Luckily, however, I can still connect to MySQL remotely, so all my webservers still have access and are running without any problems. Because of this, though, I don't want to try restarting the MySQL server, since that will probably mess everything up. Now, I know that mysqld_safe starts the MySQL server, and since the MySQL server was already running, I guess it's some kind of problem with two MySQL servers running and listening on the same port. Is there some way to solve this problem without restarting the initial MySQL server?

    UPDATE: This is what ps xa | grep "mysql" says:

        11672 ?        S      0:00 /bin/sh /usr/bin/mysqld_safe
        11780 ?        Sl   175:04 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
        11781 ?        S      0:00 logger -t mysqld -p daemon.error
        12432 pts/0    R+     0:00 grep mysql
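
    Incidentally, since remote connections still work, the local client can be pointed at TCP instead of the unix socket while investigating (a sketch; credentials are illustrative):

        # -h with an IP address forces a TCP connection rather than the socket file
        mysql -h 127.0.0.1 -P 3306 -u root -p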

  • MS Word automation: get file contents after it is saved

    - by BlackTigerX
    I have an application that uses MSWord automation to edit some documents, after they save and close word I need to grab the modified file and put it back on the repository, there is only one scenario where I can't get it to work and that is when the user makes changes to the file, selects to close word and selects yes to save the file there are 2 events that I'm using: DocumentBeforeSave Quit on the Quit event I'm trying to load the .docx file from disk but on this particular scenario I get an IOException because the file is still in use, somehow I need to wait until after the Quit event has been processed, which is when Word is actually closed and the file is no longer being used right now I have it working using this word.Visible = true; while (!wordDone) { //gets changed to true on the Quit event System.Threading.Thread.Sleep(100); } bool error = false; do { try { //need to load the contents of the modified file ls.Content = System.IO.File.ReadAllBytes(provider.GetFileName()); error = false; } catch (System.IO.IOException) { error = true; System.Threading.Thread.Sleep(200); } } while (error); while this works it is very ugly, I need a way to fire an event after the Quit event has been handled, or block the current thread while word is still running, or get an event after the document has been saved, the bottom line is I need a clean way to load the file after it has been saved and word is closed. DocumentAfterSave would be awesome, but doesn't seem to exist. I Also tried unhooking the Quit handler and calling word.Quit on the Quit handler, that made no difference

  • Understanding WordProcessingML tags and avoiding unnecessary tags

    - by rithanyalaxmi
    Hi, I am using the MS Word API to generate .docx files that contain data fetched from a DB, applying the respective styles, fonts, symbols, etc. If the data fetched from the DB is quite large, there is a problem displaying it in the .docx file. I found that MS Word 2007 internally writes some content through tags that may not be needed to display the data. Hence I am figuring out which MS Word tags are actually necessary when converting to a .xml file, so that I can avoid the unnecessary tags and build only those needed to display the data. I am therefore planning to write my own .xml with just the MS Word tags that are needed, rather than generating a .xml from the .docx file.

    My queries are:

    1) Is it right that MS Word generates some tags that may not be needed during the conversion of .docx to document.xml, and that this makes it heavy? If so, what are those tags, so that I can avoid them when writing my own .xml file?

    2) Please send links for understanding MS Word tags and their purposes, and which tags are needed and which are not.

    3) Is my approach of writing a new .xml similar to document.xml (from the .docx conversion) a worthy one to go forward with, so that I can build the .xml with only the tags I need and improve the performance of the data display?

    Please shed some light on this, and thanks in advance.

    Thanks, Rithu

  • SWI-Prolog as an embedded application in VS2008 C++

    - by H.J. Miri
    I have a C++ application (prog.sln) which is my main program, and I want to use Prolog as an embedded logic server (hanoi.pl):

        int main(int argc, char** argv)
        {
            putenv("SWI_HOME_DIR=C:\\Program Files\\pl");
            argv[0] = "libpl.dll";
            argv[1] = "-x";
            argv[2] = "hanoi.pl";
            argv[3] = NULL;
            PL_initialise(argc, argv);
            {
                fid_t fid = PL_open_foreign_frame();
                predicate_t pred = PL_predicate("hanoi", 1, "user");
                int rval = PL_call_predicate(NULL, PL_Q_NORMAL, pred, 3);
                qid_t qid = PL_open_query(NULL, PL_Q_NORMAL, pred, 3);
                while (PL_next_solution(qid))
                    return qid;
                PL_close_query(qid);
                PL_close_foreign_frame(fid);
                system("PAUSE");
            }
            return 1;
            system("PAUSE");
        }

    Here is my Prolog source code:

        :- use_module(library(shlib)).

        hanoi(N) :-
            move(N, left, center, right).

        move(0, _, _, _) :- !.
        move(N, A, B, C) :-
            M is N - 1,
            move(M, A, C, B),
            inform(A, B),
            move(M, C, B, A).

        inform(X, Y) :-
            write('move a disk from '),
            write(X),
            write(' to '),
            write(Y),
            nl.

    At the command line I type:

        plld -o hanoi prog.sln hanoi.pl

    But it doesn't compile. When I run my C++ app, it says:

        Undefined procedure: hanoi/1

    What is missing/wrong in my code or at the prompt that prevents my C++ app from consulting Prolog? I appreciate any help/pointers.
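
    For comparison, the SWI-Prolog C API expects the last argument of PL_call_predicate to be a term reference holding the goal's arguments, not a plain integer. A minimal sketch of the usual calling sequence for hanoi/1 (the argument value 3 is illustrative):

        /* one term ref for the single argument of hanoi/1 */
        term_t arg = PL_new_term_refs(1);
        PL_put_integer(arg, 3);                 /* builds the goal hanoi(3) */

        predicate_t pred = PL_predicate("hanoi", 1, "user");
        int ok = PL_call_predicate(NULL, PL_Q_NORMAL, pred, arg);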

  • In Python epoll, can I avoid errno.EWOULDBLOCK / errno.EAGAIN?

    - by davyzhang
    I wrote an epoll wrapper in Python. It works fine, but recently I found that the performance is not ideal for sending large packets. I looked into the code and found that there are actually a LOT of these errors:

        Traceback (most recent call last):
          File "/Users/dawn/Documents/workspace/work/dev/server/sandbox/single_point/tcp_epoll.py", line 231, in send_now
            num_bytes = self.sock.send(self.response)
        error: [Errno 35] Resource temporarily unavailable

    Previously I silenced it, as the documentation said to, so my sending function was done this way:

        def send_now(self):
            '''send message at once'''
            st = time.time()
            times = 0
            while self.response != '':
                try:
                    num_bytes = self.sock.send(self.response)
                    l.info('msg wrote %s %d : %r size %r', self.ip, self.port, self.response[:num_bytes], num_bytes)
                    self.response = self.response[num_bytes:]
                except socket.error, e:
                    if e[0] in (errno.EWOULDBLOCK, errno.EAGAIN):
                        # here I printed it, but I silence it in normal days
                        # print 'would block, again %r', tb.format_exc()
                        break
                    else:
                        l.warning('%r %r socket error %r', self.ip, self.port, tb.format_exc())
                        # must break or we cause a dead loop
                        break
                except:
                    # other exceptions
                    l.warning('%r %r msg write error %r', self.ip, self.port, tb.format_exc())
                    break
                times += 1
            et = time.time()

    I googled it, and it says this is caused by the network buffer temporarily running out. So how can I detect this condition manually and efficiently, instead of letting it reach the exception phase? Raising and handling the exception costs too much time.
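
    One standard pattern (a sketch in Python 2 to match the post's syntax; ep and sock are illustrative names) is to let epoll itself report writability, registering interest in EPOLLOUT only while data is pending, so send() is attempted only when the kernel buffer has room:

        import select

        ep = select.epoll()

        # data became pending: ask epoll to report writability too
        ep.modify(sock.fileno(), select.EPOLLIN | select.EPOLLOUT)

        # in the event loop:
        for fd, events in ep.poll():
            if events & select.EPOLLOUT:
                n = sock.send(pending)          # unlikely to hit EAGAIN now
                pending = pending[n:]
                if not pending:                 # nothing left: stop asking
                    ep.modify(fd, select.EPOLLIN)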

  • Retrieving varbinary output from a SQL Server query into classic ASP

    - by user303526
    Hi, I'm trying to retrieve a varbinary output value from a query running on SQL Server 2005 into classic ASP. The ASP execution just fails when it reaches the part of the code that simply reads the varbinary output into a string, so I guess it has to be handled some other way.

    Actually, I'm trying to set (sp_setapprole) and unset (sp_unsetapprole) application roles for a database connection. First I set the app role, then I run my required queries, and finally I unset the app role. During unsetting is when I need the cookie (varbinary) value in my ASP code, so that I can create a query like 'exec sp_unsetapprole @cookie'. But at this stage, I don't have the cookie (varbinary) value.

    The reason I'm doing this is that I used to get 'sp_setapprole was not invoked correctly' errors when trying to set app roles. I've disabled pooling by appending 'OLE DB Services = -2;Pooling=False' to my connection string. I know pooling helps performance-wise, but here I'm facing big problems.

    Please help me retrieve a varbinary value into a classic ASP file, or suggest a way to set and unset app roles. Solutions either way are appreciated.

    Thanks, Nandagopal

  • How to best handle exceptions to repeating calendar events

    - by blcArmadillo
    I'm working on a project that will require me to implement a calendar. I'm trying to come up with a system that is very flexible: one that can handle repeating events, exceptions to repeats, etc. I've looked at the schemas of applications like iCal, Lotus Notes, and Mozilla to get an idea of how to go about implementing such a system. Currently I'm having trouble deciding the best way to handle exceptions to repeating events. I've used databases quite a bit but don't have a ton of experience with optimization, so I'm not sure which of the two methods I'm considering would be optimal in terms of overall performance and the ability to query/search:

    1. Breaking the repeating event: changing the end date on the current row for the repeating event, inserting a new row with the exception, and adding another row continuing the old sequence.

    2. Simply adding an exception: adding a new row with some field that marks it as an override.

    So here is why I can't decide. Method one will result in a lot more rows, since each edit requires two extra rows, as opposed to only one extra row with the second method. On the other hand, I think the query to find an event would be much simpler, and thus possibly faster(?), with the first method. The second method seems like it will require more computation on the application server, since once you get the data you have to remove the intersection of the two rows.

    I know databases are often the bottleneck for websites, and while I'm sure a lot of you are thinking either is fine because the project will probably never get large enough for the difference in efficiency to really matter, I'd still like to implement the best solution. So which method would you pick, or would you do something completely different? Also, as a side note, I'll be using MySQL and PHP. If there is another technology that you think would be better suited for this, especially in the database area, please mention it. Thanks for the advice.
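
    To make the second option concrete, a minimal MySQL sketch of an override table (all names and columns are illustrative, not from the original post):

        -- one row per exception, keyed to the series and the occurrence it replaces
        CREATE TABLE event_override (
            event_id       INT      NOT NULL,   -- the repeating event being overridden
            original_start DATETIME NOT NULL,   -- which occurrence this replaces
            new_start      DATETIME NULL,       -- NULL with new_end = occurrence cancelled
            new_end        DATETIME NULL,
            PRIMARY KEY (event_id, original_start)
        ) ENGINE=InnoDB;

    Expanding a series then means generating occurrences from the recurrence rule and substituting (or dropping) any that match a row here.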

  • How can I combine several Expressions into a fast method?

    - by chillitom
    Suppose I have the following expressions:

        Expression<Action<T, StringBuilder>> expr1 = (t, sb) => sb.Append(t.Name);
        Expression<Action<T, StringBuilder>> expr2 = (t, sb) => sb.Append(", ");
        Expression<Action<T, StringBuilder>> expr3 = (t, sb) => sb.Append(t.Description);

    I'd like to be able to compile these into a method/delegate equivalent to the following:

        void Method(T t, StringBuilder sb)
        {
            sb.Append(t.Name);
            sb.Append(", ");
            sb.Append(t.Description);
        }

    What is the best way to approach this? I'd like it to perform well, ideally with performance equivalent to the above method.

    UPDATE: So, since it appears that there is no way to do this directly in C# 3, is there a way to convert an expression to IL so that I can use it with System.Reflection.Emit?
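
    For reference, a minimal sketch of the delegate-combining fallback, which sidesteps expression trees entirely rather than taking the Emit route the update asks about (assumes the same generic context as the expressions above):

        // compile each tree once, up front
        Action<T, StringBuilder>[] steps =
        {
            expr1.Compile(), expr2.Compile(), expr3.Compile()
        };

        // a single delegate that runs them in order; the per-call cost is
        // one array walk plus three delegate invocations
        Action<T, StringBuilder> method = (t, sb) =>
        {
            foreach (var step in steps)
                step(t, sb);
        };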

  • Building Web Application project using MSBuild from command line on 64-bit: missing targets file

    - by James Allen
    Building a solution containing a web application project using MSBuild from PowerShell like this:

        msbuild "/p:OutDir=$build_dir\" $solution_file

    works fine for me on 32-bit, but on a 64-bit machine I am running into this error:

        error MSB4019: The imported project "C:\Program Files\MSBuild\Microsoft\VisualStudio\v9.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.

    I am using Visual Studio 2008 and PowerShell v2. The problem has already been documented here and here. Basically, on a 64-bit install of VS, the Microsoft.WebApplication.targets file needed by MSBuild is in the Program Files (x86) dir, not the Program Files dir, but MSBuild doesn't recognise this and so looks in the wrong place. The two known solutions are not ideal:

    1. Manually copy the file on 64-bit from Program Files (x86) to Program Files. This is a poor solution: every dev will have to do this manually.

    2. Manually edit the csproj file so MSBuild looks in the right place. Again not ideal: I would rather not have to get everyone on 64-bit to manually edit csproj files on every new project. e.g.

        <Import Project="$(MSBuildExtensionsPathx86)\$(WebAppTargetsSuffix)" Condition="Exists('$(MSBuildExtensionsPathx86)\$(WebAppTargetsSuffix)')" />

    Ideally I want a way to tell MSBuild to import the targets file from the right place from the command line, but I can't work out how to do that.
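
    One command-line direction to sketch (untested here, so treat it as an assumption): the default csproj import is rooted at the $(MSBuildExtensionsPath) property, and MSBuild properties can generally be overridden with /p:, so pointing that property at the 32-bit folder may avoid touching any project files:

        msbuild "/p:MSBuildExtensionsPath=C:\Program Files (x86)\MSBuild" "/p:OutDir=$build_dir\" $solution_file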

  • Do I really need an ORM?

    - by alchemical
    We're about to begin development on a mid-size ASP.NET MVC 2 web site. For a typical page, we grab data and throw it up on the web page; i.e., there is not much pre-processing of the data before it is sent to the UI. We're now deciding whether or not to use an ORM, and if yes, which one. We had been looking at EF2, a.k.a. EF4 (the ASP.NET Entity Framework in VS 2010), as one possibility.

    However, I'm thinking a simple solution in this case may be to just use DataTables. The reasoning is that we don't plan to move the data around or process it a lot once we fetch it, so I'm not sure there is much value in having strongly typed objects as DTOs. Also, this way we avoid mapping altogether, thereby (I think) simplifying the code and allowing for faster development. I should mention that budget is an issue on this project, as is speed of execution. We are striving for simplicity anywhere we can, both to keep the budget smaller and the schedule shorter, and to keep performance fast. We haven't fully decided yet, but are currently leaning towards no ORM. Will we be OK with the no-ORM approach, or is an ORM worth it?
