Search Results

Search found 848 results on 34 pages for 'robust'.


  • Crashing the OS X Pasteboard

    - by Ben Packard
    I have an application that reads in text by emulating CMD-C copy commands and reading the pasteboard - unfortunately this is the only way to achieve what I need. Occasionally, something goes wrong in execution (not sure yet if it's related to the copy command or not) and the app crashes. Once in a while, this has a knock-on effect on the system-wide pasteboard - any other application that is running will crash if I attempt a copy, cut, or paste. Is there a robust way to handle this - something I should be doing with the NSPasteboard before exiting? Any information on what might be happening is appreciated. For completeness, here are the only snippets of code that access the pasteboard.
    Reading from the pasteboard:
        NSString *pBoardText = [[NSPasteboard generalPasteboard] stringForType:NSStringPboardType];
    Initially clearing the pasteboard (I run this only once, at launch):
        [[NSPasteboard generalPasteboard] declareTypes:[NSArray arrayWithObject:NSStringPboardType] owner:self];
        [[NSPasteboard generalPasteboard] setString:@"" forType:NSStringPboardType];


  • Does MSDeploy support website and database upgrades?

    - by Samuel Jack
    I've just been reading about MSDeploy, the new website deployment tool from Microsoft. I'm developing an installer for a web application and a web service to be used for our off-the-shelf product, and I have a couple of questions that I couldn't find obvious answers to. Does MSDeploy have robust support for upgrading websites after the initial deployment? I can see MSDeploy has good support for the initial deployment of databases, but does it have support for upgrading schemas whilst preserving the current data? Links addressing these specific questions would be good.


  • Identifying NHibernate proxy classes

    - by Marc Gravell
    I'm not an NHibernate user; I write a serialization utility library. A user has logged a feature request that I should handle NHibernate proxy classes, treating them the same as the actual type. At the moment my code treats them as unexpected inheritance and throws an exception. The code won't know about NHibernate in advance (including no reference, but I'm not afraid of reflection ;-p). Is there a robust / guaranteed way of detecting such proxy types? Apparently DataContractSerializer handles this fine, so I'm hoping it is something pretty simple - perhaps some interface or [attribute] decoration. Also, during deserialization, at the moment I would be creating the original type (not the NHibernate type). Is this fine for persistence purposes? Or is the proxy type required? If the latter, what is required to create an instance of the proxy type?


  • Delphi Compiler Directive to Evaluate Arguments in Reverse

    - by Peter Turner
    I was really impressed with this Delphi two-liner using the IfThen function from Math.pas. However, it evaluates DB.ReturnFieldI first, which is unfortunate because I need to call DB.First to get the first record.
        DB.RunQuery('select awesomedata1 from awesometable where awesometableid = "great"');
        result := IfThen(DB.First = 0, DB.ReturnFieldI('awesomedata1'));
    Obviously this isn't such a big deal, as I could make it work in five more robust lines. But all I need for this to work is for Delphi to evaluate DB.First first and DB.ReturnFieldI second. I don't want to change Math.pas, and I don't think this warrants making an overloaded IfThen because there are already about 16 IfThen functions. Just let me know what the compiler directive is, if there is an even better way to do this, or if there is no way to do this and anyone whose procedure is to call DB.First and blindly retrieve the first thing he finds is not a real programmer.


  • Reading data from a socket, considerations for robustness and security

    - by w.brian
    I am writing a socket server that will implement small portions of the HTTP and WebSocket protocols, and I'm wondering what I need to take into consideration in order to make it robust/secure. This is my first time writing a socket-based application, so please excuse me if any of my questions are particularly naive. Here goes:
    Is it wrong to assume that you've received an entire HTTP request (WebSocket request, etc.) if you've read all data available from the socket? Likewise, is it wrong to assume you've only received one request?
    Is TCP responsible for making sure I'm getting the "message" all at once as sent by the client? Or do I have to manually detect the beginning and end of each "message" for whatever protocol I'm implementing?
    Regarding security: what, in general, should I be aware of? Are there any common pitfalls when implementing something like this?
    As always, any feedback is greatly appreciated.
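    On the framing point: TCP delivers a byte stream, not messages, so a single read can return part of a request or bytes from two requests, and the application has to find the boundaries itself. Below is a minimal, hedged sketch in Java (blocking I/O, hypothetical class name) that keeps reading until it sees the blank CRLF line ending an HTTP header block; a real server would also honour Content-Length for bodies and cap the buffer size so a malicious client cannot exhaust memory.

        import java.io.IOException;
        import java.io.InputStream;
        import java.net.ServerSocket;
        import java.net.Socket;

        public class RequestReader {
            // Reads from the socket until the blank line that terminates the HTTP
            // header block. A single read may deliver a partial request or bytes
            // from the next pipelined request, so the loop frames the message itself.
            static String readHeaderBlock(InputStream in) throws IOException {
                StringBuilder buf = new StringBuilder();
                int b;
                while ((b = in.read()) != -1) {
                    buf.append((char) b);
                    if (buf.length() >= 4 && buf.substring(buf.length() - 4).equals("\r\n\r\n")) {
                        return buf.toString();   // full header block received
                    }
                }
                throw new IOException("connection closed before the headers were complete");
            }

            public static void main(String[] args) throws IOException {
                try (ServerSocket server = new ServerSocket(8080);
                     Socket client = server.accept();
                     InputStream in = client.getInputStream()) {
                    // Any body that follows must then be read according to the
                    // Content-Length or Transfer-Encoding headers parsed from this block.
                    System.out.println(readHeaderBlock(in));
                }
            }
        }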


  • Cross-domain REST proxy with Javascript, HTML5

    - by Bosh
    I'm writing a service (say, service.com) that provides a REST API to external apps running inside IFrames. (These apps are hosted on domains outside service.com.) I'm planning a JavaScript client library for the apps to make pure-JavaScript requests to the service.com REST API - basically using postMessage and some ad-hoc encapsulation of my API calls to get messages back and forth across frames (from the outside-app.com IFrame to the service.com REST API, and back to the IFrame with a response). My question: is there any robust, general-purpose JavaScript library to accomplish the kind of cross-domain REST request proxying I need, or should I just hack it from scratch?


  • Best way to store sales tax information

    - by Seph
    When designing a stock management database system (sales/purchases), what would be the best way to store the various taxes and other such amounts? A few of the fields that could be saved are:
    Unit price excluding tax
    Unit price including tax
    Tax per item
    Total excluding tax (rounded to 2 decimals)
    Total including tax (rounded to 2 decimals)
    Total tax (rounded to 2 decimals)
    Currently the most reasonable solution so far is to store (roughly) the item, quantity, total excluding tax (rounded), and total tax (rounded). Can anyone suggest a better way of storing these details for a generic system? Also, given the system needs to be robust, what should be done if there were multiple tax values (e.g. state and city) which might need to be separated? In this case a separate table would be in order, but would it be considered excessive to just have a rowID and some taxID mapping to a totalTax column?
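    By way of illustration only, a minimal Java sketch (all figures and names made up) of deriving the rounded line totals from the unit price excluding tax and a tax rate; storing the quantity, unit price ex-tax, the rate, and the two rounded totals lets the remaining fields be re-derived, while the stored rounded figures remain the invoiced amounts of record.

        import java.math.BigDecimal;
        import java.math.RoundingMode;

        public class LineTotals {
            public static void main(String[] args) {
                // Hypothetical line: 3 units at 9.99 excluding tax, 7.5% tax rate.
                BigDecimal unitExTax = new BigDecimal("9.99");
                BigDecimal quantity  = new BigDecimal("3");
                BigDecimal taxRate   = new BigDecimal("0.075");

                BigDecimal totalExTax = unitExTax.multiply(quantity)
                                                 .setScale(2, RoundingMode.HALF_UP);
                BigDecimal totalTax   = totalExTax.multiply(taxRate)
                                                  .setScale(2, RoundingMode.HALF_UP);
                BigDecimal totalIncTax = totalExTax.add(totalTax);

                // Prints: 29.97 + 2.25 = 32.22
                System.out.println(totalExTax + " + " + totalTax + " = " + totalIncTax);
            }
        }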


  • Book recommendation for learning server management and Apache

    - by japancheese
    Hello, I'm currently managing a site that I feel could be optimized to run much faster, but I'm having difficulty finding reliable information on how to do it. I find the Apache documentation a hard read, and too technical about things I don't have a strong grasp of. I'm just looking for a good beginner/intermediate book about server administration to learn as much as possible about Apache, as well as how to create a nice secure, robust server that doesn't crash at the first hint of an unusual traffic surge. Thanks to anyone who can point me in the right direction.


  • NHibernate Lazy="Extra"

    - by Adam Rackis
    Is there a good explanation out there on what exactly lazy="extra" is capable of? All the posts I've seen all just repeat the fact that it turns references to MyObject.ItsCollection.Count into select count(*) queries (assuming they're not loaded already). I'd like to know if it's capable of more robust things, like turning MyObject.ItsCollection.Any(o => o.Whatever == 5) into a SELECT ...EXISTS query. Section 18.1 of the docs only touches on it. I'm not an NH developer, so I can't really experiment with it and watch SQL Profiler without doing a bit of work getting everything set up; I'm just looking for some sort of reference describing what this feature is capable of. Thank you!


  • HTML Rendering Engine as a Java Control

    - by SvrGuy
    Hi All, We have a client-side application (Java/Swing) that we need an HTML rendering control for. What I want to find is the most widely adopted, most heavily developed, easiest-to-deploy solution for getting Gecko or WebKit into a Swing app (it needs to run on OS X and Windows). The limited (crappy?) JEditorPane-type solutions are not robust enough for our needs; we would really like to use either WebKit or Gecko. Some libraries seem to exist that would allow this:
    QtWebKit - http://trac.webkit.org/wiki/QtWebKit
    JRex - [cannot post URL because I am new]
    etc. What's the best library to achieve this?
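    As one further point of comparison, not mentioned in the question: JavaFX ships a WebKit-based WebView that can be embedded in Swing through a JFXPanel. A minimal sketch, assuming the JavaFX runtime is available on the classpath:

        import javax.swing.JFrame;
        import javax.swing.SwingUtilities;
        import javafx.application.Platform;
        import javafx.embed.swing.JFXPanel;
        import javafx.scene.Scene;
        import javafx.scene.web.WebView;

        public class BrowserPanel {
            public static void main(String[] args) {
                SwingUtilities.invokeLater(() -> {
                    JFrame frame = new JFrame("Embedded WebKit");
                    JFXPanel fxPanel = new JFXPanel();   // bridges JavaFX content into Swing
                    frame.add(fxPanel);
                    frame.setSize(1024, 768);
                    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    frame.setVisible(true);

                    // JavaFX scene graph objects must be created on the FX application thread.
                    Platform.runLater(() -> {
                        WebView view = new WebView();
                        view.getEngine().load("https://www.example.com/");
                        fxPanel.setScene(new Scene(view));
                    });
                });
            }
        }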


  • How to embed a Python interpreter in a PyQT widget

    - by Mathias
    I want to be able to bring up an interactive Python terminal from my Python application. Some, but not all, variables in my program need to be exposed to the interpreter. Currently I use a sub-classed and modified QPlainTextEdit and route all "commands" there to eval or exec, and keep track of a separate namespace in a dict. However, there has to be a more elegant and robust way! How? Here is an example doing just what I want, but it is with IPython and PyGTK: http://ipython.scipy.org/moin/Cookbook/EmbeddingInGTK


  • How can I load static configuration information

    - by Goro
    In my code, I use JavaScript for the UI and PHP for the back end. I also use PHP to store application settings, and sometimes my UI code needs to access this information. My configuration file looks something like this (config.php):
        $resolution_x = 1920;
        $resolution_y = 1080;
        etc...
    When I need to access any of these settings from JavaScript, I simply use <?php echo ... ?> to substitute the value directly, but it just doesn't strike me as very robust. Are there any dangers of doing this that I am not aware of? Is there a better way of doing this? Thank you,


  • What is a good Java web crawler library?

    - by DrDee
    Hi, I am about to develop a crawler in Java but don't feel like reinventing the wheel. A quick Google search gives a whole bunch of Java libraries for building a web crawler. Besides those, Nutch is of course a very robust package, but it seems a bit too advanced for my needs. I only need to crawl a handful of websites a week, each containing a couple of thousand pages. Which open source Java library would you recommend, considering: speed; multithreading (or even distributed operation); ease of extending it with new functionality; active maintenance and documentation?
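    For scale, here is a minimal sketch of the kind of crawl loop such libraries wrap, using jsoup for fetching and parsing (assumed to be on the classpath; the seed URL and page budget are made up). Real libraries add the politeness delays, robots.txt handling, retries, and multithreading that this leaves out.

        import java.util.ArrayDeque;
        import java.util.HashSet;
        import java.util.Queue;
        import java.util.Set;
        import org.jsoup.Jsoup;
        import org.jsoup.nodes.Document;
        import org.jsoup.nodes.Element;

        public class TinyCrawler {
            public static void main(String[] args) throws Exception {
                String site = "https://www.example.com/";      // hypothetical seed
                Queue<String> frontier = new ArrayDeque<>();
                Set<String> seen = new HashSet<>();
                frontier.add(site);

                while (!frontier.isEmpty() && seen.size() < 2000) {
                    String url = frontier.poll();
                    if (!seen.add(url)) continue;               // already fetched
                    Document doc = Jsoup.connect(url).get();    // fetch and parse one page
                    System.out.println(url + " -> " + doc.title());
                    for (Element link : doc.select("a[href]")) {
                        String next = link.absUrl("href");      // resolve relative links
                        if (next.startsWith(site)) {            // stay on the one site
                            frontier.add(next);
                        }
                    }
                }
            }
        }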


  • 2-Version software: Best VCS approach?

    - by Tom R
    I suppose I'd better explain my situation: I'm in the process of developing some software, and I'm at the stage where I'd like to split my project into two branches which differ in features. It so happens that this application is an Android application which I will be deploying on the Market, and the Market has the constraint that every app must have a unique package identifier (sensible, no?). My current approach has been to clone the git repo of my original project, but this causes issues with package names. I want the system to be robust enough that a bugfix/new feature on one branch will merge into another branch, but only when I want it to. Does anyone have any suggestions?


  • How should I store a Java Enum in JavaDB?

    - by Jonas
    How should I store a Java enum in JavaDB? Should I try to map the enums to SMALLINT and keep the values in source code only? The embedded database is only used by a single application. Or should I just store the values as DECIMAL? None of these solutions feels good/robust to me. Are there any better alternatives? Here is my enum:
        import java.math.BigDecimal;

        public enum Vat {
            NORMAL(new BigDecimal("0.25")),
            FOOD(new BigDecimal("0.12")),
            BOOKS(new BigDecimal("0.06")),
            NONE(new BigDecimal("0.00"));

            private final BigDecimal value;

            Vat(BigDecimal val) { value = val; }

            public BigDecimal getValue() { return value; }
        }
    I have read other similar questions on this topic, but the problem or solution doesn't match mine: Enum storage in Database field, Best method to store Enum in Database, Best way to store enum values in database - String or Int.
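    One common alternative, shown here only as a hedged sketch, is to persist the constant's name in a VARCHAR column and map it back with Vat.valueOf(), so the actual rates stay in source code. This uses plain JDBC against an embedded JavaDB/Derby database; the item table and connection URL are hypothetical.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class VatDao {
            public static void main(String[] args) throws SQLException {
                try (Connection con = DriverManager.getConnection("jdbc:derby:shopdb;create=true")) {
                    try (PreparedStatement ins =
                             con.prepareStatement("INSERT INTO item (name, vat) VALUES (?, ?)")) {
                        ins.setString(1, "Cookbook");
                        ins.setString(2, Vat.BOOKS.name());   // persisted as the string 'BOOKS'
                        ins.executeUpdate();
                    }
                    try (PreparedStatement sel =
                             con.prepareStatement("SELECT vat FROM item WHERE name = ?")) {
                        sel.setString(1, "Cookbook");
                        try (ResultSet rs = sel.executeQuery()) {
                            while (rs.next()) {
                                // Map the stored name back onto the enum constant.
                                Vat vat = Vat.valueOf(rs.getString("vat"));
                                System.out.println(vat + " -> " + vat.getValue());
                            }
                        }
                    }
                }
            }
        }

    Storing the name survives reordering of the constants, at the cost of a rename needing a data migration.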


  • Java JFileChooser getAbsoluteFile Add File Extension

    - by ikurtz
    I have this working, but I would like to know if there is a better way of adding the file extension. What I am doing right now is:
        String filePath = chooser.getSelectedFile().getAbsoluteFile() + ".html";
    I'm adding the extension hard-coded and then saving to it. I'm just wondering if there is a more robust/logical manner in which this can be implemented? Thank you for your time. EDIT: I ask this because I would like my app to be portable across platforms, so by adding .html manually I may be making this a Windows-only solution.
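    A minimal sketch of the usual pattern - append the extension only if the user has not already typed one, comparing case-insensitively, and pair it with a matching file filter (the class name and the surrounding save logic are hypothetical):

        import java.io.File;
        import javax.swing.JFileChooser;
        import javax.swing.filechooser.FileNameExtensionFilter;

        public class SaveAsHtml {
            // Adds ".html" only when the chosen name lacks it, so "Report.HTML"
            // and "page.html" are left untouched on any platform.
            static File withHtmlExtension(File chosen) {
                if (chosen.getName().toLowerCase().endsWith(".html")) {
                    return chosen;
                }
                return new File(chosen.getParentFile(), chosen.getName() + ".html");
            }

            public static void main(String[] args) {
                JFileChooser chooser = new JFileChooser();
                chooser.setFileFilter(new FileNameExtensionFilter("HTML files", "html"));
                if (chooser.showSaveDialog(null) == JFileChooser.APPROVE_OPTION) {
                    File target = withHtmlExtension(chooser.getSelectedFile());
                    System.out.println("Saving to " + target.getAbsolutePath());
                }
            }
        }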


  • How do I create a web service with rails?

    - by NotDan
    I have a Silverlight application that needs to talk to a Rails app to add a record. I have been able to get the Silverlight app to successfully do the POST, assuming everything goes well. Now, however, I need to make it more robust and have the Rails app return error/success messages to the Silverlight app in a format it can read (XML maybe?). I can modify the Rails app and the Silverlight app as needed. What is the best way to accomplish this with Rails?


  • How can I extract System.Net.Mail from Mono and rename the namespaces?

    - by JL
    Microsoft's implementation of System.Net.Mail does not provide a robust mailing solution. I would like to use Mono's implementation of System.Net.Mail instead; however, that namespace is embedded in the System.dll shipped with Mono and has exactly the same namespaces as the original .NET Framework. What I would like to do instead is extract System.Net.Mail from the Mono source and rename the namespaces to Mono.System.Net.Mail. Then I can compile this into its own DLL and finally have a mailing solution that works! Can anyone tell me how this can be done?


  • Choosing between YUI Charts or Google Visualization API

    - by r2b2
    Hello, I'm a bit stuck on which charting library to use in my project. I'm torn between these two (but am also open to other suggestions).
    YUI Charts - pros: very robust and configurable. Cons: uses Flash 9, which might be inaccessible for users without an up-to-date Flash version; does not support export to an image (for Flash versions < 10 only).
    Google Visualization API - pros: small file size for the libraries; can be exported to static image charts (via a separate API call). Cons: limited configuration options.
    So there, please help me decide. YUI Charts has the edge in configuration options, but the Google Visualization API has the edge in terms of accessibility, as it uses SVG to render the graphs instead of Flash. Users who are handcuffed by corporate IT prohibitions can't just upgrade their Flash version, and the page will simply not work for them. Thanks!


  • Real-time aggregation of files from multiple machines to one

    - by dmitry-kay
    I need a tool which takes a list of machine names and file wildcards, connects to all of these machines (over SSH), and begins to monitor changes (appends to the end) in each file matched by the wildcards. New lines in each such file are saved on the local machine to a file with the same name. (This is a task of real-time log-file collection.) I could use ssh + tail -f, of course, but it is not very robust: if a monitoring process dies and then restarts, some data from the remote files may be lost, because tail -f does not save the position at which it finished before. I could write this tool myself, but first I'd like to know whether such a tool already exists.
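    The robustness requirement is mostly about remembering how far each file has been read between runs. A minimal Java sketch of just that idea, for a single local file (file names are hypothetical; the SSH transport and wildcard matching are left out):

        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.nio.file.Files;
        import java.nio.file.Path;

        public class ResumableTail {
            // Reads whatever was appended to the log since the last run, then records
            // how far it got, so a crash and restart neither loses nor repeats data.
            public static void main(String[] args) throws IOException {
                Path log = Path.of("collected.log");        // hypothetical file being followed
                Path state = Path.of("collected.offset");   // where the last position is kept

                long offset = Files.exists(state)
                        ? Long.parseLong(Files.readString(state).trim())
                        : 0L;

                try (RandomAccessFile raf = new RandomAccessFile(log.toFile(), "r")) {
                    if (offset > raf.length()) {
                        offset = 0L;                         // file was rotated or truncated
                    }
                    raf.seek(offset);
                    String line;
                    while ((line = raf.readLine()) != null) {
                        System.out.println(line);            // append to the local copy here
                    }
                    Files.writeString(state, Long.toString(raf.getFilePointer()));
                }
            }
        }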


  • How can I pipe a large amount of data as a runtime argument?

    - by Zombies
    Running an executable JAR on a Linux platform here. The program itself works on a somewhat large amount of data, basically a list of URLs... could be up to 2k. Currently I get this from a simple DB call. But I was thinking that instead of creating a new mode and writing SQL to get a new result set and having to redeploy every time, I could just make the program more robust by passing in the result set (the list of URLs) that needs to be worked on... so, within a Linux environment, is there a pain-free/simple way to get the result set and pass it in dynamically? I know file I/O is one, but it doesn't seem efficient because each file has to be named, plus more logic is needed to handle grabbing the correct file, creating a file with a unique name, etc.
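    One pain-free option on Linux is to read the list from standard input, so any command can pipe the URLs in without temporary files. A minimal sketch (class name hypothetical):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.nio.charset.StandardCharsets;
        import java.util.ArrayList;
        import java.util.List;

        public class StdinUrls {
            // Reads one URL per line from standard input until EOF.
            public static void main(String[] args) throws IOException {
                List<String> urls = new ArrayList<>();
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(System.in, StandardCharsets.UTF_8))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        if (!line.isBlank()) {
                            urls.add(line.trim());
                        }
                    }
                }
                System.out.println("Got " + urls.size() + " URLs to work on");
                // ... process the URLs here ...
            }
        }

    It could then be fed from a file (java -jar app.jar < urls.txt) or have a database client's query output piped straight into the same command.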


  • Must See Conference Videos for Python/Django Developers

    - by Koobz
    There are lots of good conference videos online regarding Python and Django development. Instead of watching ST:TNG at the computer, I figure it'd be more productive to hone my knowledge. Fire away with some of your most inspiring and educational Python, Django, or simply programming-related talks, and provide an explanation of why you found the talk useful. Example: James Bennett on Reusable Apps - got me to take a serious look at Django apps. I put together a fairly robust site in two days afterwards with django-cms, django-photologue, and django-contact-form. Good advice on when your app is crossing boundaries and why it's good to err on the side of "make it a separate app."


  • "conveyor belt" cache architecture

    - by Andrew Matthews
    I'm producing an application with a few peculiar internal communication characteristics that make the usual suspects for data storage and transport (queues and RDBMSs) ill-fitted. I'm wondering whether there is a product out there that matches the following characteristics:
    all data put into it is persistent
    all reads are delivered out of memory
    data is universally available
    data lives where it is most needed
    data is versioned (nice to have)
    updates are transactional (I'd like ACID characteristics)
    data is potentially replicated, but always in sync
    works on Windows
    is based on or has bindings for .NET
    is really fast
    is really robust
    is redundant
    is scalable
    I'm looking at things like Microsoft codename "Velocity", but I am not sure whether it fits all of the above characteristics. Likewise, memcached is not a perfect fit either. The current version of this app opts for an RDBMS with a signaling system for inter-system sync, but latency is too high and versioning of the DB is a pain. I need all the robustness, but with none of the trade-offs.


  • Better algorithm for estimating download time

    - by Scott Smith
    We've all seen the running download-time estimate that initially says something like "7 days", but keeps dropping wildly (e.g. "23 hours", "45 minutes", "1 min. 50 sec", etc.) with each successive estimate as the chunks are downloaded. To avoid these alarming initial estimates, there are techniques one could try, like suppressing display of the first n estimates, or waiting for the delta between estimates to drop below some threshold before you start displaying them, but these don't seem like a general, robust solution. There are corner cases involving too few samples, or samples that actually are wildly varying... I think I recall a general solution for this kind of thing in mathematics (statistics?) that reduces or eliminates these wild values. Does anyone know?
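    The usual statistical tool here is an exponentially weighted moving average: smooth the measured throughput and derive the remaining time from the smoothed rate, so one slow or fast chunk barely moves the estimate. A minimal sketch (the smoothing factor and sample numbers are made up):

        public class SmoothedEta {
            private static final double ALPHA = 0.1;   // smoothing factor: lower = steadier
            private double smoothedRate = -1;          // bytes per second, -1 = no sample yet

            // Feed in how many bytes arrived over the last interval (in seconds);
            // returns the estimated seconds remaining for bytesLeft.
            public double update(long bytesThisInterval, double intervalSeconds, long bytesLeft) {
                double rate = bytesThisInterval / intervalSeconds;
                smoothedRate = (smoothedRate < 0)
                        ? rate                                        // first sample seeds the average
                        : ALPHA * rate + (1 - ALPHA) * smoothedRate;  // later samples only nudge it
                return bytesLeft / smoothedRate;
            }

            public static void main(String[] args) {
                SmoothedEta eta = new SmoothedEta();
                long remaining = 100_000_000;
                long[] chunks = {50_000, 400_000, 350_000, 600_000};  // wildly varying samples
                for (long chunk : chunks) {
                    remaining -= chunk;
                    System.out.printf("ETA: %.0f s%n", eta.update(chunk, 1.0, remaining));
                }
            }
        }

    Holding the display back until a few samples have arrived still helps, but the smoothing is what stops the wild swings.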


  • Strategy to structure a search index in a relational database

    - by neilc
    I am interested in suggestions for building an efficient and robust structure for indexing products in a new database I am building (I'm using MySQL). When a product is entered through the form, there are three parts I am interested in indexing for search purposes: the product title, the product description, and tags. The most important is the title, followed by tags, followed by the description. I was thinking of using the following structure:
        CREATE TABLE `searchindex` (
            `id` INT NOT NULL,
            `word` VARCHAR(255) NOT NULL,
            `weighting` INT NOT NULL,
            `product_id` INT NOT NULL,
            PRIMARY KEY (`id`)
        )
    Then each time a product is created I would split apart the title, description, and tags (removing common words) and award them a weighting. Then it is trivial to select out the words and corresponding products and order them by weighting. Is there a better way to do this? I am worried that this strategy would slow down over time as the database fills up.
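    A minimal sketch of the indexing step described above, using plain JDBC (the stopword list and the per-field weightings are illustrative, and the id column is assumed to be auto-generated):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.Set;

        public class ProductIndexer {
            private static final Set<String> STOPWORDS = Set.of("the", "a", "an", "and", "of", "for");

            // Splits one field into words, skips common words, and queues one row
            // per word with a weighting reflecting which field it came from.
            static void indexField(PreparedStatement ps, int productId,
                                   String text, int weighting) throws SQLException {
                for (String word : text.toLowerCase().split("\\W+")) {
                    if (word.isEmpty() || STOPWORDS.contains(word)) continue;
                    ps.setString(1, word);
                    ps.setInt(2, weighting);
                    ps.setInt(3, productId);
                    ps.addBatch();
                }
            }

            public static void index(Connection con, int productId,
                                      String title, String description, String tags) throws SQLException {
                String sql = "INSERT INTO searchindex (word, weighting, product_id) VALUES (?, ?, ?)";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    indexField(ps, productId, title, 3);        // title weighs most
                    indexField(ps, productId, tags, 2);         // then tags
                    indexField(ps, productId, description, 1);  // then the description
                    ps.executeBatch();
                }
            }
        }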

