Search Results

Search found 12291 results on 492 pages for 'fluent api'.

Page 457 of 492

  • wx_ref and custom wx_objects

    - by Iogann
    Hi! I am developing an MDI application with the help of wxErlang. I have a parent frame, implemented as a wx_object: -module(main_frame). -export([new/0, init/1, handle_call/3, handle_event/2, terminate/2]). -behaviour(wx_object). .... And I have a child frame, implemented as a wx_object too: -module(child_frame). -export([new/2, init/1, handle_call/3, handle_event/2, terminate/2]). -export([save/1]). -behaviour(wx_object). % some public API method save(Frame) -> wx_object:call(Frame, save). .... I want to call save/1 for the active child frame from the parent frame. Here is my code: ActiveChild = wxMDIParentFrame:getActiveChild(Frame), case wx:is_null(ActiveChild) of false -> child_frame:save(ActiveChild); _ -> ignore end This code fails because ActiveChild is a #wx_ref{} with state=[], but wx_object:call/2 needs a #wx_ref{} whose state is set to the pid of the process we are calling. What is the right way to do this? The only thing I could think of is to store a list of all created child frames with their pids in the parent frame and search for the pid in that list, but this is ugly.

  • Same Origin issue with web service call

    - by Wjdavis5
    My web server runs at http://mypc.com:80. Given the following snippet: $(window).load(function () { var myURL = "http://mypc.com:8000/PSOCharts/service/HighChart_ColumnChart/i"; $.getJSON(myURL) .done(function(data) {alert(data);}); }); I am running into this error: XMLHttpRequest cannot load http://mypc.com:8000/PSOCharts/service/HighChart_ColumnChart/i. Origin http://mypc.com is not allowed by Access-Control-Allow-Origin. I understand why (I think): my web service runs on port 8000, which is different from the port IIS is running on (port 80). I thought I could get around it by using JSONP (according to the jQuery documentation here), so I copied the example of making a call to the flickr API, but it isn't working. Any thoughts/suggestions? UPDATE: OK, so my request is being made now: var myURL = "http://192.168.1.104:8000/PSOCharts/service/HighChart_ColumnChart/i?jsoncallback=?"; $.ajax({ url :myURL, dataType: "jsonp", success: function(data) {a(data)} , error: function(){alert("err");}, }); But I am continually hitting the error function. Here is what's being returned: [1.4,54.43,49.39,93.23] Now I'm assuming this is because the response text doesn't contain any kind of callback wrapper. Here is the part of the interface I'm calling: [WebInvoke(Method = "GET", RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json, UriTemplate = "HighChart_ColumnChart/{id}?callback={cb}")] List<double> HighChart_ColumnChart(string id,string cb); Here is the actual function being called: public List<double> HighChart_ColumnChart(string id,string cb) { var z = new List<double>(); z.Add(1.4); z.Add(54.43); z.Add(49.39); z.Add(93.23); return z; } When I debug, the cb param is something like "jQuery19108121746340766549_1372630643878". How do I modify the code to wrap it correctly? Thanks for the help thus far!
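
    A hedged sketch of one way to produce the padded response on the service side, assuming the operation can be changed to return a raw Stream; the serializer and content type below are illustrative, not taken from the original service. Note also that the callback parameter name in the request URL should match the callback={cb} part of the UriTemplate.

    ```csharp
    using System.Collections.Generic;
    using System.IO;
    using System.ServiceModel.Web;
    using System.Text;
    using System.Web.Script.Serialization;

    // Inside the service implementation class:
    public Stream HighChart_ColumnChart(string id, string cb)
    {
        var z = new List<double> { 1.4, 54.43, 49.39, 93.23 };
        string json = new JavaScriptSerializer().Serialize(z);

        // Wrap the JSON in the callback name WCF extracted from callback={cb},
        // so jQuery's JSONP handler can evaluate the response as script.
        string body = string.IsNullOrEmpty(cb) ? json : cb + "(" + json + ");";

        WebOperationContext.Current.OutgoingResponse.ContentType = "application/javascript";
        return new MemoryStream(Encoding.UTF8.GetBytes(body));
    }
    ```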

  • Has anyone ever used MangoSlick from ThemeForest?

    - by bonesnatch
    I was assigned to integrate the MangoSlick theme into our current admin panel. It's a jQuery, Slick(?) and responsive template. First, let me explain how the API goes. In the documentation, it only says this is the only way: data-[options]=[value] Example: if I want to make a progress bar I can use this format <div class="progress"> <div class="bar" data-title="[title]" data-value="[value]" data-max="[max]" data-format="[format string]"></div> </div> so, filling in values <div class="progress"> <div class="bar" data-title="Space" data-value="1285" data-max="5120" data-format="0,0 MB"></div> </div> I will have this as output. Now, the main question is: when I use jQuery attr() to change the attribute values for data-title, data-max, data-value and data-format, why is it not working? <script> var jq = $.noConflict(); jq(document).ready(function(){ jq('#bokz').attr("data-title", "No Space"); }); </script> Using the script above and inspecting the element in Chrome, the values are changed, but the progress bar does not update. Does anyone have any ideas on this? Any help/suggestion would be very much appreciated.
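
    For what it's worth, the usual explanation for this behaviour is that the theme's script reads the data-* attributes only once, when it initializes the widget, so changing them later with attr() updates the DOM but not the already-rendered bar. A minimal sketch of that idea follows; the re-initialization call is hypothetical and depends on whatever routine MangoSlick itself exposes.

    ```javascript
    var jq = $.noConflict();
    jq(document).ready(function () {
        var bar = jq('#bokz');

        bar.attr('data-title', 'No Space'); // updates the attribute in the DOM...
        bar.data('title', 'No Space');      // ...and jQuery's cached data() copy

        // The bar was drawn from the old values, so the plugin's own setup has to
        // run again for the change to become visible, e.g.:
        // jq('.progress .bar').initProgressBar();   // hypothetical, plugin-specific
    });
    ```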

  • How to Sort a TreeList in Sitecore 6 in the Source

    - by Scott
    My team uses Sitecore 6 as our content management system, with .Net code interfacing with the Sitecore API. In many of our templates we make use of a Treelist. When adding a new item to the selected items of a Treelist, it automatically puts the item at the bottom of the list, and some of these lists get very large. In most cases end users would like to see these lists sorted descending by a Date field that is part of the templates that can be added to the Treelist. Programmatically on the .Net side it's very easy to handle this using Linq OrderByDescending, and everything displays great to site visitors. What I am trying to figure out is how to get the same ordering to display in the Sitecore Content Editor. I've not found anything from a Google search other than that there seems to be a SortBy you can specify in the source, but I tried this and couldn't get it to have any effect. Has anyone dealt with this before? Again, the main goal is to sort items in a Treelist in the Sitecore Content Editor itself. Thanks for any input.

  • Mono Text Based Web Browser

    - by powerbox
    Hi guys, is there any public text-based web browser implementation for C#, or a Mono-based API, that I can use to fill out web forms automatically? I'll be using it to automate some web tasks that do not require any image authentication. I'm currently using the WebBrowser control available in the .Net Framework, waiting for the WebBrowserDocumentCompletedEventHandler event to fire after a page is successfully loaded, and then invoking some actions like Submit or simulating a mouse click on some links. It actually does the job, but I can't process bulk transactions since I need to wait for the whole page to be loaded together with the images and other stuff. It is easy to use HttpWebRequest to fill out some forms, provide some data and then submit, but on some occasions I only need to simulate a mouse click on a certain link, which I don't know how to do with HttpWebRequest. By the way, using HttpWebRequest would still download all the images of a web page, which seems pointless since I only need to provide correct data back to the server. I hope someone can point me to the correct way of doing this kind of automation. Thanks in advance!
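
    For what it's worth, a plain HttpWebRequest GET returns only the HTML; images are fetched only if you request them separately. A rough sketch of the "click a link" case follows: clicking amounts to parsing the href out of the page you already downloaded and issuing another GET for it. The URL, link text and regex are placeholders.

    ```csharp
    using System;
    using System.IO;
    using System.Net;
    using System.Text.RegularExpressions;

    class LinkFollower
    {
        static void Main()
        {
            var startUrl = new Uri("http://example.com/start");   // placeholder
            string page = Download(startUrl);

            // Find the link we want to "click" (a real crawler would use an HTML parser).
            Match link = Regex.Match(page, "<a[^>]+href=\"([^\"]+)\"[^>]*>\\s*Next\\s*</a>",
                                     RegexOptions.IgnoreCase);
            if (link.Success)
            {
                // "Clicking" the link is just another GET to its resolved href.
                string next = Download(new Uri(startUrl, link.Groups[1].Value));
                Console.WriteLine(next.Length);
            }
        }

        static string Download(Uri url)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
                return reader.ReadToEnd();
        }
    }
    ```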

  • Reference remotely located assembly (web uri) from locally installed application?

    - by moonground.de
    Hi Stackoverflowers! :) We have a .NET application for Windows which is installed locally by Microsoft Installer. We now need to use additional assemblies which are located online on our web servers. We'd like to refer to a remote URI like https://www.ourserver.com/OurProductName/ExternalLib.dll and reveal additional functionality, which is described roughly by a known common ("AddIn/Plugin") interface. These are not 3rd-party plugins; we just want to be able to exchange parts of the application frequently, without the need for frequent software updates. Our first idea was to add some kind of "remote reference" in Visual Studio by setting the path to the remote assembly URI, but Visual Studio immediately downloaded the assembly to a temporary directory and added a reference to it. Our second attempt is simply using a WebRequest (or WebClient) to retrieve a binary stream of the assembly and loading it "from image" with Assembly.Load(...). This actually works, but it is not very elegant and requires more additional programming for verification etc. We hoped ClickOnce would provide useful techniques, but apparently it's suitable for standalone applications only. (Correct me?) Is there a way (.NET native or via a framework/API) to reference remotely located assemblies? Thanks in advance and have a happy Easter!
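
    For reference, a minimal sketch of the "load from image" approach described above; IAddIn stands in for the known common interface, and the verification step is only indicated, not implemented.

    ```csharp
    using System;
    using System.Linq;
    using System.Net;
    using System.Reflection;

    public interface IAddIn { void Run(); }   // stand-in for the common AddIn/Plugin interface

    public static class RemoteAssemblyLoader
    {
        public static IAddIn Load(string url)
        {
            using (var client = new WebClient())
            {
                byte[] raw = client.DownloadData(url);

                // Verify the bytes before loading them (e.g. a hash or signature
                // published alongside the DLL); omitted in this sketch.
                Assembly assembly = Assembly.Load(raw);

                Type addInType = assembly.GetTypes()
                    .First(t => typeof(IAddIn).IsAssignableFrom(t) && !t.IsAbstract);
                return (IAddIn)Activator.CreateInstance(addInType);
            }
        }
    }
    ```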

  • Design patterns for Caching Images in an MVC?

    - by Onema
    Hi, I'm designing an image cache system that will be used in an MVC CMS. The main purpose of the image cacher is to modify images (scale, crop, etc.) and cache them on the client site. I have created an image cache Model and Mapper that interact with the database to keep track of the images and know what kind of actions have been applied to them (scale, crop, etc.). In addition to the Model and Mapper I have created an ImageCacher class that is used by the API to manage the Model and the image creation based on arguments passed by the client site; this class creates the images and generates the links to the images for the View. A coworker argued that I need to include the functionality of this last class inside the Model, as the bulk of the logic should go in the model. I respectfully disagree with him, since I feel the Model's responsibility is to deal with the information about the images cached at the database level, and the responsibility of the ImageCacher class is to create the url/image that we will be caching (keeping the single responsibility principle). In addition, I believe that a model should not have presentation-related features, like creating or showing images. Does anyone have any insight on this? Is there a particular design pattern that would make this division of tasks clear and the image cacher reusable? Should I add all the logic to the Model? Thank you.

  • Flash Player security: if a URL starts with "http://", will the SWF always be loaded into the REMOTE sandbox?

    - by Pavel
    This seems to be a question for a Flash security guru. Suppose we are loading an external SWF movie with MovieClipLoader.loadMovie(url:String). Is it safe to assume that if url starts with "http://", the movie will be loaded in the REMOTE sandbox? We need to tell local SWFs from remote ones to close a security hole. If you need the context, read on. We have developed a projector, written in C++, embedding the Flash Player ActiveX control. Our Flash application runs inside the projector. Soon we want to give our users a way to create plugins for the application; the plugins will obviously be SWF movies. The case I'm afraid of is the following: a bad person creates a malicious evil.swf pretending to be a nice plugin for our app. If evil.swf is loaded from the local file system, it is granted access to the whole MovieClip tree and the projector API, opening up the C++ file access operations. On the other hand, if evil.swf is loaded from the internet, remotely, it will be locked in the REMOTE sandbox by the Flash security model. Because of this, we need a reliable way to tell a local SWF from a remote one before loading it, and we must not make a mistake. So again, is it safe to assume that if url begins with "http://", the clip will be loaded inside the REMOTE sandbox?

  • Sample twitter App

    - by Jack
    I am now running my code on a web hosting service, http://xtreemhost.com/: <?php function updateTwitter($status) { $username = 'xxxxxx'; $password = 'xxxx'; $url = 'http://twitter.com/statuses/update.xml'; $postargs = 'status='.urlencode($status); $responseInfo=array(); $ch = curl_init($url); curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt ($ch, CURLOPT_POST, true); // Give CURL the arguments in the POST curl_setopt ($ch, CURLOPT_POSTFIELDS, $postargs); // Set the username and password in the CURL call curl_setopt($ch, CURLOPT_USERPWD, $username.':'.$password); // Set some cur flags (not too important) $response = curl_exec($ch); if($response === false) { echo 'Curl error: ' . curl_error($ch); } else { echo 'Operation completed without any errors<br/>'; } // Get information about the response $responseInfo=curl_getinfo($ch); // Close the CURL connection curl_close($ch); // Make sure we received a response from Twitter if(intval($responseInfo['http_code'])==200){ // Display the response from Twitter echo $response; }else{ // Something went wrong echo "Error: " . $responseInfo['http_code']; } curl_close($ch); } updateTwitter("Just finished a sweet tutorial on http://brandontreb.com"); ?> I now get the following error: Curl error: Couldn't resolve host 'api.twitter.com' Error: 0 Can somebody please help me solve this problem?
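
    Since the error is a DNS failure rather than anything in the cURL options, a small diagnostic sketch like the one below (hypothetical, independent of the Twitter code) can confirm whether the hosting environment allows outbound lookups and connections at all; free hosts frequently block them.

    ```php
    <?php
    $host = 'twitter.com';
    $ip = gethostbyname($host);            // returns the hostname unchanged on failure

    if ($ip === $host) {
        echo "DNS lookup for $host failed - outbound lookups may be blocked\n";
    } else {
        $fp = @fsockopen($ip, 80, $errno, $errstr, 5);
        if ($fp) {
            echo "Outbound connection to $ip:80 OK\n";
            fclose($fp);
        } else {
            echo "Connection to $ip:80 failed: $errstr ($errno)\n";
        }
    }
    ```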

  • Portable way of counting milliseconds in C++ ?

    - by ereOn
    Hi, is there any portable (Windows & Linux) way of counting how many milliseconds elapsed between two calls? Basically, I want to achieve the same functionality as the StopWatch class of .NET (for those who have already used it). In a perfect world I would have used boost::date_time, but that's not an option here due to some silly rules I'm forced to respect. For those who read code better, this is what I'd like to achieve: Timer timer; timer.start(); // Some instructions here timer.stop(); // Print out the elapsed time std::cout << "Elapsed time: " << timer.milliseconds() << "ms" << std::endl; So, if there is a portable (set of) function(s) that can help me implement the Timer class, what is it? If there is no such function, which Windows & Linux APIs should I use to achieve this functionality (using #ifdef WINDOWS-like macros)? Thanks!
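
    Since the question already sketches the desired interface, here is a minimal Timer along those lines built on std::chrono::steady_clock, which is portable between Windows and Linux when a C++11 compiler is available; without C++11, the usual fallback is the #ifdef route with QueryPerformanceCounter on Windows and clock_gettime on Linux.

    ```cpp
    #include <chrono>
    #include <cstdint>
    #include <iostream>

    class Timer {
    public:
        void start() { begin_ = Clock::now(); running_ = true; }
        void stop()  { end_ = Clock::now(); running_ = false; }

        // Milliseconds between start() and stop() (or "now" if still running).
        std::int64_t milliseconds() const {
            const auto end = running_ ? Clock::now() : end_;
            return std::chrono::duration_cast<std::chrono::milliseconds>(end - begin_).count();
        }

    private:
        using Clock = std::chrono::steady_clock;
        Clock::time_point begin_{};
        Clock::time_point end_{};
        bool running_ = false;
    };

    int main() {
        Timer timer;
        timer.start();
        // Some instructions here
        timer.stop();
        std::cout << "Elapsed time: " << timer.milliseconds() << "ms" << std::endl;
    }
    ```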

  • How often should network traffic/collisions cause SNMP Sets to fail?

    - by A. Levy
    My team has a situation where an SNMP SET will fail once every two weeks or so. Since this set happens automatically, we don't necessarily notice it immediately when it fails, and this can result in an inconsistent configuration and associated wailing and gnashing of teeth. The plan is to fix this by having our software automatically retry the SET when it fails. The problem is, we aren't sure why the failure is happening. My (extremely limited) knowledge of SNMP isn't particularly helpful in diagnosing this problem, so I thought I'd ask StackOverflow for some advice. We think that every so often a spike in network traffic causes the SET to fail. Since SNMP uses UDP for communication, I would think it would be relatively easy for a command to be drowned out if traffic was high for a short period of time. However, I have no idea how common this is. We have a small network with a single Cisco router, and there are fewer than a dozen SNMP-controlled devices on that network. In addition to the SNMP traffic, there are some status web pages being loaded from the various devices. In case it makes a difference, I believe we are using the AdventNet SNMP API version 4.0.4 for Java. Does it sound reasonable that some SET commands will be dropped occasionally, or should we be looking for other causes?

  • Confused about std::runtime_error vs. std::logic_error

    - by David Gladfelter
    I recently saw that the boost program_options library throws a logic_error if the command-line input was unparsable. That challenged my assumptions about logic_error vs. runtime_error. I assumed that logic errors (logic_error and its derived classes) were problems that resulted from internal failures to adhere to program invariants, often in the form of illegal arguments to internal APIs. In that sense they are largely equivalent to ASSERTs, but meant to be used in released code (unlike ASSERTs, which are not usually compiled into released code). They are useful in situations where it is infeasible to integrate separate software components in debug/test builds, or where the consequences of a failure are such that it is important to give runtime feedback about the invalid invariant condition to the user. Similarly, I thought that runtime_errors resulted exclusively from runtime conditions outside of the control of the programmer: I/O errors, invalid user input, etc. However, program_options is obviously heavily (primarily?) used as a means of parsing end-user input, so under my mental model it certainly should throw a runtime_error in the case of bad input. Where am I going wrong? Do you agree with the boost model of exception typing?
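
    A small illustration of the conventional split described above (not of what program_options actually does): contract violations by the caller map to logic_error, while failures caused by the environment or by end-user input map to runtime_error.

    ```cpp
    #include <fstream>
    #include <stdexcept>
    #include <string>

    // The caller broke a documented precondition: a programming error.
    void set_percentage(int value) {
        if (value < 0 || value > 100)
            throw std::logic_error("percentage out of range: " + std::to_string(value));
    }

    // The file may legitimately be missing at run time: an environmental error.
    void load_config(const std::string& path) {
        std::ifstream in(path);
        if (!in)
            throw std::runtime_error("cannot open config file: " + path);
    }
    ```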

  • Is Storing Cookies in a Database Safe?

    - by viatropos
    If I use mechanize, I can, for instance, create a new Google Analytics profile for a website. I do this by programmatically filling out the login form and storing the cookies in the database. Then, at least until the cookie expires, I can access my analytics admin panel without having to enter my username and password again. Assuming you can't create a new analytics profile any other way (with OpenAuth or any of that; I don't think it works for actually creating a new Google Analytics profile, since the Analytics API is for viewing the data, and I need to create a new analytics profile), is storing the cookie in the database a bad thing? If I do store the cookie in the database, it makes it super easy to programmatically log in to Google Analytics without the user ever having to go to the browser (maybe the app has functionality that says "user, you can schedule a hook that creates a new analytics profile for each new domain you create; just enter your credentials once and we'll keep you logged in and safe"). Otherwise I have to keep transferring around emails and passwords, which seems worse. So is storing cookies in the database safe?

  • PHP and storing stats

    - by John
    Using PHP5 and the latest version of MySQL, I want to be able to track impressions and clicks for business listings. My question is: if I did this myself, what would be the best method of storing the data so I can run reports? Previously I just had a table that held the listing id, the user's IP address, whether the hit was a click or an impression, and the date it was tracked. However, the database itself is approaching 2GB of data and it's very slow; part of the problem is that it's a pretty simple script which records impressions and clicks from anyone, including search engines and basically anyone or anything that accesses the listing page. Is there an API or file out there with an up-to-date list that can detect whether the visitor is actually a person and not a spider, so I don't fill up the database with unneeded stats? Just looking for suggestions: do I just have a raw table that gets only the hits, then a cron job at night to tally up the day's totals for each listing and each IP and store the cumulative stats in a different table? Also, what storage engine should it be? InnoDB? MyISAM?
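
    A sketch of the nightly roll-up idea, assuming MySQL and hypothetical listing_hits and listing_stats_daily tables: the raw table keeps only a short window of hits, while reports read from the small summary table.

    ```sql
    -- Aggregate yesterday's raw hits into one row per listing per day.
    INSERT INTO listing_stats_daily (listing_id, stat_date, impressions, clicks)
    SELECT listing_id,
           DATE(tracked_at),
           SUM(hit_type = 'impression'),
           SUM(hit_type = 'click')
    FROM listing_hits
    WHERE tracked_at >= CURDATE() - INTERVAL 1 DAY
      AND tracked_at <  CURDATE()
    GROUP BY listing_id, DATE(tracked_at)
    ON DUPLICATE KEY UPDATE
      impressions = VALUES(impressions),
      clicks      = VALUES(clicks);

    -- Keep the raw table small once the day has been summarised.
    DELETE FROM listing_hits WHERE tracked_at < CURDATE() - INTERVAL 7 DAY;
    ```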

  • Parsing Windows Event Logs, is it possible?

    - by xceph
    Hello, I am doing a little research into the feasibility of a project I have in mind. It involves doing a little forensic work on images of hard drives, and I have been looking for information on how to analyze saved Windows event log files. I do not require the ability to monitor current events; I simply want to be able to view events which have been created, and record the time and the application/process which created those events. However, I do not have much experience with the inner workings of Windows system specifics, and am wondering if this is possible. The plan is to create images of a hard drive and then do the analysis on a second machine. Ideally this would be done in either Java or Python, as they are my most proficient languages. The main concerns I have are as follows: Is this information encrypted in any way? Are there any existing APIs for parsing this data directly? Is there information available regarding the format in which these logs are stored, and how does it differ between Windows versions? This must be possible by analyzing the drive itself, as ideally the installation of Windows on the drive would not be running (it would be a mounted image on another system). The closest thing I could find in my searches is http://www.j-interop.org/ but that seems to be aimed at remote clients. Ideally nothing would have to be installed on the imaged drive. The other solution which seemed to pop up is the JNI library, but that also seems to be more in the area of monitoring a running system. Any help at all is greatly appreciated. :)

  • Common methods/implementation across multiple WCF Services

    - by Rob
    I'm looking at implementing some WCF services as part of an API for 3rd parties to access data within a product I work on. There is currently a set of services exposed as "classic" .net web services, and I need to emulate the behaviour of these, at least in part. The existing services all have an AcquireAuthenticationToken method that takes a set of parameters (username, password, etc.) and returns a session token (represented as a GUID), which is then passed in on calls to any other method (there's also a ReleaseAuthenticationToken method, no guesses needed as to what that does!). What I want to do is implement multiple WCF services, such as ProductData and UserData, and have both of these services share a common implementation of Acquire/Release. From the base project that is created by VS2k8, it would appear I will start with, per service: public class ServiceName : IServiceName { } public interface IServiceName { } Therefore my questions would be: 1. Will WCF tolerate me adding a base class to this, public class ServiceName : ServiceBase, IServiceName, or does the fact that there's an interface involved mean that won't work? 2. If the answer to question 1 is "no, it won't work", could I change IServiceName so it extends another interface, IServiceBase, thus forcing the presence of Acquire/Release methods but then having to supply the implementation in each service? 3. Are 1 and 2 both really bad ideas, and is there actually a much better solution that, knowing next to nothing about WCF, I just haven't thought of?
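
    For what it's worth, a hedged sketch of options 1 and 2 combined, which WCF does accept: the contracts can extend a shared IServiceBase, and the service class can inherit an implementation base class whose public methods satisfy the inherited contract operations (the Guid-based token handling and the GetProductName operation are placeholders).

    ```csharp
    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IServiceBase
    {
        [OperationContract]
        Guid AcquireAuthenticationToken(string username, string password);

        [OperationContract]
        void ReleaseAuthenticationToken(Guid token);
    }

    [ServiceContract]
    public interface IProductData : IServiceBase
    {
        [OperationContract]
        string GetProductName(Guid token, int productId);   // placeholder operation
    }

    // Shared implementation of the Acquire/Release pair.
    public abstract class ServiceBase
    {
        public Guid AcquireAuthenticationToken(string username, string password)
        {
            return Guid.NewGuid();           // real session handling goes here
        }

        public void ReleaseAuthenticationToken(Guid token)
        {
            // invalidate the session token here
        }
    }

    // Each concrete service only adds its own operations.
    public class ProductData : ServiceBase, IProductData
    {
        public string GetProductName(Guid token, int productId)
        {
            return "example product";        // placeholder
        }
    }
    ```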

  • Major jQuery bug on IE not reproducible - what can I do in this situation to solve this bug?

    - by ming yeow
    I am hoping to get some help on this issue. Some users on IE have been reporting this JavaScript issue, but I have been unable to reproduce it. In essence, for some class of Windows IE users, the game doesn't work (or $.ajax() is not working). What I know: I swapped out an ajax call (ajax_init_trainer) and used a standard link with some request parameters to do the initialization, and people seemed to get past the problem until they hit the next ajax call. I read somewhere that IE does crazy caching, so you need to make the URLs unique, which is why I added the _requestno parameter. However, setting cache:false is said to do this as well. This didn't fix it for someone who was complaining. function done(res, status) { var data = JSON.parse(res.responseText); hide_loading(); if (status == "success") { window.location.href="/bamo/battle/?{{ fb_sig}}"; } else { display_alert("Problem!",data.msg,$("#notifications")); } }; $(".monster_select_class").click(function() { $(this).attr("src","{{MEDIA_URL}}/bamo/button_select_click.png"); monster_class = $(this).attr("monster_class"); monster_type = $(this).attr("monster_type"); ajax_init_trainer(monster_class,monster_type); }); function ajax_init_trainer(trainer_class,monster_type) { var data = {trainer_class:trainer_class,monster_type:monster_type}; var d = new Date(); var args = { type:"POST",url:"/bamo/api/init_trainer/?_requestno="+d.getTime(),data:data,contentType:"application/json;", dataType: "json",cache:false,complete:done}; $.ajax(args); return false; };
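
    For comparison, here is a hedged rewrite of the ajax setup with two things removed that commonly break only on older IE: a trailing comma after the last property in an object literal (a syntax error on IE7 and earlier, as in the error handler of the updated $.ajax call above) and the bare JSON.parse call (not available before IE8). The contentType option is also dropped, since the data is sent form-encoded, and the server-side template placeholders are omitted here.

    ```javascript
    function done(res, status) {
        var data = jQuery.parseJSON(res.responseText);   // works where bare JSON.parse does not
        hide_loading();
        if (status === "success") {
            window.location.href = "/bamo/battle/";
        } else {
            display_alert("Problem!", data.msg, $("#notifications"));
        }
    }

    function ajax_init_trainer(trainer_class, monster_type) {
        $.ajax({
            type: "POST",
            url: "/bamo/api/init_trainer/?_requestno=" + new Date().getTime(),
            data: { trainer_class: trainer_class, monster_type: monster_type },
            dataType: "json",
            cache: false,
            complete: done          // note: no trailing comma after the last option
        });
        return false;
    }
    ```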

  • How to get a nicely formatted PHP Web Service response?

    - by Bruno
    I called an API like this: $service = new Class_Service(); $parameters = new GetClasses(); $parameters->Request = new GetClassesRequest(); $parameters->Request->SourceCredentials = new SourceCredentials(); $parameters->Request->SourceCredentials->SourceName = "Name"; $parameters->Request->SourceCredentials->Password = "Pass"; $parameters->Request->SourceCredentials->SiteIDs = array( 12 ); $classes = $service->GetClasses($parameters); var_dump($classes); And received a response like this: object(GetClassesResponse)#7 (1) { ["GetClassesResult"]=> object(GetClassesResult)#8 (6 { ["Classes"]=> object(stdClass)#9 (1) { ["Class"]=> array(25) { [0]=> object(Mi_Class)#10 (21) { ["ClassScheduleID"]=> int(15) ["Visits"]=> NULL ["Clients"]=> NULL ["Location"]=> object(Location)#11 (30) { ["BusinessID"]=> NULL ["SiteID"]=> int(12) ["BusinessDescription"]=> NULL ["AdditionalImageURLs"]=> object(stdClass)#12 (0) { } ["FacilitySquareFeet"]=> NULL Does a response normally look like this? How do I go about getting the data in a formatted manner?
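
    A short sketch of reading the result, assuming this is a WSDL-generated SOAP client: the response is already a tree of plain PHP objects, so "formatting" is just walking the properties shown in the dump above. Note that some SOAP clients return a single object instead of an array when only one Class comes back, which the sketch guards against.

    ```php
    <?php
    $result = $classes->GetClassesResult;

    $classList = is_array($result->Classes->Class)
        ? $result->Classes->Class
        : array($result->Classes->Class);   // a single result comes back as one object

    foreach ($classList as $class) {
        printf(
            "Class schedule %d at site %d\n",
            $class->ClassScheduleID,
            $class->Location->SiteID
        );
    }
    ```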

  • Allow login from another site

    - by tunmise fasipe
    I have a web application where I store users' passwords and usernames. If you log on to this site, you can log in with your password and username to get access to your profile. There is another option that requires you to log in to my site from your own site and have your profile within your site. This is because you might already have a site that your clients know you by. This latter part is what I don't know how to implement. I have these ideas: Have a fixed IFrame within your site to contain my site, but I am concerned about size/layout, since different clients have different layouts/sizes for their content section; I am also thinking about how to maintain the session. A web service: I don't know how feasible this is, since the password and ID are on my server and may have to be sent back and forth, and it means the client would have to code against my API; also, I am not just returning data, I have to show them a page that contains the profile details. OpenID or single sign-on: just guessing, but the authentication and data reside on my server, so there is nothing to access on your side in this case. Any other methods or better approaches? Examples: logging into Facebook from within my site and still being able to post updates and receive notifications. Facebook implements some of these with an IFrame, e.g. the Like button.

  • Slow loading of UITableView. How can I find out why?

    - by mamcx
    I have a UITableView that shows a long list of data. It uses sections and follows the suggestions of http://stackoverflow.com/questions/695814/how-solve-slow-scrolling-in-uitableview . The flow is: load a main UITableView and push a second one after selecting a row in the first. However, with 3000 items it takes 11 seconds to show. I first suspected the loading of the records from SQLite (I preload the first 200), so I cut it down to only 50. However, no matter whether I preload only 1 or 500, the time is the same. The view is made from IB and everything is opaque. I have run out of ideas on how to find the problem. I ran the Instruments tool but don't know what to look for. Also, when the user selects a cell in the previous UITable, no visual feedback is shown for a while (i.e. the cell does not turn selected), so he thinks he did not select it and tries several times; this is related to the same problem. What should I do? NOTE: The problem occurs only on the actual device, an iPod Touch 2nd generation. Using fmdb as the SQLite API. Doing the caching in viewDidLoad. Using NSDictionary for the caching. Using an NSAutoreleasePool for the caching part. Only caching the row ID and at most the 4 fields necessary to show the cell data. UIView made with Interface Builder, SDK 2.2.1. Instruments says I use 2.5 MB on the device.

  • Protecting routes with authentication in an AngularJS app

    - by Chris White
    Some of my AngularJS routes are to pages which require the user to be authenticated with my API. In those cases, I'd like the user to be redirected to the login page so they can authenticate. For example, if a guest accesses /account/settings, they should be redirected to the login form. From brainstorming I came up with listening for the $locationChangeStart event and, if it's a location which requires authentication, redirecting the user to the login form. I can do that simply enough in my application's run() block: .run(['$rootScope', function($rootScope) { $rootScope.$on('$locationChangeStart', function(event) { // Decide if this location required an authenticated user and redirect appropriately }); }]); The next step is keeping a list of all my application's routes that require authentication, so I tried adding the flag as a parameter to my $routeProvider: $routeProvider.when('/account/settings', {templateUrl: '/partials/account/settings.html', controller: 'AccountSettingCtrl', requiresAuthentication: true}); But I don't see any way to get the requiresAuthentication key from within the $locationChangeStart event. Am I overthinking this? I tried to find a way for Angular to do this natively but couldn't find anything.
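
    One commonly suggested variation, sketched here with a hypothetical authService: listen for $routeChangeStart instead of $locationChangeStart. Its handler receives the matched route definition, so the custom requiresAuthentication key set on $routeProvider is available directly.

    ```javascript
    .run(['$rootScope', '$location', 'authService', function ($rootScope, $location, authService) {
        $rootScope.$on('$routeChangeStart', function (event, next) {
            // "next" is the route definition, including custom keys like requiresAuthentication.
            if (next && next.requiresAuthentication && !authService.isLoggedIn()) {
                $location.path('/login');   // send guests to the login form
            }
        });
    }]);
    ```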

  • OCR combined with font recognition?

    - by Adam
    I have a bold idea where a user could take an image like the following and, in a few seconds of processing, be able to edit a document which looks roughly the same. The software would use WhatTheFont (or something similar) to recognize the fonts used, and OCR and other software to handle the font size, color, line spacing, and of course the text content itself. In the case of the example image, there would be three separate "textboxes" produced, each starting at the upper left corner of the text and extending as far to the bottom right as it could before running into another text box. So the user would then see something like this: (the rectangles are just used to show the boundaries of each textbox). From here, the user would be able to edit the text in each of these boxes to create a new document. Of course there are tons of obvious uses for such an application, especially on a mobile phone with a built-in camera. So my questions are the following: I doubt the answer is yes, but does anything do this already? If I'm going to try to build this, what should I write it in? Can I use Python? What would be the best OCR libraries to start with? Is there a service other than WhatTheFont for font recognition that has better API support? Anybody want to help me build it? :) etc. etc. Update: One thing I wanted to mention (but forgot) is that I would also like the background to be preserved. In other words, if the example above had an image behind the text, I'd like the document to use that image with the text removed. I know this complicates things a lot, because that would require some image editing techniques too (something akin to Photoshop CS5's "content-aware fill"). But if we can solve diminished reality on iPhones, I think we can figure this out!
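
    As a feasibility sketch for the OCR half only (font identification would still need a WhatTheFont-style service), a library such as pytesseract can return recognized words together with their bounding boxes, which is roughly the raw material the hypothetical textboxes above would be built from; the input file name is a placeholder.

    ```python
    from PIL import Image
    import pytesseract

    img = Image.open("poster.png")  # placeholder input image
    data = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)

    # One (left, top, width, height, text) tuple per recognized word; grouping
    # neighbouring boxes into larger "textboxes" is the interesting remaining step.
    boxes = [
        (data["left"][i], data["top"][i], data["width"][i], data["height"][i], data["text"][i])
        for i in range(len(data["text"]))
        if data["text"][i].strip()
    ]
    print(boxes[:10])
    ```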

  • Why use shorter VARCHAR(n) fields?

    - by chryss
    It is frequently advised to choose database field sizes to be as narrow as possible. I am wondering to what degree this applies to SQL Server 2005 VARCHAR columns: storing 10-letter English words in a VARCHAR(255) field will not take up more storage than in a VARCHAR(10) field. Are there other reasons to restrict the size of VARCHAR fields to stick as closely as possible to the size of the data? I'm thinking of: Performance: is there an advantage to using a smaller n when selecting, filtering and sorting on the data? Memory, including on the application side (C++)? Style/validation: how important do you consider restricting column size to force nonsensical data imports to fail (such as 200-character surnames)? Anything else? Background: I help data integrators with the design of data flows into a database-backed system. They have to use an API that restricts their choice of data types. For character data, only VARCHAR(n) with n <= 255 is available; CHAR, NCHAR, NVARCHAR and TEXT are not. We're trying to lay down some "good practice" rules, and the question has come up whether there is a real detriment to using VARCHAR(255) even for data where the real maximum size will never exceed 30 bytes or so. Typical data volumes for one table are 1-10 million records with up to 150 attributes. Query performance (SELECT, with frequently extensive WHERE clauses) and application-side retrieval performance are paramount.

  • What is the best way of doing this? (WCF 4)

    - by Jason Porter
    I have a multi-threaded, continuously running application that connects to multiple devices via TCP/IP sockets and exposes a set of WCF APIs for controlling, monitoring and reporting on these devices. I would like to host this in IIS for the usual reasons of not having to worry about re-starting the app in case of errors. So the issue I have is the main application running in parallel with the WCF services. To accomplish this I use the static AppInitialize class to start a thread which runs the main application loop; the WCF services mostly report on or control the objects shared with this thread. There are two problems that I see with this approach. One is that if the thread dies, IIS has no clue it should re-start it, so I have to play some tricks with some WCF calls. The other is that the background thread deals with potentially thousands of devices that are connected permanently (typically a thread per socket connection), so I am not sure IIS is buying me anything in this case. Another approach that I am considering is to use WF for the main application that deals with the sockets, and to host both the WF and my WCF services in IIS using AppFabric. Since I have not used WF or AppFabric, I am reaching out to see whether this would be a good approach or whether there are better alternatives.

  • Finding the unique paths through a Neo4j graph

    - by Larry
    I have a Neo4j graph with 12 inputs and 4 outputs, and am trying to write a query with the Java Traverser that will return the 14 unique paths from an input to an output node. All the queries I have tried return only a subset of the 14 paths. For example, the code below returns 4 paths, but the other 10 all stop 1 node short of the output. RelationshipType relType = RelationshipTypes.EDGE; TraversalDescription td = new TraversalDescriptionImpl() .depthFirst() .relationships(relType, Direction.OUTGOING); for (Node node : inputs){ Traverser tv = td.traverse(node); Iterator<Path> iter = tv.iterator(); // ... print path } I've tried uniqueness and depth settings as well, with no effect. The query below returns all 14 paths using the web interface, but when I use the ExecutionEngine class, I only get 13 paths back. START s=node(*) MATCH (s)-[p:EDGE*]->(c) WHERE s.type! = "INPUT" AND c.type! = "OUTPUT" RETURN p How do I get all the unique paths using the Java API?
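
    A hedged Java sketch of what usually fixes this: the traverser's default uniqueness (NODE_GLOBAL) prunes paths that revisit a node already reached on another path, which is why some paths stop one hop short. Relaxing the uniqueness to NODE_PATH and including only paths that end on an OUTPUT node should yield all the input-to-output paths. RelationshipTypes.EDGE is taken from the question; the import locations and the "type" property name are assumptions that vary with the Neo4j version.

    ```java
    import org.neo4j.graphdb.Direction;
    import org.neo4j.graphdb.Node;
    import org.neo4j.graphdb.Path;
    import org.neo4j.graphdb.traversal.Evaluation;
    import org.neo4j.graphdb.traversal.Evaluator;
    import org.neo4j.graphdb.traversal.TraversalDescription;
    import org.neo4j.kernel.Uniqueness;
    import org.neo4j.kernel.impl.traversal.TraversalDescriptionImpl;

    public class UniquePathFinder {

        // RelationshipTypes is the enum from the question.
        static Iterable<Path> inputToOutputPaths(Node input) {
            TraversalDescription td = new TraversalDescriptionImpl()
                    .depthFirst()
                    .relationships(RelationshipTypes.EDGE, Direction.OUTGOING)
                    .uniqueness(Uniqueness.NODE_PATH)       // allow a node to appear on many paths
                    .evaluator(new Evaluator() {
                        public Evaluation evaluate(Path path) {
                            Object type = path.endNode().getProperty("type", null);
                            return "OUTPUT".equals(type)
                                    ? Evaluation.INCLUDE_AND_CONTINUE   // complete input-to-output path
                                    : Evaluation.EXCLUDE_AND_CONTINUE;  // keep expanding
                        }
                    });
            return td.traverse(input);
        }
    }
    ```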
