Search Results

Search found 21777 results on 872 pages for 'howard may'.

Page 186/872

  • Changing App.config at Runtime

    - by born to hula
    I'm writing a test WinForms / C# / .NET 3.5 application for the system we're developing, and we found we need to switch between .config files at runtime, but this is turning out to be a nightmare. Here's the scene: the WinForms application is aimed at testing a web app divided into 5 subsystems. The test process works with messages being sent between the subsystems, and for this process to be successful each subsystem has to have its own .config file. For my test application I wrote 5 separate configuration files. I wish I were able to switch between these 5 files during runtime, but the problem is: I can programmatically edit the application .config file any number of times, yet the changes only take effect once. I've been searching a long time for a way to address this problem but I still haven't been successful. I know the problem definition may be a bit confusing, but I would really appreciate it if someone helped me. Thanks in advance!

    --- UPDATE 01-06-10 ---
    There's something I didn't mention before. Originally, our system is a web application with WCF calls between each subsystem. For performance testing reasons (we're using ANTS 4), we had to create a local copy of the assemblies and reference them from the test project. It may sound a bit wrong, but we couldn't find a satisfying way to measure performance of a remote application.
    --- End Update ---

    Here's what I'm doing:

        public void UpdateAppSettings(string key, string value)
        {
            XmlDocument xmlDoc = new XmlDocument();
            xmlDoc.Load(AppDomain.CurrentDomain.SetupInformation.ConfigurationFile);
            foreach (XmlElement item in xmlDoc.DocumentElement)
            {
                foreach (XmlNode node in item.ChildNodes)
                {
                    if (node.Name == key)
                    {
                        node.Attributes[0].Value = value;
                        break;
                    }
                }
            }
            xmlDoc.Save(AppDomain.CurrentDomain.SetupInformation.ConfigurationFile);
            System.Configuration.ConfigurationManager.RefreshSection("section/subSection");
        }
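
    One commonly suggested alternative (a minimal sketch, not something taken from the original post): rather than rewriting the AppDomain's own .config in place and hoping the runtime re-reads it, open each of the five files explicitly through System.Configuration, so every subsystem's settings come from its own Configuration object. The file path and key names here are hypothetical.

        using System.Configuration;

        // Reads a setting from an arbitrary .config file chosen at runtime,
        // bypassing the cached configuration of the current AppDomain.
        static string ReadSettingFrom(string configPath, string key)
        {
            var map = new ExeConfigurationFileMap { ExeConfigFilename = configPath };
            Configuration config = ConfigurationManager.OpenMappedExeConfiguration(
                map, ConfigurationUserLevel.None);
            KeyValueConfigurationElement element = config.AppSettings.Settings[key];
            return element == null ? null : element.Value;
        }

        // e.g. string url = ReadSettingFrom(@"Configs\Subsystem3.config", "ServiceUrl");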

    Read the article

  • Flex 4 vs JavaScript Options (Cappuccino, JQuery, etc.)

    - by user320681
    Rehashing an older post: http://stackoverflow.com/questions/1570070/jquery-vs-flex-choosing-a-platform-for-saas We are preparing to develop an application that is exceptionally dynamic and interactive. It's particularly heavy on the graphics side. We are 85% convinced that Adobe Flash built atop Flex is the right path to take, however Cappuccino is quite nice and seems as though it may be able to nearly fit the bill. The only pause we have right now is portability for the iPhone. With the lack of blessings from Apple we will most certainly have to create a 2nd interface for the iPhone for the site, however... Having two interfaces may not be bad as it will likely have to be custom anyway to take advantage of the differences that it affords. Any further thoughts or reevaluations of points enumerated in the noted article? Further, Flex 4 adds a lot of strength to the position mentioned previously regarding UI development. Fx4 is very nice vs Fx3 and shaves 90% from the development time when coupled with Flash Catalyst, which is not really always fully appropriate, but with some round trip tricks it seems as though it can cut through things rather well... Please do advise and many thanks.

    Read the article

  • Entity Framework - Merging 2 physical tables into one "virtual" table problems...

    - by Keith Barrows
    I have been reading up on porting the ASP.NET Membership Provider into .NET 3.5 using LINQ & Entities. However, the DB model that every single sample shows is the newer model, while I've inherited a rather old model. Differences:

        The User table is split into a pair of User & Membership tables.
        All of the tables in the DB are prepended with aspnet_
        I have Lowered versions of some columns (UserName, Email, etc.)

    To work with this I have copied the properties from the Membership table into the User table (in the DB this is a 1<-1 relationship, not a 1<-0,1), renamed aspnet_Applications to Application, aspnet_Profiles to Profile, aspnet_Users to User and aspnet_Roles to Role. (See image) Link to full size image of model

    Now, I am running into one of 2 problems when I try to compile. Using the model in the image I get this error:

        Problem in Mapping Fragment starting at line 464: EntitySets 'UserSet' and 'aspnet_Membership' are both mapped to table 'aspnet_Membership'. Their Primary Keys may collide.

    If I delete the aspnet_Membership table from my model (to handle the above error) I then get:

        Problem in Mapping Fragment starting at line 384: Column aspnet_Membership.ApplicationId in table aspnet_Membership must be mapped: It has no default value and is not nullable.

    My ability to hand edit the backing stores is not the best and I don't want to just hack something in that may break other things. I am looking for suggestions, best practices, etc. to handle this.

    Note: Moving the data tables themselves is not an option as I cannot replace all the logic in the existing apps. I am building this EF provider for a new app. Over the next 6 months the old app(s) will migrate bit by bit to the new structures.
    Note: I added a link just under the image to the full size image for better viewing.

    Read the article

  • General Drools Question

    - by El Guapo
    For the last few months my company has been using a product from a company called Informatica (previously AgentLogic) called RulePoint. This product has proven itself very easy to use, with a well-developed and easy-to-use SDK for customization. The way we use the product for CEP is fairly trivial: we have 2 sources which we monitor for our rule data, the first being a JMS queue, the second being a Jabber IM account. The product runs on any Java-based application server (WebLogic, Tomcat, etc.) and runs just about flawlessly.

    Last week my boss says, "Hey, I've heard that we may be able to do the same thing we are doing with RulePoint with an open-source product called Drools. Check it out and let me know what you think." I've heard of people using Drools for flow-based operations (validation, etc.); however, I've never heard of anyone using their CEP product (Fusion) in practice. So, being the diligent worker, I have undertaken this task. I've downloaded all the files (version 5.0) and accompanying documentation and have started to read. I've read through just about all the docs and run most of the examples, but I still don't really see HOW Drools works for CEP. While there are examples for using data (or Facts, I guess) from JMS, I don't see how this thing stays "running", continuously monitoring a queue until the application is actually stopped. RulePoint pretty much just sits and listens; however, Drools seems not to. I could probably write a full-blown command-line application for our needs, but I was hoping to leverage some of the benefits that an application server provides.

    I guess I'm looking for some good tutorials or an example of how someone is using Drools and CEP in production. Thanks in advance for any information or advice you may be able to provide.

    Read the article

  • Where can I find clear examples of MVC?

    - by Tom
    I've read a couple of things about MVCs but I still don't understand when they should be used and when they shouldn't be used. I am looking for clear examples that say things like "if you're developing this then you should use MVC, like this" and "if you're developing this, you shouldn't use MVC." Most of the examples I've seen rely on complex frameworks which have already implemented everything and you have to learn the framework and use it a lot to understand what's really happening. To many programmers, phrasings such as "UI business logic" sound like marketing terms — for example, the words "Instead the View binds directly to a Presentation Model" are used in this post. I am aware of the dangers that may lurk in the shadows as MVC is a concept and everyone feels like they know it best, yet nobody really knows exactly how to use it because there may be a lot of variables involved and everyone is allowed to have a different perspective on how to dissect a project into the Model, the View and the Controller. There is a lot of theory out there but very few clear examples. What I'm looking for are not "the best" ways of doing it so this should not be considered as subjective; I'm looking for different simple implementations that would allow me to decide on my own which are the best approaches. Succinctly: What are good on-line resources that present pro and con arguments to using MVC in various situations and provide clear examples to help the reader understand the concept?
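
    Not a resource list, but for concreteness, here is one minimal, framework-free illustration in C# (all class names are made up): the model owns the state and raises change notifications, the view only renders, and the controller only interprets input.

        using System;
        using System.Collections.Generic;

        // Model: owns state and raises change notifications; knows nothing about UI.
        class TaskListModel
        {
            private readonly List<string> tasks = new List<string>();
            public event Action Changed = delegate { };
            public IEnumerable<string> Tasks { get { return tasks; } }
            public void Add(string task) { tasks.Add(task); Changed(); }
        }

        // View: renders the model; contains no business rules.
        class ConsoleTaskView
        {
            public void Render(TaskListModel model)
            {
                Console.Clear();
                foreach (string t in model.Tasks) Console.WriteLine("- " + t);
            }
        }

        // Controller: interprets raw input and updates the model; no rendering code.
        class TaskController
        {
            private readonly TaskListModel model;
            public TaskController(TaskListModel model) { this.model = model; }
            public void HandleInput(string line)
            {
                if (!string.IsNullOrEmpty(line)) model.Add(line);
            }
        }

        class Program
        {
            static void Main()
            {
                var model = new TaskListModel();
                var view = new ConsoleTaskView();
                var controller = new TaskController(model);

                model.Changed += () => view.Render(model);   // view observes the model
                for (string line = Console.ReadLine(); line != null; line = Console.ReadLine())
                    controller.HandleInput(line);            // input goes to the controller
            }
        }

    Whether that split pays off depends on how much the rendering and the input handling are likely to change independently; for a throwaway form the indirection can be more ceremony than it is worth.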

    Read the article

  • Twitter4J throws exception in TwitterFactory

    - by Philipp Andre
    Hello guys, I'm trying to access Twitter via OAuth. I registered my app, downloaded Twitter4J, added the jars to my Eclipse project, and then tried to execute the following code:

        Twitter twitter = new TwitterFactory().getOAuthAuthorizedInstance("[key]", "[secretKey]");
        RequestToken requestToken = twitter.getOAuthRequestToken();
        System.out.println(requestToken.getAuthorizationURL());

    But it raises the following exception:

        [Sat May 29 11:19:11 CEST 2010]Using class twitter4j.internal.logging.StdOutLoggerFactory as logging factory.
        [Sat May 29 11:19:11 CEST 2010]Use twitter4j.internal.http.alternative.HttpClientImpl as HttpClient implementation.
        Exception in thread "main" java.lang.AssertionError: java.lang.reflect.InvocationTargetException
            at twitter4j.internal.http.HttpClientFactory.getInstance(HttpClientFactory.java:71)
            at twitter4j.internal.http.HttpClientWrapper.<init>(HttpClientWrapper.java:59)
            at twitter4j.http.OAuthAuthorization.init(OAuthAuthorization.java:83)
            at twitter4j.http.OAuthAuthorization.<init>(OAuthAuthorization.java:74)
            at twitter4j.TwitterFactory.getOAuthAuthorizedInstance(TwitterFactory.java:112)
            at MainProgram.main(MainProgram.java:18)
        Caused by: java.lang.reflect.InvocationTargetException
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
            at java.lang.reflect.Constructor.newInstance(Unknown Source)
            at twitter4j.internal.http.HttpClientFactory.getInstance(HttpClientFactory.java:65)
            ... 5 more
        Caused by: java.lang.NoClassDefFoundError: org/apache/http/impl/client/DefaultHttpClient
            at twitter4j.internal.http.alternative.HttpClientImpl.<init>(HttpClientImpl.java:63)
            ... 10 more

    I currently can't figure out why. Can you please give me some suggestions?

    Best regards
    Philipp

    Read the article

  • Pros & Cons of Google App Engine

    - by Rishi
    Pros & Cons of Google App Engine [An Updated List, 21st Aug 09]. Help me compile a list of all the advantages & disadvantages of building an application on the Google App Engine.

    Pros:
    1) No need to buy servers or server space (no maintenance).
    2) Makes solving the problem of scaling much easier.

    Cons:
    1) Locked into Google App Engine??
    2) Developers have read-only access to the filesystem on App Engine.
    3) App Engine can only execute code called from an HTTP request (except for scheduled background tasks).
    4) Users may upload arbitrary Python modules, but only if they are pure Python; C and Pyrex modules are not supported.
    5) App Engine limits the maximum rows returned from an entity get to 1000 rows per Datastore call.
    6) Java applications may only use a subset (the JRE Class White List) of the classes from the JRE standard edition.
    7) Java applications cannot create new threads.

    Known issues: http://code.google.com/p/googleappengine/issues/list

    Hard limits:
    Apps per developer - 10
    Time per request - 30 sec
    Files per app - 3,000
    HTTP response size - 10 MB
    Datastore item size - 1 MB
    Application code size - 150 MB

    Pro or con? App Engine's infrastructure removes many of the system administration and development challenges of building applications to scale to millions of hits. Google handles deploying code to a cluster, monitoring, failover, and launching application instances as necessary. While other services let users install and configure nearly any *NIX-compatible software, App Engine requires developers to use Python or Java as the programming language and a limited set of APIs. Current APIs allow storing and retrieving data from a BigTable non-relational database; making HTTP requests; sending e-mail; manipulating images; and caching. Most existing web applications can't run on App Engine without modification, because they require a relational database.

    Read the article

  • Service reference addition issue in visual studio 2010

    - by user293072
    I am currently working on an application that allows reverse geocoding using Silverlight + Bing Maps. The thing is that I want to add a reference to the reverse geocoding service documented on MSDN (http://msdn.microsoft.com/en-us/library/cc879136.aspx), i.e. http://dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl, but when I try to add the reference in VS2010, I get the following error:

        The document at the url http://dev.virtualearth.net/webservices/v1/metadata/geocodeservice/geocodeservice.wsdl was not recognized as a known document type. The error message from each known type may help you fix the problem:
        Report from 'XML Schema' is ''', hexadecimal value 0x1F, is an invalid character. Line 1, position 1.'.
        Report from 'DISCO Document' is ''', hexadecimal value 0x1F, is an invalid character. Line 1, position 1.'.
        Report from 'WSDL Document' is 'There is an error in XML document (1, 1).'. '', hexadecimal value 0x1F, is an invalid character. Line 1, position 1.
        Metadata contains a reference that cannot be resolved: 'http://dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl'.
        Content Type application/soap+xml; charset=utf-8 was not supported by service http://dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl. The client and service bindings may be mismatched.
        The remote server returned an error: (415) Unsupported Media Type.
        If the service is defined in the current solution, try building the solution and adding the service reference again.

    It is good to mention that I can access the service URL from the browser (with a "no style information" warning). I am aware that there are other reverse geocoding services out there, but I am somewhat forced by certain circumstances to use only Microsoft-related components/services. Please help :)

    Read the article

  • Algorithm to match list of regular expressions

    - by DSII
    I have two algorithmic questions for a project I am working on. I have thought about these, and have some suspicions, but I would love to hear the community's input as well.

    1. Suppose I have a string, and a list of N regular expressions (actually they are wildcard patterns representing a subset of full regex functionality). I want to know whether the string matches at least one of the regular expressions in the list. Is there a data structure that can allow me to match the string against the list of regular expressions in sublinear (presumably logarithmic) time?

    2. This is an extension of the previous problem. Suppose I have the same situation: a string and a list of N regular expressions, only now each of the regular expressions is paired with an offset within the string at which the match must begin (or, if you prefer, each of the regular expressions must match a substring of the given string beginning at the given offset). To give an example, suppose I had the string:

        This is a test string

    and the regex patterns and offsets:

        (a) his.* at offset 0
        (b) his.* at offset 1

    The algorithm should return true. Although regex (a) does not match the string beginning at offset 0, regex (b) does match the substring beginning at offset 1 ("his is a test string"). Is there a data structure that can allow me to solve this problem in sublinear time?

    One possibly useful piece of information is that often, many of the offsets in the list of regular expressions are the same (i.e. often we are matching the substring at offset X many times). This may be useful to leverage the solution to problem #1 above.

    Thank you very much in advance for any suggestions you may have!
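
    Not the sublinear structure being asked for, but as a baseline for comparison, both requirements can at least be pushed into single compiled patterns in .NET: question 1 via one big alternation, and question 2 via the \G anchor combined with the IsMatch overload that takes a starting offset. A minimal C# sketch; the patterns are assumed to already be valid regexes.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text.RegularExpressions;

        static class PatternSet
        {
            // Question 1: does the string match at least one of the N patterns?
            // One compiled alternation lets the engine scan the input in a single call.
            public static Regex CombineAny(IEnumerable<string> patterns)
            {
                string alternation = string.Join("|",
                    patterns.Select(p => "(?:" + p + ")").ToArray());
                return new Regex(alternation, RegexOptions.Compiled);
            }

            // Question 2: \G anchors the match to the offset passed to IsMatch,
            // so the pattern must succeed starting exactly at that position.
            public static bool MatchesAtOffset(string input, string pattern, int offset)
            {
                var anchored = new Regex(@"\G(?:" + pattern + ")", RegexOptions.Compiled);
                return anchored.IsMatch(input, offset);
            }
        }

    Grouping the patterns that share an offset and combining each group into one alternation exploits the repeated-offsets observation; truly sublinear behaviour would need something more like a shared DFA or a trie over the literal prefixes of the wildcard patterns.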

    Read the article

  • Weak event handler model for use with lambdas

    - by Benjol
    OK, so this is more of an answer than a question, but after asking this question, and pulling together the various bits from Dustin Campbell, Egor, and also one last tip from the 'IObservable/Rx/Reactive framework', I think I've worked out a workable solution for this particular problem. It may be completely superseded by IObservable/Rx/Reactive framework, but only experience will show that. I've deliberately created a new question, to give me space to explain how I got to this solution, as it may not be immediately obvious.

    There are many related questions, most telling you you can't use inline lambdas if you want to be able to detach them later:

        Weak events in .Net?
        Unhooking events with lambdas in C#
        Can using lambdas as event handlers cause a memory leak?
        How to unsubscribe from an event which uses a lambda expression?
        Unsubscribe anonymous method in C#

    And it is true that if YOU want to be able to detach them later, you need to keep a reference to your lambda. However, if you just want the event handler to detach itself when your subscriber falls out of scope, this answer is for you.
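
    A sketch in the same spirit as the solution being described (not the author's actual code): the wrapper holds only a WeakReference to the subscriber and unhooks itself the first time it fires after the subscriber has been collected, so the caller never has to keep the lambda around.

        using System;

        static class WeakEvent
        {
            // subscribe/unsubscribe tell the helper how to attach the wrapper to the
            // event; weakHandler receives the still-alive subscriber instead of
            // capturing it, which is what keeps the subscription weak.
            public static void Register<TSubscriber, TArgs>(
                TSubscriber subscriber,
                Action<EventHandler<TArgs>> subscribe,
                Action<EventHandler<TArgs>> unsubscribe,
                Action<TSubscriber, object, TArgs> weakHandler)
                where TSubscriber : class
                where TArgs : EventArgs
            {
                var weakRef = new WeakReference(subscriber);
                EventHandler<TArgs> wrapper = null;
                wrapper = (sender, args) =>
                {
                    var target = (TSubscriber)weakRef.Target;
                    if (target != null)
                        weakHandler(target, sender, args);  // subscriber alive: forward
                    else
                        unsubscribe(wrapper);               // subscriber gone: detach self
                };
                subscribe(wrapper);
            }
        }

    Usage would look something like WeakEvent.Register<MainForm, EventArgs>(this, h => source.SomeEvent += h, h => source.SomeEvent -= h, (self, s, e) => self.OnSomething(e)), where MainForm and source are placeholders; the important discipline is that the last lambda uses self rather than this, otherwise the closure keeps the subscriber alive anyway.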

    Read the article

  • SQL Server CLR stored procedures in data processing tasks - good or evil?

    - by Gart
    In short - is it a good design solution to implement most of the business logic in CLR stored procedures? I have read much about them recently but I can't figure out when they should be used, what the best practices are, and whether they are good enough or not.

    For example, my business application needs to parse a large fixed-length text file, extract some numbers from each line in the file, apply some complex business rules according to these numbers (involving regex matching, pattern matching against data from many tables in the database and such), and as a result of this calculation update records in the database. There is also a GUI for the user to select the file, view the results, etc. This application seems to be a good candidate to implement the classic 3-tier architecture: the Data Layer, the Logic Layer, and the GUI Layer.

    The Data Layer would access the database.
    The Logic Layer would run as a WCF service and implement the business rules, interacting with the Data Layer.
    The GUI Layer would be a means of communication between the Logic Layer and the user.

    Now, thinking of this design, I can see that most of the business rules may be implemented in SQL CLR and stored in SQL Server. I might store all my raw data in the database, run the processing there, and get the results. I see some advantages and disadvantages of this solution:

    Pros:
    The business logic runs close to the data, meaning less network traffic.
    Process all data at once, possibly utilizing parallelism and an optimal execution plan.

    Cons:
    Scattering of the business logic: some part is here, some part is there.
    Questionable design solution, may encounter unknown problems.
    Difficult to implement a progress indicator for the processing task.

    I would like to hear all your opinions about SQL CLR. Does anybody use it in production? Are there any problems with such a design? Is it a good thing?
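
    For reference, a CLR stored procedure is just a static method with an attribute; it runs inside SQL Server over the context connection and can stream text (a crude progress channel) back to the caller through SqlContext.Pipe. A minimal sketch, with made-up table and column names:

        using System.Data.SqlClient;
        using System.Data.SqlTypes;
        using Microsoft.SqlServer.Server;

        public partial class StoredProcedures
        {
            [SqlProcedure]
            public static void ProcessBatch(SqlInt32 batchId)
            {
                // "context connection=true" reuses the caller's connection in-process.
                using (SqlConnection conn = new SqlConnection("context connection=true"))
                {
                    conn.Open();
                    SqlCommand cmd = new SqlCommand(
                        "UPDATE dbo.ImportLine SET Processed = 1 WHERE BatchId = @id", conn);
                    cmd.Parameters.AddWithValue("@id", batchId.Value);
                    int rows = cmd.ExecuteNonQuery();

                    // Sends an informational message back to the calling session.
                    SqlContext.Pipe.Send(rows + " rows updated");
                }
            }
        }

    The deployment side (CREATE ASSEMBLY, permission sets, and nothing richer than printed messages for progress reporting) is where the cons listed above tend to show up.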

    Read the article

  • TableView Cells unresponsive

    - by John Donovan
    I have a TableView and I wish to be able to press several cells one after the other and have messages appear. At the moment the cells are often unresponsive. I found some code for a similar problem someone was kind enough to post; however, although he claimed it worked 100%, it doesn't work for me. The app won't even build. Here's the code:

        -(UIView*) hitTest:(CGPoint)point withEvent:(UIEvent*)event {
            // check to see if the hit is in this table view
            if ([self pointInside:point withEvent:event]) {
                UITableViewCell* newCell = nil;
                // hit is in this table view, find out
                // which cell it is in (if any)
                for (UITableViewCell* aCell in self.visibleCells) {
                    if ([aCell pointInside:[self convertPoint:point toView:aCell] withEvent:nil]) {
                        newCell = aCell;
                        break;
                    }
                }
                // if it touched a different cell, tell the previous cell to resign
                // this gives it a chance to hide the keyboard or date picker or whatever
                if (newCell != activeCell) {
                    [activeCell resignFirstResponder];
                    self.activeCell = newCell; // may be nil
                }
            }
            // return the super's hitTest result
            return [super hitTest:point withEvent:event];
        }

    With this code I get a warning that my view controller may not respond to pointsInside:withEvent (it's a TableViewController). I also get some errors: request for member 'visibleCells' in something not a structure or a union; incompatible type for argument 1 of pointInsideWithEvent; expression does not have a valid object type; and similar. I must admit I'm not so good at reading other people's code, but I was wondering whether the problems here are obvious, and if so, if anyone could give me a pointer it would be greatly appreciated.

    Read the article

  • Graph limitations - Should I use Decorator?

    - by Nick Wiggill
    I have a functional AdjacencyListGraph class that adheres to a defined interface GraphStructure. In order to layer limitations on this (eg. acyclic, non-null, unique vertex data etc.), I can see two possible routes, each making use of the GraphStructure interface: Create a single class ("ControlledGraph") that has a set of bitflags specifying various possible limitations. Handle all limitations in this class. Update the class if new limitation requirements become apparent. Use the decorator pattern (DI, essentially) to create a separate class implementation for each individual limitation that a client class may wish to use. The benefit here is that we are adhering to the Single Responsibility Principle. I would lean toward the latter, but by Jove!, I hate the decorator Pattern. It is the epitome of clutter, IMO. Truthfully it all depends on how many decorators might be applied in the worst case -- in mine so far, the count is seven (the number of discrete limitations I've recognised at this stage). The other problem with decorator is that I'm going to have to do interface method wrapping in every... single... decorator class. Bah. Which would you go for, if either? Or, if you can suggest some more elegant solution, that would be welcome. EDIT: It occurs to me that using the proposed ControlledGraph class with the strategy pattern may help here... some sort of template method / functors setup, with individual bits applying separate controls in the various graph-canonical interface methods. Or am I losing the plot?
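
    If the objection is mainly the per-decorator forwarding boilerplate, one abstract base decorator can absorb it: it implements the whole interface by delegation once, and each concrete limitation overrides only the members it actually constrains. A minimal C# sketch against a hypothetical, trimmed-down slice of the GraphStructure interface:

        using System;
        using System.Collections.Generic;

        // Hypothetical, cut-down version of the GraphStructure interface.
        public interface IGraphStructure<T>
        {
            void AddVertex(T vertex);
            void AddEdge(T from, T to);
            IEnumerable<T> Vertices { get; }
        }

        // Forwards everything once, so concrete decorators stay tiny.
        public abstract class GraphDecorator<T> : IGraphStructure<T>
        {
            protected readonly IGraphStructure<T> Inner;
            protected GraphDecorator(IGraphStructure<T> inner) { Inner = inner; }

            public virtual void AddVertex(T vertex) { Inner.AddVertex(vertex); }
            public virtual void AddEdge(T from, T to) { Inner.AddEdge(from, to); }
            public virtual IEnumerable<T> Vertices { get { return Inner.Vertices; } }
        }

        // One limitation per class: this one only rejects null vertex data.
        public sealed class NonNullVertexGraph<T> : GraphDecorator<T> where T : class
        {
            public NonNullVertexGraph(IGraphStructure<T> inner) : base(inner) { }

            public override void AddVertex(T vertex)
            {
                if (vertex == null) throw new ArgumentNullException("vertex");
                base.AddVertex(vertex);
            }
        }

    With that base class, seven limitations cost seven small override sets rather than seven full interface re-implementations, and stacking is just nested construction over the adjacency-list graph; the bitflag ControlledGraph stays simpler only as long as the individual checks never need their own state.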

    Read the article

  • Using regex to extract variables from a plain-text form letter?

    - by Yaaqov
    Hi - I'm looking for a good example of using regular expressions in PHP to "reverse engineer" a form letter (with a known format, of course) that has been pasted into a multiline textbox and sent to a script for processing. So, for example, let's assume this is the original plain-text input (taken from a USDA press release):

        WASHINGTON, April 5, 2010 - North American Bison Co-Op, a New Rockford, N.D., establishment is recalling approximately 25,000 pounds of whole beef heads containing tongues that may not have had the tonsils completely removed, which is not compliant with regulations that require the removal of tonsils from cattle of all ages, the U.S. Department of Agriculture's Food Safety and Inspection Service (FSIS) announced today.

    For clarity, the fields that are variables are highlighted below:

        [pr_city=]WASHINGTON, [pr_date=]April 5, 2010 - [corp_name=]North American Bison Co-Op, a [corp_city=]New Rockford, [corp_state=]N.D., establishment is recalling approximately [amount=]25,000 pounds of [product=]whole beef heads containing tongues that may not have had the tonsils completely removed, which is not compliant with regulations that require [reason=]the removal of tonsils from cattle of all ages, the U.S. Department of Agriculture's Food Safety and Inspection Service (FSIS) announced today.

    How could I efficiently extract the contents of the pr_city, pr_date, corp_name, corp_city, corp_state, amount, product and reason fields from my example? Any help would be appreciated, thanks.

    Read the article

  • Having trouble adding jquery to charisma

    - by kira423
    I am trying to add some jQuery to this Charisma admin panel and have been having nothing but trouble. I am trying to add it to the charisma.js file. This is what I am adding:

        // add multiple select / deselect functionality
        $("#selectall").click(function () {
            $('.checkbox').attr('checked', this.checked);
        });

        // if all checkboxes are selected, check the selectall checkbox
        // and vice versa
        $(".checkbox").click(function () {
            if ($(".checkbox").length == $(".checkbox:checked").length) {
                $("#selectall").attr("checked", "checked");
            } else {
                $("#selectall").removeAttr("checked");
            }
        });

    I have tried this code wrapped in the anonymous $(function(){ as well as without, and I have inserted it into both $(document).ready(function(){ and docReady() as well as in the head of my code, but I am not really "trained" in jQuery so I am a bit lost as to what I am doing wrong. My class and div tags are correct for the code, as I have checked them several times for misspellings. I am not sure what I am doing wrong. Is there a better "check all" code I can use here, or am I just putting this all in the wrong place?

    UPDATE: I think the actual code may be working; I cannot tell. After I click the select-all box it seems that I have to click the other boxes 3 times to get the check mark back into the box, so it seems like it is having trouble actually showing that the box is marked. This may be a problem with styling, but I don't know how to correct it.

    Read the article

  • Updating multiple related tables in SQLite with C#

    - by PerryJ
    Just some background, sorry so long-winded. I'm using the System.Data.SQLite ADO.NET adapter to create a local SQLite database, and this will be the only process hitting the database, so I don't need to worry about concurrency. I'm building the database from various sources and don't want to build this all in memory using DataSets or DataAdapters or anything like that. I want to do this using SQL (DbCommands). I'm not very good with SQL and a complete noob in SQLite. I'm basically using SQLite as a local database / save file structure.

    The database has a lot of related tables and the data has nothing to do with People or Regions or Districts, but to use a simple analogy, imagine:

        Region table with auto increment RegionID, RegionName column and various optional columns.
        District table with auto increment DistrictID, DistrictName, RegionID, and various optional columns.
        Person table with auto increment PersonID, PersonName, DistrictID, and various optional columns.

    So I get some data representing RegionName, DistrictName, PersonName, and other Person-related data. The Region, District and/or Person may or may not have been created at this point. Once again, not being the greatest with this, my thoughts would be something like:

        Check to see if the Region exists and if so get the RegionID, else create it and get the RegionID.
        Check to see if the District exists and if so get the DistrictID, else create it (adding in the RegionID from above) and get the DistrictID.
        Check to see if the Person exists and if so get the PersonID, else create it (adding in the DistrictID from above) and get the PersonID.
        Update the Person with the rest of the data.

    In MS SQL Server I would create a stored procedure to handle all this. The only way I can see to do this with SQLite is a lot of commands. So I'm sure I'm not getting this. I've spent hours looking around on various sites but just don't feel like I'm going down the right road. Any suggestions would be greatly appreciated.
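
    A minimal sketch of the first "check, else create" step using System.Data.SQLite, written against the Region table from the analogy above (District and Person would follow the same shape, passing in the parent ID):

        using System.Data.SQLite;

        static long GetOrCreateRegion(SQLiteConnection conn, string regionName)
        {
            // 1. Try to find an existing row.
            using (var select = new SQLiteCommand(
                "SELECT RegionID FROM Region WHERE RegionName = @name", conn))
            {
                select.Parameters.AddWithValue("@name", regionName);
                object existing = select.ExecuteScalar();
                if (existing != null)
                    return (long)existing;
            }

            // 2. Not there: insert it, then ask SQLite for the generated key.
            using (var insert = new SQLiteCommand(
                "INSERT INTO Region (RegionName) VALUES (@name)", conn))
            {
                insert.Parameters.AddWithValue("@name", regionName);
                insert.ExecuteNonQuery();
            }
            using (var lastId = new SQLiteCommand("SELECT last_insert_rowid()", conn))
            {
                return (long)lastId.ExecuteScalar();
            }
        }

    This assumes conn is already open; wrapping the whole Region/District/Person chain in one SQLiteTransaction keeps the sequence of small commands both fast and atomic, which is roughly what the stored procedure would have given you in SQL Server.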

    Read the article

  • How does Subsonic handle connections?

    - by Quintin Par
    In NHibernate you start a session by creating it during BeginRequest and close it at EndRequest:

        public class Global : System.Web.HttpApplication
        {
            public static ISessionFactory SessionFactory = CreateSessionFactory();

            protected static ISessionFactory CreateSessionFactory()
            {
                return new Configuration()
                    .Configure(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "hibernate.cfg.xml"))
                    .BuildSessionFactory();
            }

            public static ISession CurrentSession
            {
                get { return (ISession)HttpContext.Current.Items["current.session"]; }
                set { HttpContext.Current.Items["current.session"] = value; }
            }

            protected void Global()
            {
                BeginRequest += delegate { CurrentSession = SessionFactory.OpenSession(); };
                EndRequest += delegate { if (CurrentSession != null) CurrentSession.Dispose(); };
            }
        }

    What's the equivalent in SubSonic? The way I understand it, NHibernate will close all the connections at EndRequest.

    Reason: while troubleshooting some legacy code in a SubSonic project I get a lot of MySQL timeouts, suggesting that the code is not closing the connections:

        MySql.Data.MySqlClient.MySqlException: error connecting: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
        Generated: Tue, 11 Aug 2009 05:26:05 GMT
        System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. --- MySql.Data.MySqlClient.MySqlException: error connecting: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
            at MySql.Data.MySqlClient.MySqlPool.GetConnection()
            at MySql.Data.MySqlClient.MySqlConnection.Open()
            at SubSonic.MySqlDataProvider.CreateConnection(String newConnectionString)
            at SubSonic.MySqlDataProvider.CreateConnection()
            at SubSonic.AutomaticConnectionScope..ctor(DataProvider provider)
            at SubSonic.MySqlDataProvider.GetReader(QueryCommand qry)
            at SubSonic.DataService.GetReader(QueryCommand cmd)
            at SubSonic.ReadOnlyRecord`1.LoadByParam(String columnName, Object paramValue)

    My connection string is as follows:

        <connectionStrings>
            <add name="xx" connectionString="Data Source=xx.net; Port=3306; Database=db; UID=dbuid; PWD=xx;Pooling=true;Max Pool Size=12;Min Pool Size=2;Connection Lifetime=60" />
        </connectionStrings>

    Read the article

  • JavaScript and JQuery - Encoding HTML

    - by user70192
    Hello, I have a web page that has a textarea defined on it like so:

        <textarea id="myTextArea" rows="6" cols="75"></textarea>

    There is a chance that a user may enter single and double quotes in this field. For instance, I have been testing with the following string:

        Just testin' using single and double "quotes". I'm hoping the end of this task is comin'.

    Additionally, the user may enter HTML code, which I would prefer to prevent. Regardless, I am passing the contents of this textarea on to a web service. I must encode the contents of the textarea in JavaScript before I can send it on. Currently, I'm trying the following:

        var contents = $('<div/>').text($("#myTextArea").val()).html();
        alert(contents);

    I was expecting contents to display

        Just testin&#39; using single and double &#34;quotes&#34;. I&#39;m hoping the end of this task is comin&#39;.

    Instead, the original string is printed out. Beyond just double-and-single quotes, there are a variety of entities to consider. Because of this, I was assuming there would be a way to encode HTML before passing it on. Can someone please tell me how to do this? Thank you

    Read the article

  • Convert 4 bytes to int

    - by Oscar Reyes
    I'm reading a binary file like this:

        InputStream in = new FileInputStream(file);
        byte[] buffer = new byte[1024];
        while (in.read(buffer) > -1) {
            int a = // ???
        }

    What I want to do is to read up to 4 bytes and create an int value from those, but I don't know how to do it. I kind of feel like I have to grab 4 bytes at a time, and perform one "byte" operation (like << & FF and stuff like that) to create the new int. What's the idiom for this?

    EDIT: Oops, this turned out to be a bit more complex (to explain). What I'm trying to do is read a file (it may be ASCII, binary, it doesn't matter) and extract the integers it may have. For instance, suppose the binary content (in base 2) is:

        00000000 00000000 00000000 00000001
        00000000 00000000 00000000 00000010

    The integer representation should be 1, 2, right? :-/ 1 for the first 32 bits, and 2 for the remaining 32 bits.

        11111111 11111111 11111111 11111111

    would be -1, and

        01111111 11111111 11111111 11111111

    would be Integer.MAX_VALUE (2147483647).

    Read the article

  • Producing Mini Dumps for _caught_ SEH exceptions in mixed code DLL

    - by Assaf Lavie
    I'm trying to use code similar to clrdump to create mini dumps in my managed process. This managed process invokes C++/CLI code which invokes some native C++ static lib code, wherein SEH exceptions may be thrown (e.g. the occasional access violation).

        C# WinForms -> C++/CLI DLL -> Static C++ Lib -> ACCESS VIOLATION

    Our policy is to produce mini dumps for all SEH exceptions (caught & uncaught) and then translate them to C++ exceptions to be handled by application code. This works for purely native processes just fine; but when the application is a C# application - not so much.

    The only way I see to produce dumps from SEH exceptions in a C# process is to not catch them - and then, as unhandled exceptions, use the Application.ThreadException handler to create a mini dump. The alternative is to let the CLR translate the SEH exception into a .NET exception and catch it (e.g. System.AccessViolationException) - but that would mean no dump is created, and information is lost (stack trace information in Exception isn't as rich as the mini dump).

    So how can I handle SEH exceptions by both creating a minidump and translating the exception into a .NET exception so that my application may try to recover?
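
    For the managed side, the usual fallback is a small P/Invoke around dbghelp's MiniDumpWriteDump, called from the catch block that receives the translated exception. A rough sketch only (simplified: it passes no MINIDUMP_EXCEPTION_INFORMATION, so the dump records the current state rather than the original faulting context, and dump type 0 is MiniDumpNormal):

        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Runtime.InteropServices;

        static class MiniDump
        {
            [DllImport("dbghelp.dll", SetLastError = true)]
            static extern bool MiniDumpWriteDump(
                IntPtr hProcess, uint processId, SafeHandle hFile, int dumpType,
                IntPtr exceptionParam, IntPtr userStreamParam, IntPtr callbackParam);

            // Writes a dump of the current process to the given path.
            public static void Write(string path)
            {
                using (var file = new FileStream(path, FileMode.Create, FileAccess.ReadWrite))
                {
                    Process self = Process.GetCurrentProcess();
                    MiniDumpWriteDump(self.Handle, (uint)self.Id, file.SafeFileHandle,
                                      0 /* MiniDumpNormal */,
                                      IntPtr.Zero, IntPtr.Zero, IntPtr.Zero);
                }
            }
        }

        // e.g. in the managed caller:
        // try { nativeWrapper.DoWork(); }
        // catch (AccessViolationException) { MiniDump.Write(@"C:\dumps\av.dmp"); throw; }

    Filling in the exception pointers (so the dump shows the original faulting stack) needs Marshal.GetExceptionPointers or the native EXCEPTION_POINTERS captured in the C++/CLI layer, which is exactly the part that tools like clrdump wrap up for you.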

    Read the article

  • .NET load assembly error metabase config

    - by peter
    Hi, yesterday I found out that the mailroot directory had moved to a different location on the C drive. I don't know how this happened, as no configs were changed; I'm using IIS7 with the IIS6 SMTP service on Windows 2008 R2 Web edition. I wanted to restore it to the default c:\inetpub\mailroot location, so I opened system32/inetsrv/MetaBase.xml, and there in the node was the problem: BadMailDirectory, DropDirectory etc. were all pointing to the wrong location... I changed them back to the default c:\inetpub\mailroot.

    Later I discovered an ASP.NET application was throwing this error:

        An error occurred in the Microsoft .NET Framework while trying to load assembly id 1. The server may be running out of resources, or the assembly may not be trusted with PERMISSION_SET = EXTERNAL_ACCESS or UNSAFE. Run the query again, or check documentation to see how to solve the assembly trust issues. For more information about this error: System.IO.FileNotFoundException: Could not load file or assembly 'microsoft.sqlserver.types, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified.

    I figured it had to do with my changes to the mailroot locations, so I gave ASP.NET access to the mailroot directory and it was working...

    My question is: how is this possible? Why does ASP.NET require access to the mailroot directory to load assemblies? Thanks

    Read the article

  • SQL (mySQL) update some value in all records processed by a select

    - by jdmuys
    I am using MySQL from their C API, but that shouldn't be relevant. My code must process records from a table that match some criteria, and then update the said records to flag them as processed. The lines in the table are modified/inserted/deleted by another process I don't control. I am afraid in the following, the UPDATE might flag some records erroneously since the set of records matching might have changed between step 1 and step 3.

        SELECT * FROM myTable WHERE <CONDITION>;                             # step 1
        <iterate over the selected set of lines. This may take some time.>   # step 2
        UPDATE myTable SET processed=1 WHERE <CONDITION>                     # step 3

    What's the smart way to ensure that the UPDATE updates all the lines processed, and only them? A transaction doesn't seem to fit the bill as it doesn't provide isolation of that sort: a recently modified record not in the originally selected set might still be targeted by the UPDATE statement. For the same reason, SELECT ... FOR UPDATE doesn't seem to help, though it sounds promising :-)

    The only way I can see is to use a temporary table to memorize the set of rows to be processed, doing something like:

        CREATE TEMPORARY TABLE workOrder (jobId INT(11));
        INSERT INTO workOrder SELECT myID as jobId FROM myTable WHERE <CONDITION>;
        SELECT * FROM myTable WHERE myID IN (SELECT * FROM workOrder);
        <iterate over the selected set of lines. This may take some time.>
        UPDATE myTable SET processed=1 WHERE myID IN (SELECT * FROM workOrder);
        DROP TABLE workOrder;

    But this seems wasteful and not very efficient. Is there anything smarter? Many thanks from a SQL newbie.
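
    One sketch of a way around the temporary table: claim the rows first by stamping them with a batch token, then iterate over and flag only the rows carrying that token, so rows modified or inserted after the claim are simply left for the next run. Shown here through MySql.Data from C# since the language is not the point; the BatchToken column is an assumption (it would have to be added to myTable), and the WHERE clause stands in for the real <CONDITION>.

        using System;
        using MySql.Data.MySqlClient;

        static void ProcessBatch(string connectionString)
        {
            string token = Guid.NewGuid().ToString("N");
            using (var conn = new MySqlConnection(connectionString))
            {
                conn.Open();

                // Step 1: claim the matching, unprocessed rows.
                var claim = new MySqlCommand(
                    "UPDATE myTable SET BatchToken = @token " +
                    "WHERE processed = 0 /* plus the real <CONDITION> filter */", conn);
                claim.Parameters.AddWithValue("@token", token);
                claim.ExecuteNonQuery();

                // Step 2: work only on the rows claimed above.
                var read = new MySqlCommand(
                    "SELECT * FROM myTable WHERE BatchToken = @token", conn);
                read.Parameters.AddWithValue("@token", token);
                using (MySqlDataReader reader = read.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // ...long-running per-row processing...
                    }
                }

                // Step 3: flag exactly the claimed rows, nothing newer.
                var done = new MySqlCommand(
                    "UPDATE myTable SET processed = 1 WHERE BatchToken = @token", conn);
                done.Parameters.AddWithValue("@token", token);
                done.ExecuteNonQuery();
            }
        }

    This relies on the other process not touching the processed and BatchToken columns; if it can, the temporary-table approach (or SELECT ... FOR UPDATE over a short transaction) remains the safer route.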

    Read the article

  • Throttling outbound API calls generated by a Rails app

    - by Sharpie
    I am not a professional web developer, but I like to wrench on websites as a hobby. Recently, I have been playing with developing a Rails app as a project to help me learn the framework. The goal of my toy app is to harvest data from another service through their API and make it available for me to query using a search function. However, the service I want to pull data from imposes a rate limit on the number of API calls that may be executed per minute. I plan on having my app run a daily update which may generate a burst of API calls that far exceeds the limit provided by the external service. I wish to respect the performance of the external site and so would like to throttle the rate at which my app executes the calls. I have done a little bit of searching and the overwhelming amount of tutorial material and pre-built libraries I have found cover throttling inbound API calls to a web app and I can find little discussion of controlling the flow of outbound calls. Being both an amateur web developer and a rails newbie, it is entirely possible that I have been executing the wrong searches in the wrong places. Therefore my questions are: Is there a nice website out there aggregating Rails tutorials that has material related to throttling outbound API requests? Are there any ruby gems or other libraries that would help me throttle the requests? I have some ideas of how I might go about writing a throttling system using a queue-based worker like DelayedJob or Resque to manage the API calls, but I would rather spend my weekends building the rest of the site if there is a good pre-built solution out there already.

    Read the article

  • Help to understand the issue with protected method

    - by zeroed
    I'm reading the Sybex Complete Java 2 Certification Study Guide, April 2005 (ISBN 0782144195). This book is for Java developers who want to pass the Java certification. After a chapter about access modifiers (along with other modifiers) I found the following question (#17):

        True or false: If class Y extends class X, the two classes are in different packages, and class X has a protected method called abby(), then any instance of Y may call the abby() method of any other instance of Y.

    This question confused me a little. As far as I know you can call a protected method on any variable of the same class (or subclasses). You cannot call it on variables whose type is higher in the hierarchy than yours (e.g. interfaces that you implement). For example, you cannot clone any object just because you inherit it. But the question says nothing about variable type, only about instance type. I was confused a little and answered "true". The answer in the book is:

        False. An object that inherits a protected method from a superclass in a different package may call that method on itself but not on other instances of the same class.

    There is nothing here about variable type, only about instance type. This is very strange, I do not understand it. Can anybody explain what is going on here?

    Read the article

  • Redirect C++ std::clog to syslog on Unix

    - by kriss
    I work on Unix on a C++ program that sends messages to syslog. The current code uses the syslog call, which works like printf. Now I would prefer to use a stream for that purpose instead, typically the built-in std::clog. But clog merely redirects output to stderr, not to syslog, and that is useless for me as I also use stderr and stdout for other purposes. I've seen in another answer that it's quite easy to redirect it to a file using rdbuf(), but I see no way to apply that method to call syslog, as openlog does not return a file handle I could tie a stream to. Is there another method to do that? (It looks pretty basic for Unix programming.)

    Edit: I'm looking for a solution that does not use an external library. What @Chris is proposing could be a good start but is still a bit vague to become the accepted answer.

    Edit: using Boost.IOStreams is OK as my project already uses Boost anyway. Linking with an external library is possible but is also a concern, as it's GPL code. Dependencies are also a burden as they may conflict with other components, not be available on my Linux distribution, introduce third-party bugs, etc. If this is the only solution I may consider completely avoiding streams... (a pity).

    Read the article
