Search Results

Search found 13366 results on 535 pages for 'non ascii'.

Page 437/535 | < Previous Page | 433 434 435 436 437 438 439 440 441 442 443 444  | Next Page >

  • When *not* to use prepared statements?

    - by Ben Blank
    I'm re-engineering a PHP-driven web site which uses a minimal database. The original version used "pseudo-prepared-statements" (PHP functions which did quoting and parameter replacement) to prevent injection attacks and to separate database logic from page logic. It seemed natural to replace these ad-hoc functions with an object which uses PDO and real prepared statements, but after doing my reading on them, I'm not so sure. PDO still seems like a great idea, but one of the primary selling points of prepared statements is being able to reuse them… which I never will. Here's my setup: The statements are all trivially simple. Most are in the form SELECT foo,bar FROM baz WHERE quux = ? ORDER BY bar LIMIT 1. The most complex statement in the lot is simply three such selects joined together with UNION ALLs. Each page hit executes at most one statement and executes it only once. I'm in a hosted environment and therefore leery of slamming their servers by doing any "stress tests" personally. Given that using prepared statements will, at minimum, double the number of database round-trips I'm making, am I better off avoiding them? Can I use PDO::MYSQL_ATTR_DIRECT_QUERY to avoid the overhead of multiple database trips while retaining the benefit of parametrization and injection defense? Or do the binary calls used by the prepared statement API perform well enough compared to executing non-prepared queries that I shouldn't worry about it? EDIT: Thanks for all the good advice, folks. This is one where I wish I could mark more than one answer as "accepted" — lots of different perspectives. Ultimately, though, I have to give rick his due… without his answer I would have blissfully gone off and done the completely Wrong Thing even after following everyone's advice. :-) Emulated prepared statements it is!

    Read the article

  • How to do a recursive rake? -- or suitable alternatives

    - by TerryP
    I want my project's top-level Rakefile to build things using rakefiles deeper in the tree; i.e. the top-level Rakefile says how to build the project (big picture) and the lower-level ones build a specific module (local picture). There is of course a shared set of configuration for the minute details of doing that whenever it can be shared between tasks: so it is mostly about keeping the descriptions of what needs building as close as possible to the sources being built. E.g. /Source/Module/code.foo and company should be built using the instructions in /Source/Module/Rakefile; and /Rakefile understands the dependencies between modules. I don't care if it uses multiple rake processes (à la recursive make), or just creates separate build environments. Either way it should be self-contained enough to be processed by a queue: so that non-dependent modules could be built simultaneously. The problem is, how the heck do you actually do something like that with Rake!? I haven't been able to find anything meaningful on the Internet, nor in the documentation. I tried creating a new Rake::Application object and setting it up, but whatever methods I try invoking, only exceptions or "Don't know how to build task ':default'" errors get thrown. (Yes, all rakefiles have a :default.) Obviously one could just execute 'rake' in a subdirectory for a :modulename task, but that would ditch the options given to the top level; e.g. think of $(MAKE) and $(MAKEFLAGS). Anyone have a clue on how to properly do something like a recursive rake?

    Read the article

  • Elegantly handling constraint violations in EJB/JPA environment?

    - by hallidave
    I'm working with EJB and JPA on a GlassFish v3 app server. I have an Entity class where I'm forcing one of the fields to be unique with a @Column annotation. @Entity public class MyEntity implements Serializable { private String uniqueName; public MyEntity() { } @Column(unique = true, nullable = false) public String getUniqueName() { return uniqueName; } public void setUniqueName(String uniqueName) { this.uniqueName = uniqueName; } } When I try to persist an object with this field set to a non-unique value I get an exception (as expected) when the transaction managed by the EJB container commits. I have two problems I'd like to solve: 1) The exception I get is the unhelpful "javax.ejb.EJBException: Transaction aborted". If I recursively call getCause() enough times, I eventually reach the more useful "java.sql.SQLIntegrityConstraintViolationException", but this exception is part of the EclipseLink implementation and I'm not really comfortable relying on its existence. Is there a better way to get detailed error information with JPA? 2) The EJB container insists on logging this error even though I catch it and handle it. Is there a better way to handle this error which will stop GlassFish from cluttering up my logs with useless exception information? Thanks.

    Read the article

  • Where are the real risks in network security?

    - by Barry Brown
    Anytime a username/password authentication is used, the common wisdom is to protect the transport of that data using encryption (SSL, HTTPS, etc.). But that leaves the end points potentially vulnerable. Realistically, which is at greater risk of intrusion? Transport layer: Compromised via wireless packet sniffing, malicious wiretapping, etc. Transport devices: Risks include ISPs and Internet backbone operators sniffing data. End-user device: Vulnerable to spyware, key loggers, shoulder surfing, and so forth. Remote server: Many uncontrollable vulnerabilities including malicious operators, break-ins resulting in stolen data, physically heisting servers, backups kept in insecure places, and much more. My gut reaction is that although the transport layer is relatively easy to protect via SSL, the risks in the other areas are much, much greater, especially at the end points. For example, at home my computer connects directly to my router; from there it goes straight to my ISP's routers and onto the Internet. I would estimate the risks at the transport level (both software and hardware) at low to non-existent. But what security does the server I'm connected to have? Have they been hacked into? Is the operator collecting usernames and passwords, knowing that most people use the same information at other websites? Likewise, has my computer been compromised by malware? Those seem like much greater risks. What do you think?

    Read the article

  • Returned JSON from Twitter and displaying tweets using FlexSlider

    - by Trey Copeland
    After sending a request to the Twitter API using geocode, I'm getting back a JSON response with a list of tweets. I then decode that into a PHP array using json_decode() and use a foreach loop to output what I need. I'm using FlexSlider to show the tweets in a vertical fashion after wrapping them in a list. So what I want is for it to only show 10 tweets at a time and scroll through them infinitely like an escalator. Here's my loop to output the tweets: foreach ($tweets["results"] as $result) { $str = preg_replace('/[^\00-\255]+/u', '', $result["text"]); echo '<ul class="slides">'; echo '<li><a href="http://twitter.com/' . $result["from_user"] . '"><img src=' . $result["profile_image_url"] . '></a>' . $str . '</li><br /><br />'; echo '</ul>'; } My jQuery looks like this as of right now as I'm trying to play around with things: $(window).load(function() { $('.flexslider').flexslider({ slideDirection: "vertical", start: function(slider) { //$('.flexslider .slides > li:gt(10)').hide(); }, after: function(slider) { // current.sl } }); }); Non-working demo here - http://macklabmedia.com/tweet/

    Read the article

  • Why is it assumed that send may return with less than requested data transmitted on a blocking socket?

    - by Ernelli
    The standard method to send data on a stream socket has always been to call send with a chunk of data to write, check the return value to see if all data was sent and then keep calling send again until the whole message has been accepted. For example this is a simple example of a common scheme: int send_all(int sock, unsigned char *buffer, int len) { int nsent; while(len > 0) { nsent = send(sock, buffer, len, 0); if(nsent == -1) // error return -1; buffer += nsent; len -= nsent; } return 0; // ok, all data sent } Even the BSD manpage mentions that ...If no messages space is available at the socket to hold the message to be transmitted, then send() normally blocks... Which indicates that we should assume that send may return without sending all data. Now I find this rather broken, but even W. Richard Stevens assumes this in his standard reference book about network programming: not in the beginning chapters, but the more advanced examples use his own writen() ("write all data") function instead of calling write. Now I consider this still to be more or less broken, since if send is not able to transmit all data or accept the data in the underlying buffer and the socket is blocking, then send should block and return when the whole send request has been accepted. I mean, in the code example above, what will happen if send returns with less data sent is that it will be called right again with a new request. What has changed since the last call? At most a few hundred CPU cycles have passed, so the buffer is still full. If send now accepts the data, why couldn't it accept it before? Otherwise we will end up with an inefficient loop where we are trying to send data on a socket that cannot accept data and keep trying, or else? So it seems like the workaround, if needed, results in highly inefficient code, and in those circumstances blocking sockets should be avoided altogether and non-blocking sockets together with select should be used instead.
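
    Not from the original post, but to illustrate the select()-based alternative mentioned at the end: a minimal C++/POSIX sketch that, when the kernel buffer is full, blocks in select() until the socket is writable again instead of spinning on send(). Error handling (EINTR, peer close) is omitted for brevity, and the function name is made up for this example.

        // Sketch: send all data on a non-blocking socket, waiting with
        // select() for writability instead of busy-looping on send().
        #include <sys/select.h>
        #include <sys/socket.h>
        #include <cerrno>

        int send_all_nonblocking(int sock, const unsigned char *buffer, int len)
        {
            while (len > 0) {
                int nsent = (int)send(sock, buffer, len, 0);
                if (nsent > 0) {               // some (possibly partial) progress
                    buffer += nsent;
                    len -= nsent;
                    continue;
                }
                if (nsent == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
                    // Kernel send buffer is full: sleep until writable again.
                    fd_set wfds;
                    FD_ZERO(&wfds);
                    FD_SET(sock, &wfds);
                    if (select(sock + 1, 0, &wfds, 0, 0) == -1)
                        return -1;
                    continue;
                }
                return -1;                     // real error
            }
            return 0;                          // all data handed to the kernel
        }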

    Read the article

  • SQLite/Fluent NHibernate integration test harness initialization not repeatable after large data set

    - by Mark Rogers
    In one of my main data integration test harnesses I create and use Fluent NHibernate's SingleConnectionSessionSourceForSQLiteInMemoryTesting, to get a fresh session for each test. After each test, I close the connection, session, and session factory, and throw out the nested StructureMap container they came from. This works for almost any simple data integration test I can think of, including ones that utilize Fluent NHib's PersistenceSpecification object. When I test the application's lengthy database bootstrapping process, which creates and saves thousands of domain objects, I start seeing issues. It's not that the setup and tear down fails; in fact, the test successfully bootstraps the in-memory database as the application would bootstrap the real database in the production environment. The problem occurs when the database bootstrapping occurs a second time on a new in-memory database, with a new session and session factory. The error is: NHibernate.StaleStateException : Unexpected row count: 0; expected: 1 The row count is indeed Unexpected, the row that the application under test is looking for should be in the session. You see, it's not that any data from the last integration test is sticking around, it's that for some reason the session just stops working mid-database-bootstrap. And I've looked everywhere for a place I might be holding on to an old session and I can't find one. I've searched through the code for static singleton objects, but there are none anywhere near the code in question. I have a couple of StructureMap InstanceScope singletons but they are getting thrown out with each nested container that is lost after every test teardown. I've tried every possible variation on disposing and closing every object involved with each test teardown and it still fails on this lengthy database bootstrap. But non-bootstrap-related database tests appear to work fine. I'm starting to run out of options and may have to surrender lengthy database integration tests in favor of WatiN-based acceptance tests. Can anyone give me any clue about how I can figure out why some of my SingleConnectionSessionSourceForSQLiteInMemoryTesting aren't repeatable? Any advice at all, about how to make an NHibernate SQLite database integration test harness repeatable?

    Read the article

  • Problem with cruise control and visual svn

    - by Andrew
    Hi, I wonder if anyone can help. I am experiencing a strange issue with my configuration of CruiseControl.NET and VisualSVN. Here is the current ccnet.config: <sourcecontrol type="svn"> <trunkUrl>https://bladerunner.azullo.local:8443/svn/application/trunk</trunkUrl> <executable>C:\Program Files (x86)\VisualSVN Server\bin\svn.exe</executable> <username>test</username> <password>test</password> <workingDirectory>D:\Development\Build\application\</workingDirectory> </sourcecontrol> <publishers> <xmllogger/> </publishers> <modificationDelaySeconds>10</modificationDelaySeconds> </project> When I run this I expect it to go to https://bladerunner.azullo.local:8443/svn/application/trunk, however I get the following: ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed: svn: OPTIONS of 'http://bladerunner.azullo.local:8080/svn/application/trunk': could not connect to server (http://bladerunner.azullo.local:8080) . Process command: C:\Program Files (x86)\VisualSVN Server\bin\svn.exe update D:\Development\build\application\ --username test --password ** --no-auth-cache --non-interactive at ThoughtWorks.CruiseControl.Core.Sourcecontrol.ProcessSourceControl.Execute(ProcessInfo processInfo) at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Svn.UpdateSource(IIntegrationResult result) at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Build(IIntegrationResult result) at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request) So for some reason it goes to 'http://bladerunner.azullo.local:8080/svn/application/trunk'. If I remove the username and password elements in the ccnet.config, it goes to the correct URL. I don't understand this behaviour. I have configured VisualSVN with a certificate using Active Directory Certificate Services; if this was the problem I would expect it to show an error regarding the certificate instead of changing the URL. I have cleared out state, etc. Any ideas?

    Read the article

  • What's the compelling reason to upgrade to Visual Studio 2010 from VS2008?

    - by Cheeso
    Are there new features in Visual Studio 2010 that are must-haves? If so, which ones? For me, the big draws for VS2008 as compared to VS2005 were LINQ, .NET Framework multitargeting, WCF (REST + Syndication), and general devenv.exe reliability. Granted, some of these features are framework things, and not tool things. For the purposes of this discussion, I'm willing to combine them into one bucket. What is the list of must-have features for VS2010 versus VS2008? Are there any? I am particularly interested in C#. Update: I know how to google, so I can get the official list from Microsoft. I guess what I really wanted was the assessment from people using it as to which things are really notable. Microsoft went on for 3 pages about 2008/3.5 features, and many people sort of boiled it down to LINQ and a few other things. What is that short list for VS2010? Summary so far of what people think is cool or compelling -- Visual Studio engine: multi-monitor support, a new extensibility model based on WPF (prettier and more usable), new TFS stuff incl. automated test tools, parallel debugging. .NET Framework: parallel extensions for .NET. C# 4.0: generic variance, optional and named params, easier interop with non-managed environments like COM or Javascript. VB 10.0: collection and array literals / initializers, automatic properties, anonymous methods / statement lambdas. I read up on these at Zander's blog. He described these and other features. Nobody on this list said anything about -- Visual Studio engine: F# support, Javascript code-completion, JQuery is now included, UML, better SharePoint capabilities, C++ moves to msbuild project files.

    Read the article

  • Flex 3 - Issues with textArea "editable" property

    - by BS_C3
    Hello Community! I'm having issues with the "editable" property of the TextArea control. I have a component, OrderView.mxml, and its associated data class OrderViewData.as. OrderView.mxml is inside a viewStack to enable navigation from one component to another. In this particular case, OrderView.mxml is called by another component: SearchResult.mxml. I can thus navigate from SearchResult.mxml to OrderView.mxml, and back to SearchResult.mxml... OrderView.mxml has TextArea and TextInput controls that have to be editable or non-editable depending on the property var isEditable:Boolean from OrderViewData.as. When the application is launched, isEditable = true. So, all TextInput and TextArea controls are editable the first time the user gets to OrderView.mxml. When the user clicks on the order button from OrderView.mxml, isEditable = false. When the user goes back to SearchResult.mxml, isEditable = true (again) -- Until here, everything works fine. The thing is: when the user goes back to OrderView.mxml for the second time (and beyond), even if the property isEditable = true, TextArea controls are still non-editable... But the TextInput controls are editable! Here is some code for your comprehension: OrderView.mxml <mx:Canvas xmlns:mx="http://www.adobe.com/2006/mxml" backgroundColor="#F3EDEC"> <mx:TextArea id="contentTA" text="{OrderViewData.instance.contentTA}" enabled="{OrderViewData.instance.isEnabled}" width="100%" height="51" maxChars="18" styleName="ORTextInput" focusIn="if(OrderViewData.instance.isEditable) contentTA.setSelection(0, contentTA.length)"/> <mx:TextInput id="contentTI" text="{OrderViewData.instance.contentTI}" width="40" height="18" maxChars="4" styleName="ORTextInput" change="contentTI_change()" focusIn="if(OrderViewData.instance.isEditable) contentTI.setSelection(0, contentTI.length)" editable="{OrderViewData.instance.isEditable}"/> </mx:Canvas> Am I missing something? Thanks for any help you can provide. Regards. BS_C3

    Read the article

  • Purge complete Python installation on OS X

    - by Konrad Rudolph
    I’m working on a recently-upgraded OS X Snow Leopard and MacPorts and I’m running into problems at every corner. The first problem is the sheer number of installed Python versions: altogether, there are four: 2.5, 2.6 and 3.0 in /Library/Frameworks/Python.framework 2.6 in /opt/local/Library/Frameworks/Python.framework/ (MacPorts installation) So there are at least two useless/redundant versions: 2.5 and the redundant 2.6. Additionally, the pre-installed Python is giving me severe problems because some of the pre-installed libraries (in particular, scipy, numpy and matplotlib) don’t work properly. I am sorely tempted to purge the complete /Library/Frameworks/Python.framework path, as well as the MacPorts Python installation. After that, I’ll start from a clean slate by installing a properly configured Python, e.g. that from Enthought. Am I running headlong into trouble? Or is this a sane undertaking? (In particular, I need a working Python in the next few days and if I end up with a non-working Python this would be a catastrophe of medium proportions. On the other hand, some features I need from matplotlib aren’t working now.)

    Read the article

  • Android 2.1 fling gesture captured on textview but still a contextmenu opens

    - by hermo
    The following problem seems unique to 2.1; it happens both on an emulator and on a Nexus. The same example works fine on other platforms I've tested (1.5, 1.6 and 2.0 emulators). I've created a gestureListener as described in this post. The difference is that I've added the listener on a TextView which also has a contextMenu registered, i.e. something like the following: onCreate(...) { ... // Layout contains a large TextView on which I want to add a context menu tv = findViewById(R.id.text_view); registerForContextMenu(tv); // create the gestureListener according to the above-mentioned post gestureListener = ... // set the listener on the text-view tv.setOnTouchListener(gestureListener); ... } When testing it, the correct gesture is recognized alright, but every other time it also causes the context menu to be opened. As the same example is working on non-2.1 platforms, I've got a feeling it is not my code that is the problem... Thankful for any suggestions.

    Read the article

  • nhibernate not taking mappings from assembly

    - by cvista
    Hi, I'm using FNH and the Castle NHibernate facility. I followed the advice from Mike Hadlow here: http://mikehadlow.blogspot.com/2009/01/integrating-fluent-nhibernate-and.html Here is my FluentNHibernateConfigurationBuilder: public Configuration GetConfiguration(IConfiguration facilityConfiguration) { var defaultConfigurationBuilder = new DefaultConfigurationBuilder(); var configuration = defaultConfigurationBuilder.GetConfiguration(facilityConfiguration); configuration.AddMappingsFromAssembly(typeof(User).Assembly); return configuration; } I know the facility is picking it up as I can break inside that method and it steps through. However, when it's done, none of the mappings are created and I get the following error when I try to save an entity: No persister for: IsItGd.Model.Entities.User Here is my user class: //simple model of web user public class User { public virtual int Id { get; set; } public virtual string FullName { get; set; } } and here is the mapping: public class UserMap : ClassMap<User> { public UserMap() { Id(x=>x.Id); Map(x=>x.FullName); } } I really can't see what the problem is. The strange thing is that if I use automapping it picks everything up - but I don't want to use automapping as I can't do certain things in that scenario. Any clues? w://

    Read the article

  • Normalize whitespace and other plain-text formatting routines

    - by dreftymac
    Background: The language is JavaScript. The goal is to find a library or pre-existing code to do low-level plain-text formatting. I can write it myself, but why re-invent the wheel. The issue is: it is tough to determine if a "wheel" is out there, since any search for JavaScript libraries pulls up an ocean of HTML-centric stuff. I am not interested in HTML necessarily, just text. Example: I need a JavaScript function that changes this: BEFORE: nisi ut aliquip | ex ea commodo consequat duis |aute irure dolor in esse cillum dolore | eu fugiat nulla pariatur |excepteur sint occa in culpa qui | officia deserunt mollit anim id |est laborum ... into this ... AFTER: nisi ut aliquip | ex ea commodo consequat duis | aute irure dolor in esse cillum dolore | eu fugiat nulla pariatur | excepteur sint occa in culpa qui | officia deserunt mollit anim id | est laborum Question: Does it exist, a JavaScript library that is non-html-web-development-centric that has functions for normalizing spaces in delimited plain text, justifying and spacing plain text? Rationale: Investigating JavaScript for use in a programmer's text editor.
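
    Not part of the original question, but to pin down what the BEFORE -> AFTER example seems to ask for, here is a language-agnostic sketch (written in C++ rather than JavaScript) of one reading of it: trim each '|'-delimited field and re-join the fields with a single " | " separator. The function names are invented for this illustration.

        // Sketch: normalize whitespace around '|' delimiters in one line.
        #include <iostream>
        #include <sstream>
        #include <string>

        static std::string trim(const std::string &s)
        {
            const std::string ws = " \t";
            std::size_t b = s.find_first_not_of(ws);
            if (b == std::string::npos) return "";
            std::size_t e = s.find_last_not_of(ws);
            return s.substr(b, e - b + 1);
        }

        std::string normalize_pipes(const std::string &line)
        {
            std::istringstream in(line);
            std::string field, out;
            while (std::getline(in, field, '|')) {
                if (!out.empty()) out += " | ";
                out += trim(field);
            }
            return out;
        }

        int main()
        {
            // prints: nisi ut aliquip | ex ea commodo | aute irure
            std::cout << normalize_pipes("nisi ut aliquip | ex ea commodo |aute irure") << "\n";
        }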

    Read the article

  • Possible XML elements in Android resource XMLs?

    - by Vijay C
    I recently came across a forum post in the Android developers group (link below). It has one post which contains the following XML. It is an XML file which can be used in the drawable folder. <selector xmlns:android="http://schemas.android.com/apk/res/android"> <!-- Non focused states --> <item android:state_focused="false" android:state_selected="false" android:state_pressed="false" android:drawable="@drawable/tab_unselected" /> <item android:state_focused="false" android:state_selected="true" android:state_pressed="false" android:drawable="@drawable/tab_selected" /> <!-- Focused states --> <item android:state_focused="true" android:state_selected="false" android:state_pressed="false" android:drawable="@drawable/tab_focus" /> <item android:state_focused="true" android:state_selected="true" android:state_pressed="false" android:drawable="@drawable/tab_focus"/> <!-- Pressed --> <item android:state_pressed="true" android:drawable="@drawable/tab_press" /> </selector> Now my question is: where is the documentation which tells me that I can use the following XML tags <selector>, <item>, <color>, <integer-array>, <string-array> and many other tags to create XML resources...? I hope I have explained what I want to know.

    Read the article

  • What is the 'page lifecycle' of an ASP.NET MVC page, compared to ASP.NET WebForms?

    - by Simon
    What is the 'page lifecycle' of an ASP.NET MVC page, compared to ASP.NET WebForms? I'm trying to better understand this 'simple' question in order to determine whether or not existing pages I have in a (very) simple site can be easily converted from ASP.NET WebForms. Either a 'conversion' of the process below, or an alternative lifecycle would be what I'm looking for. What I'm currently doing: (yes, I know that anyone capable of answering my question already knows all this -- I'm just trying to get a comparison of the 'lifecycle', so I thought I'd start by filling in what we already all know) Rendering the page: I have a master page which contains my basic template. I have content pages that give me named regions from the master page into which I put content. In an event handler for each content page I load data from the database (mostly read-only). I bind this data to ASP.NET controls representing grids, dropdowns or repeaters. This data all 'lives' inside the HTML generated. Some of it gets into ViewState (but I won't go into that too much!) I set properties or bind data to certain items like Image or TextBox controls on the page. The page gets sent to the client rendered as non-reusable HTML. I try to avoid using ViewState other than what the page needs as a minimum. Client side (not using ASP.NET AJAX): I may use JQuery and some nasty tricks to find controls on the page and perform operations on them. If the user selects from a dropdown -- a postback is generated which triggers a C# event in my codebehind. This event may go to the database, but whatever it does a completely newly generated HTML page ends up getting sent back to the client. I may use Page.Session to store key value pairs I need to reuse later. So with MVC how does this 'lifecycle' change?

    Read the article

  • Ideas for a rudimentary software licensing implementation

    - by Ross
    I'm trying to decide how to implement a very basic licensing solution for some software I wrote. The software will run on my (hypothetical) clients' machines, with the idea being that the software will immediately quit (with a friendly message) if the client is running it on greater-than-n machines (n being the number of licenses they have purchased). Additionally, the clients are non-tech-savvy to the point where "basic" is good enough. Here is my current design, but given that I have little to no experience in the topic, I wanted to ask SO before I started any development on it: A remote server hosts a MySQL database with a table containing two columns: client-key and license quantity. The client-side application connects to the MySQL database on startup, offering its client-key that I've put into a properties file packaged into the distribution (I would create a new distribution for each new client). Chances are, I'll need a second table to store validation history, so that with some short logic, the software can decide if it can be run on a given machine (maybe a sliding window of n machines using the software per 24 hours). If the software cannot establish a connection to the MySQL database, or decides that it's over the n allowed machines per day, it closes. The connection info for the remote server hosting the MySQL database should be hard-coded into the app? (That sounds like a bad idea, but otherwise they could point it to some other always-validates-to-success server.) I think that about covers my initial design. The intent being that while it certainly isn't foolproof, I think I've made it at least somewhat difficult to create an easily-sharable cracking solution. Also, I can easily adjust the license amount for a given client/key pair. I gotta figure this has been done a million times before, so tell me about a better solution that's just as simple to implement and provides the same (low) amount of security. In the event that external libraries are used, I prefer Java, as that's what the software has been written in.

    Read the article

  • How to access application.xml file of an EAR deployed to IBM WebSphere 6.1

    - by Matt1776
    I am deploying an EAR file to the IBM WebSphere server 6.1 - I want to be able to access the EAR application name which is stored in the deployment file under 'display-name'. Looking through Stack Overflow posts on related subjects, I've been able to gather that this is possible via the Java MBean API - or IBM's WAS API - the problem is I cannot find a place where these APIs are summarized, i.e. I cannot figure out which one to begin looking at. I could hardcode the WAS install location and find the file by looking in the 'installedApps' directory, but this is not dynamic. Does anyone have any experience working with these APIs? Any other way to dynamically find the deployed EAR's display name? EDIT - I should add that the reason I would like this information is to dynamically load our properties files - which are named by the following convention "EARAppName.properties" - so you see there IS a reasonable 'rationale' behind desiring this information in my application. EDIT 2 - I should also note that this app will always be deployed on a WAS - but in the case that it isn't, a generic non-proprietary solution would be preferred, but not necessary at this moment. EDIT 3 - What I want to accomplish: Is there a way to dynamically find the deployed EAR's display name from within the application code?

    Read the article

  • What's the correct way to remove a boost::shared_ptr from a list?

    - by Catskul
    I have a std::list of boost::shared_ptr<T> and I want to remove an item from it but I only have a pointer of type T* which matches one of the items in the list. However I can't use myList.remove( tPtr ), I'm guessing because shared_ptr does not implement == for its template argument type. My immediate thought was to try myList.remove( shared_ptr<T>(tPtr) ) which is syntactically correct but it will crash from a double delete since the temporary shared_ptr has a separate use_count. std::list< boost::shared_ptr<T> > myList; T* tThisPtr = new T(); // This is wrong; only done for example code. // stand-in for actual code in T using // T's actual "this" pointer from within T { boost::shared_ptr<T> toAdd( tThisPtr ); // typically would be new T() myList.push_back( toAdd ); } { //T has pointer to myList so that upon a certain action, // it will remove itself from the list //myList.remove( tThisPtr); //doesn't compile myList.remove( boost::shared_ptr<T>(tThisPtr) ); // compiles, but causes // double delete } The only options I see remaining are to use std::find with a custom compare, or to loop through the list brute force and find it myself, but it seems there should be a better way. Am I missing something obvious, or is this just too non-standard a use to be doing a remove the clean/normal way?
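
    Not from the original question, but a sketch of the custom-compare option it mentions: compare against the raw pointer with shared_ptr::get(), so no second shared_ptr (and therefore no second use_count) is ever created and the double delete never happens. The lambda assumes a C++11 compiler; with an older toolchain a small functor or boost::bind predicate does the same job.

        // Sketch: erase the element whose managed pointer equals a raw T*.
        #include <list>
        #include <boost/shared_ptr.hpp>

        struct T { };

        void remove_raw(std::list< boost::shared_ptr<T> > &myList, T *tPtr)
        {
            myList.remove_if([tPtr](const boost::shared_ptr<T> &p) {
                return p.get() == tPtr;   // identity check, no ownership taken
            });
        }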

    Read the article

  • Which PHP library should I choose to work with CouchDB?

    - by Guss
    I want to try playing with CouchDB for a new project I'm writing (as a hobby, not part of my job). I'm well versed in PHP, but I haven't programmed with CouchDB at all, and also I have little experience with non-SQL databases. From looking at CouchDB's "Getting Started with PHP" document they recommend using a third-party library or writing your own client using their RESTful HTTP API. I think I'd rather not mess with writing protocol implementations myself at this point, but what is your experience with writing PHP to work with CouchDB? I haven't tested any of the alternatives yet, but I looked at: PHPillow : I'm interested in the way they implement ORM. I wasn't planning to do ORM, but my problem domain probably map well to that method. PHP Object Freezer: seems like a poor man's ORM - I can use it to implement an actual ORM, or just as an easy store/retrieve document API but it seems too primitive. PHP-on-Couch : Also a bit simple, but they have an interesting API for views and from the documentation it looks usable enough. PHP CouchDB Extension : From the listed options this looks like it has the best chance of making it into the PHP mainline itself, and also has the most complete API. Any opinion one wish to share on each library is welcome.

    Read the article

  • Datagridview Winforms Add or Delete Rows in Edit Mode with Collection Binding

    - by user630548
    In my datagridview, I bind a List of objects named 'ProductLine'. But unfortunately with this approach I cannot 'Add' or 'Delete' rows in the datagridview in edit mode. When I create a fresh new order I can Add or Delete rows, but once I save it and try to open it in Edit then it doesn't let me 'Add' or 'Delete' (via keyboard). Any idea? Here is the code for this: If it is a fresh Order then I do something like this: private void Save(){ for (int i = 0; i <= dtgProdSer.RowCount - 1; i++) { if ((itemList != null) && (itemList.Count > i)) productLine = itemList[i]; else productLine = new ProductLine(); productLine.Amount = Convert.ToDouble(dataGridViewTextBoxCell.Value); } } And if it is an Edit, then in the Form_Load I check whether ProductId is non-zero and then do the following: private void fillScreen() { dtgProdSer.DataSource = itemList; } But with this I cannot Add or Delete rows in Edit mode. Any advice is greatly appreciated.

    Read the article

  • Resolving Circular References for Objects Implementing ISerializable

    - by Chris
    I'm writing my own IFormatter implementation and I cannot think of a way to resolve circular references between two types that both implement ISerializable. Here's the usual pattern: [Serializable] class Foo : ISerializable { private Bar m_bar; public Foo(Bar bar) { m_bar = bar; m_bar.Foo = this; } public Bar Bar { get { return m_bar; } } protected Foo(SerializationInfo info, StreamingContext context) { m_bar = (Bar)info.GetValue("1", typeof(Bar)); } public void GetObjectData(SerializationInfo info, StreamingContext context) { info.AddValue("1", m_bar); } } [Serializable] class Bar : ISerializable { private Foo m_foo; public Foo Foo { get { return m_foo; } set { m_foo = value; } } public Bar() { } protected Bar(SerializationInfo info, StreamingContext context) { m_foo = (Foo)info.GetValue("1", typeof(Foo)); } public void GetObjectData(SerializationInfo info, StreamingContext context) { info.AddValue("1", m_foo); } } I then do this: Bar b = new Bar(); Foo f = new Foo(b); bool equal = ReferenceEquals(b, b.Foo.Bar); // true // Serialise and deserialise b equal = ReferenceEquals(b, b.Foo.Bar); If I use an out-of-the-box BinaryFormatter to serialise and deserialise b, the above test for reference-equality returns true as one would expect. But I cannot conceive of a way to achieve this in my custom IFormatter. In a non-ISerializable situation I can simply revisit "pending" object fields using reflection once the target references have been resolved. But for objects implementing ISerializable it is not possible to inject new data using SerializationInfo. Can anyone point me in the right direction?

    Read the article

  • ASP.NET MVC on GoDaddy Not Working (Not Primary Domain Deployment)

    - by JPrescottSanders
    I am trying to get ASP.NET MVC working on GoDaddy and I'm not having much luck. I have read the post on SO that covers the subject, but I must have a slightly different configuration or must be missing something along the way because the main MVC page comes up, but all links seem to fail and no amount of tweaking the URLs seems to get it to work. A little background: I have a single hosting plan with many domains pointed to sub folders of the main domain. Basic ASP.NET web forms pages work just fine, but of course I wanted to try and host a sample MVC site in one of these non-primary domains. You can go to the URL here. As you can see this first page comes up, but if you click on Home or About it doesn't work. Clicking on Home creates this link "http://www.jprescottsanders.com/jps/" and clicking on About creates this link "http://www.jprescottsanders.com/jps/Home/About". As you can see JPS sneaks in there; this of course is the sub folder that I place my web app files in. I would like to know if this is an MVC-related issue or a GoDaddy issue. I suspect that MVC may want to sit in the root directory of the site, and when it puts the "jps" into the URLs it breaks the routing mechanisms (but this is conjecture). I know Dan said this was possible so I'm hoping he sees this and helps me get to the bottom of this deployment strategy for MVC.

    Read the article

  • Classification: Dealing with Abstain/Rejected Class

    - by abner.ayala
    I am asking for your input and/or help on a classification problem. If anyone has any references that I can read to help me solve my problem, even better. I have a classification problem of four discrete and very well separated classes. However my input is continuous and has a high frequency (50Hz), since it's a real-time problem. The circles represent the clusters of the classes, the blue line the decision boundary, and Class 5 is the neutral/resting "do nothing" class. This class is the rejected class. However the problem is that when I move from one class to the other I activate a lot of false positives in the transition movements, since the movement is clearly non-linear. For example, every time I move from class 5 (neutral class) to 1 I first see a lot of 3's before getting to the 1 class. Ideally, I would want my decision boundary to look like the one in the picture below, where the rejected class (Class 5) has a higher decision boundary than the other classes to avoid misclassification during transitions. I am currently implementing my algorithm in Matlab using Matlab's optimized naive Bayes, kNN, and SVM algorithms. Question: What is the best/common way to handle an abstain/rejected class? Should I use fuzzy logic, a loss function, or should I include the resting cluster in the training?
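
    Not from the original post, but one common way to give the rejected class a "higher decision boundary" is a confidence (Chow-style) rejection rule: predict the winning class only when its posterior/score clears a threshold, otherwise abstain into class 5. The sketch below is language-agnostic (written in C++ here); the class numbering, threshold value, and function name are assumptions for the illustration, and smoothing the decision over a short window of the 50 Hz samples further suppresses transition artifacts.

        // Sketch: argmax with a rejection threshold ("abstain" class).
        #include <cstddef>
        #include <vector>

        const int REJECT_CLASS = 5;   // the neutral/resting class

        // posterior[k] is the score/probability for class k+1 (classes 1..4).
        int classify_with_reject(const std::vector<double> &posterior,
                                 double threshold)   // e.g. 0.8
        {
            std::size_t best = 0;
            for (std::size_t k = 1; k < posterior.size(); ++k)
                if (posterior[k] > posterior[best])
                    best = k;

            // Commit only when the winner is confident enough; otherwise
            // report the resting class instead of a possibly wrong label.
            if (posterior[best] < threshold)
                return REJECT_CLASS;
            return static_cast<int>(best) + 1;
        }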

    Read the article

  • Memory Bandwidth Performance for Modern Machines

    - by porgarmingduod
    I'm designing a real-time system that occasionally has to duplicate a large amount of memory. The memory consists of non-tiny regions, so I expect the copying performance will be fairly close to the maximum bandwidth the relevant components (CPU, RAM, MB) can do. This led me to wonder what kind of raw memory bandwidth a modern commodity machine can muster? My aging Core2Duo gives me 1.5 GB/s if I use 1 thread to memcpy() (and understandably less if I memcpy() with both cores simultaneously.) While 1.5 GB is a fair amount of data, the real-time application I'm working on will have something like 1/50th of a second, which means 30 MB. Basically, almost nothing. And perhaps worst of all, as I add multiple cores, I can process a lot more data without any increased performance for the needed duplication step. But a low-end Core2Duo isn't exactly hot stuff these days. Are there any sites with information, such as actual benchmarks, on raw memory bandwidth on current and near-future hardware? Furthermore, for duplicating large amounts of data in memory, are there any shortcuts, or is memcpy() as good as it will get? Given a bunch of cores with nothing to do but duplicate as much memory as possible in a short amount of time, what's the best I can do?
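
    Not from the original question, but a quick way to put a number on the single-threaded memcpy() figure for a given box; the buffer size and repetition count are arbitrary choices, and results will vary a lot with caches, NUMA placement, and compiler flags.

        // Sketch: rough single-threaded memcpy() bandwidth micro-benchmark.
        #include <chrono>
        #include <cstdio>
        #include <cstring>
        #include <vector>

        int main()
        {
            const std::size_t size = 256 * 1024 * 1024;   // 256 MB per buffer
            const int reps = 10;

            std::vector<unsigned char> src(size, 1), dst(size, 0);

            auto start = std::chrono::steady_clock::now();
            for (int i = 0; i < reps; ++i)
                std::memcpy(dst.data(), src.data(), size);
            auto stop = std::chrono::steady_clock::now();

            double seconds = std::chrono::duration<double>(stop - start).count();
            double gb = double(size) * reps / (1024.0 * 1024.0 * 1024.0);
            std::printf("copied %.1f GB in %.3f s -> %.2f GB/s\n",
                        gb, seconds, gb / seconds);
            return 0;
        }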

    Read the article
