Search Results

Search found 20484 results on 820 pages for 'small projects'.


  • How can you make a PHP application require a key to work?

    - by jasondavis
    About four years ago I used a PHP product called aMember Pro. It is a membership script with plugins for something like 30 different payment processors, and it was an easy way to set up an automated membership site where users pay and get access to a certain area. The script used ionCube (http://www.ioncube.com/sa_encoder.php) to prevent non-paying users from using it: it required that you register the domain the script would be used on, and you were then given a key to enter into a file that would make the system/script work.

    Now I want to know how to do such a task. I know the ionCube encoder just makes the code hard to see. In the script I mentioned, only a small section at the top of one of the included pages was encrypted; without that part of the code the script would break, and if the owner of the script had not put your domain in the list and given you a valid key, it would not work. If you tried to use the script on a different domain, it also would not work. I assume that somewhere in the encrypted code it either sent your key to their server and checked that it was valid for the domain name it was on, or, more likely, simply verified that the key matched the domain the script was running on.

    Here is where the real question is: how would you make a script require the portion that is encrypted? If I made a script with a small encrypted part at the top, it seems a user could easily just remove the encrypted part, figure out what the non-encrypted part is doing, and fix it to work. Any ideas?
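
    For the key-to-domain half (the tamper-resistance half is what the encoder itself provides), here is a minimal sketch of the scheme the poster guesses at, with hypothetical names: the vendor derives the key from the customer's domain with a secret that lives only inside the encoded block, and the encoded block re-derives and compares it at runtime.

        <?php
        // Hypothetical sketch, not aMember's actual mechanism.
        define('LICENSE_KEY', 'the value the vendor issued for this domain');

        $secret   = 'vendor-only-secret';   // assumed to exist only inside the encoded block
        $domain   = $_SERVER['HTTP_HOST'];  // the domain the script is running on
        $expected = hash_hmac('sha256', $domain, $secret);

        if (!hash_equals($expected, LICENSE_KEY)) { // hash_equals needs PHP >= 5.6; === also works
            die('This copy is not licensed for ' . $domain);
        }

    Note this only answers "how does the key bind to the domain"; without the encoder, anyone can still delete the check, which is exactly the weakness the question points out.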

    Read the article

  • Eclipse PDT "tips" ?

    - by Pascal MARTIN
    Hi! (Yes, this is quite an open, general, subjective question -- that's by design, because I want the tips you think are great!)

    I'm using Eclipse PDT 2.1 to work in PHP, on both small and big projects -- I've been doing so for quite some time now, actually (since before 1.0 stable, if I remember well). I was wondering if any of you know "tips" for being more efficient.

    Let me explain in more detail: I know about plugins like Aptana (a better editor for JS/CSS), Subversive (for SVN access), RSE, FileSync, integrating Xdebug's debugger, and so on. What I mean by "tips" is more the little things you discovered one day and have used all the time since -- things that let you be more efficient in your PHP projects. Some examples of tips that come to mind, and that I already know and use:

    - Ctrl+Space to open the list of suggestions for function/variable names.
    - Ctrl+Shift+R (Navigate > Open Resource) to open a popup that shows only files whose names contain what you type, i.e. quick opening of files. This one might be the perfect example: it is not often known by coworkers, and they find it as useful as I do, so I guess there are lots of other things like it that I don't know myself ^^
    - Ctrl+M to switch to full-screen view for the editor (instead of double-clicking the tab bar).
    - Shift+F2 while on a function name, to open its page of the PHP manual in a browser.
    - (Mac users: use Command instead of Control.)

    I guess you get the point, but I'm really open to any suggestion (be it Eclipse-related in general, or more PHP/PDT-specific) that can help me be more efficient :-) Anyway, thanks in advance for your help!

    Read the article

  • Cannot connect to Github?

    - by user2973438
    So I tried to push some updates to my repo on GitHub via Terminal on Mac OS X 10.8.4 and it doesn't work. I've been getting the same error many times:

        Lillys-MacBook-Air:Yuewei Lilly$ git push origin master
        error: Failed connect to github.com:443; Operation timed out while accessing
        https://github.com/lillybeans/Yuewei.git/info/refs?service=git-receive-pack
        fatal: HTTP request failed

    Some background: I've pushed many projects to GitHub from Terminal before (when I was in Canada). I am currently in Shanghai, China -- could it be the GFW? When I was in Beijing, though, I was still able to push projects to GitHub. When I ping github.com:

        Lillys-MacBook-Air:Yuewei Lilly$ ping github.com
        PING github.com (192.30.252.131): 56 data bytes
        Request timeout for icmp_seq 0
        Request timeout for icmp_seq 1
        ping: sendto: No route to host
        Request timeout for icmp_seq 2
        ping: sendto: Host is down
        Request timeout for icmp_seq 3
        ping: sendto: Host is down
        Request timeout for icmp_seq 4
        ping: sendto: Host is down
        Request timeout for icmp_seq 5
        ping: sendto: Host is down
        Request timeout for icmp_seq 6
        ping: sendto: Host is down
        Request timeout for icmp_seq 7
        ^C
        --- github.com ping statistics ---
        9 packets transmitted, 0 packets received, 100.0% packet loss

    I have ShadowSocks (a proxy) turned on. Without it I can't access github.com via a browser; with it, I can. Also, when I run "git remote -v" I see both my pull and push remote repos correctly listed. Thank you in advance!
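
    For reference, a sketch of how to send git's HTTPS traffic through the local ShadowSocks SOCKS proxy, assuming the client listens on its default local port 1080 (check your ShadowSocks settings; the port is an assumption):

        # route git's HTTP(S) traffic through the local SOCKS5 proxy
        git config --global http.proxy  socks5://127.0.0.1:1080
        git config --global https.proxy socks5://127.0.0.1:1080

        # undo once you no longer need the proxy
        git config --global --unset http.proxy
        git config --global --unset https.proxy

    The browser working only with ShadowSocks on suggests git simply isn't using the proxy; ping failing proves little either way, since ICMP is often blocked even when TCP works.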

    Read the article

  • Windows 7 Create Folder in "Program Files" failing in C# code even though I have admin rights!

    - by Shiva
    Hi, I am unable to create a file under the "Program Files" folder on my Windows 7 64-bit machine, in VS 2008 WPF C# code. The error I get on the following line:

        myFile = File.Create(logFile);

    is the following (this is the inner exception's stack trace):

        at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
        at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy)
        at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options)
        at System.IO.File.Create(String path)
        at MyFirm.MyPricingApp.UI.App.InitializeLogging() in C:\Projects\MyPricingApp\App.xaml.cs:line 150
        at MyFirm.MyPricingApp.UI.App.Application_Startup(Object sender, StartupEventArgs e) in C:\Projects\MyPricingApp\App.xaml.cs:line 38
        at System.Windows.Application.OnStartup(StartupEventArgs e)
        at System.Windows.Application.<.ctor>b__0(Object unused)
        at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter)
        at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler)

    This seems like it has something to do with UAC in Windows 7, because why else would I get this when my user is already an admin on the machine?! Also, since WinIOError mentions SECURITY_ATTRIBUTES, I am thinking this has something to do with a "new way" security is handled in Windows 7. I tried browsing to the "Program Files" folder under which the log folder and file were to be created. I can create the folder by hand, but when I try to create the file, I get a similar "Access Denied" exception.
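
    A sketch of the common workaround rather than a diagnosis: under UAC, even administrators run non-elevated by default, so Program Files is effectively read-only at runtime, and writable per-machine data conventionally goes under CommonApplicationData instead (folder and file names below are placeholders):

        using System;
        using System.IO;

        static FileStream CreateLog()
        {
            // e.g. C:\ProgramData on Windows 7 -- writable without elevation
            string baseDir = Environment.GetFolderPath(
                Environment.SpecialFolder.CommonApplicationData);
            string logDir = Path.Combine(Path.Combine(baseDir, "MyPricingApp"), "logs");
            Directory.CreateDirectory(logDir);   // no-op when it already exists
            return File.Create(Path.Combine(logDir, "app.log"));
        }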

    Read the article

  • What are the advantages of using J2EE over ASP.net?

    - by m_oLogin
    We are currently planning to launch a couple of internal web projects in the future. Our company's dev teams are mostly experienced in J2EE and have worked with it for years. Today, we have the choice of launching a couple of our projects on .NET.

    I have checked out a couple of sources on the net, and it seems the "J2EE vs ASP.NET" combat brings out as much discord as the proverbial "Apple vs Microsoft" or "free Eclipse vs Visual Studio"... Nevertheless, I have been quite impressed with ASP.NET's ability to create great things with huge simplicity (for example, the ASP.NET AJAX demos). No more tons of XML to play with, no more tons of frameworks to configure (we usually use the famous Struts/Spring/Hibernate combo)... It just seemed to me that ASP.NET had some good advantages over J2EE, but then again, I may be speaking from ignorance. What I want to know is this:

    - What are the real advantages of using J2EE over ASP.NET?
    - Is there anything that cannot be done in ASP.NET that can be done in J2EE?
    - Once the frameworks are all in place and configured, is it faster to develop apps in J2EE than in .NET?
    - Are applications generally easier to maintain in J2EE than in ASP.NET?
    - Is it worth it for some developers to leave their J2EE knowledge to one side and move to ASP.NET if it does exactly the same thing?

    Read the article

  • What is a reasonable OSGi development workflow?

    - by levand
    I'm using OSGi for my latest project at work, and it's pretty beautiful as far as modularity and functionality go. But I'm not happy with the development workflow. Eventually I plan to have 30-50 separate bundles, arranged in a dependency graph -- supposedly, this is what OSGi is designed for. But I can't figure out a clean way to manage dependencies at compile time.

    Example: you have bundles A and B. B depends on packages defined in A. Each bundle is developed as a separate Java project. In order to compile B, A has to be on the javac classpath. Do you:

    1. Reference the file system location of project A in B's build script?
    2. Build A and throw the jar into B's lib directory?
    3. Rely on Eclipse's "referenced projects" feature and always use Eclipse's classpath to build (ugh)?
    4. Use a common "lib" directory for all projects and dump the bundle jars there after compilation?
    5. Set up a bundle repository, parse the manifest from the build script, and pull down the required bundles from the repository?

    No. 5 sounds the cleanest, but also like a lot of overhead.
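
    For what it's worth, a sketch of what No. 5 can look like with standard tooling, assuming a Maven build (the plugin coordinates are real; the package name is hypothetical): make each bundle a Maven module with <packaging>bundle</packaging>, let maven-bundle-plugin generate the OSGi manifest, and declare A as an ordinary <dependency> of B, so the local repository does the job of the bundle repository.

        <!-- in bundle A's pom.xml -->
        <plugin>
          <groupId>org.apache.felix</groupId>
          <artifactId>maven-bundle-plugin</artifactId>
          <extensions>true</extensions>  <!-- enables <packaging>bundle</packaging> -->
          <configuration>
            <instructions>
              <!-- hypothetical package exported by A -->
              <Export-Package>com.example.bundlea.api</Export-Package>
            </instructions>
          </configuration>
        </plugin>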

    Read the article

  • How do I make JPA POJO classes + Netbeans forms play well together?

    - by Zak
    I started using NetBeans to design forms to edit instances of various classes I have made in a small app I am writing. Basically, the app starts, an initial set of objects is selected from the DB and presented in a list, then an item in the list can be selected for editing. When the editor comes up, it has form fields for many of the data fields in the class.

    The problem I run into is that I have to create a controller that maps each of the data elements to the correct form element, plus an inordinate number of small conversion-mapping lines of code to convert numbers into strings and set the correct element in a dropdown, and then another inordinate amount of code to go back and update the underlying object with all the values from the form when the Save button is clicked.

    My question is: is there a more direct way to make editing the form directly modify the contents of my class instance? I would like to have a default mapping "controller" that I can configure, and then override the getter/setter for a particular field if needed. Ideally, there would be standard field validation for things like phone numbers, integers, floats, zip codes, etc. I'm not averse to writing this myself; I would just like to see if it is already out there, and use the right tool for the right job.
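
    One existing tool for the default-mapping part is Beans Binding (JSR 295), which NetBeans integrates; a sketch, with the property and widget names as placeholders (READ_WRITE syncs both directions, provided the entity fires PropertyChangeEvents):

        import javax.swing.JTextField;
        import org.jdesktop.beansbinding.AutoBinding;
        import org.jdesktop.beansbinding.BeanProperty;
        import org.jdesktop.beansbinding.Binding;
        import org.jdesktop.beansbinding.Bindings;

        static Binding bindField(Object entity, String property, JTextField field) {
            // e.g. bindField(person, "phoneNumber", phoneField)
            Binding binding = Bindings.createAutoBinding(
                    AutoBinding.UpdateStrategy.READ_WRITE,
                    entity, BeanProperty.create(property),  // entity property (assumed name)
                    field, BeanProperty.create("text"));    // the form widget's text
            binding.bind();
            return binding;
        }

    It does not do the validation part (phone numbers, zip codes, etc.) for you, but it removes most of the per-field copy-in/copy-out glue code.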

    Read the article

  • Tips for Using Multiple Development Systems

    - by Tim Lytle
    When I travel, I don't pack up the desktop I use in the office and take it with me. Maybe I should, but I don't. However, since I'm a contract programmer, I like to be able to work wherever I am; I'm mostly thinking of web development here. Version control goes a long way toward staying sane while working on multiple projects on multiple systems (two or three computers); however, there remain the issues of:

    - IDE settings: different display sizes mean the IDE settings can't be completely synced, if at all.
    - Database: if the database is 'external' (even if it's running on the same system, it's not in version control), how do you keep its structure in sync?
    - Development stack: some projects need non-standard extensions, libraries, etc. installed.

    That's just an overview of some of the hassle involved in developing on multiple systems. I'll probably end up asking some specific questions, but I thought CW-style tips might reveal things I wouldn't even think to ask about.

    Update: I guess this would also cover tips for making it easier to upgrade or replace your development system (something I've just done). So, one tip per answer please, so the 'top' tips are easy to find. How do you make it easier to develop on multiple systems, or to transfer work after upgrading/replacing a development system?

    Read the article

  • Is it possible in Perl to require that a subroutine call is made?

    - by MitchelWB
    I don't know enough about Perl to even know exactly what I'm asking for, but I'm writing a series of subroutines to be available to many individual scripts that all process different incoming flat files. The process is far from perfect, but it's what I've got to deal with, and I'm trying to build myself a small library of subs that makes it easier for me to manage it all.

    Each script handles a different incoming flat file with its own formatting, sorting, grouping, and output requirements. One common aspect is that we have small text files that house counters used to name the output files so that we have no duplicate file names. Because the processing of the data is different for each file, I need to open the counter file to get my counter value; since this is a common operation, I'd like to put it in a sub that retrieves the counter. I then need to write file-specific code to process the data, and I'd like a second sub that lets me update the counter file with the new counter once I've processed the data.

    Is there a way to make calling the second sub a requirement whenever the first one is called? Ideally it would even be an error that prevents the script from running at all, much like a syntax error.

    EDIT: Here is a little [ugly and simplified] pseudo-code to give a better feel for the current process:

        require "importLibrary.plx";

        # open data source file
        open DataIn, $filename;

        # call getCounterInfo from importLibrary.plx to get the counter value
        # from the counter file
        $counter = &getCounterInfo($counterFileName);

        while (<DataIn>) {
            # process data based on unique formatting and requirements
            # output to task files based on requirements, naming files with $counter
            # increment $counter
        }

        # update counter file with new value of $counter
        &updateCounterInfo($counter);
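
    Perl can't turn a missing call into a compile-time failure the way a syntax error is, but a sketch of one approximation: have getCounterInfo return a guard object instead of a bare number, and let its DESTROY complain loudly if updateCounterInfo was never reached (package and field names here are made up):

        package CounterGuard;
        use strict;
        use warnings;

        sub new {
            my ($class, $file, $value) = @_;
            return bless { file => $file, value => $value, saved => 0 }, $class;
        }

        sub value { return $_[0]{value} }

        sub update {                    # the "second sub", now hard to forget silently
            my ($self, $new_value) = @_;
            # ... write $new_value back to $self->{file} here ...
            $self->{saved} = 1;
            return;
        }

        sub DESTROY {
            my ($self) = @_;
            # die() inside DESTROY can behave oddly during global destruction,
            # so a loud warn may be the safer choice here.
            warn "counter for $self->{file} was never written back!\n"
                unless $self->{saved};
        }

        1;

    Usage would be: my $guard = getCounterInfo($counterFileName); use $guard->value while processing; then $guard->update($counter) at the end, with the missing call producing a runtime complaint instead of passing silently.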

    Read the article

  • Project management and bundling dependencies

    - by Joshua
    I've been looking for ways to learn about the right way to manage a software project, and I've stumbled upon the following blog post. I've learned some of the things mentioned the hard way, others make sense, and yet others are still unclear to me. To sum up, the author lists a bunch of features of a project and how much each contributes to a project's 'suckiness', for lack of a better term. You can find the full article here: http://spot.livejournal.com/308370.html

    In particular, I don't understand the author's stance on bundling dependencies with your project. These are:

        == Bundling ==
        Your source only comes with other code projects that it depends on [ +20 points of FAIL ]

    Why is this a problem (especially given the last point)?

        If your source code cannot be built without first building the bundled code bits [ +10 points of FAIL ]

    Doesn't this necessarily have to be the case for software built against third-party libs? Your code needs that other code to be compiled into its library before the linker can work.

        If you have modified those other bundled code bits [ +40 points of FAIL ]

    If this is necessary for your project, then it naturally follows that you've bundled said code with yours. If you want to customize a build of some lib, say wxWidgets, you'll have to edit that project's build scripts to build the library that you want. Subsequently, you'll have to publish those changes to people who wish to build your code, so why not use a high-level make script with the params already written in, and distribute that? Furthermore (especially in a Windows environment), if your code base is dependent on a particular version of a lib (that you also need to custom-compile for your project), wouldn't it be easier to give the user the code yourself, since in this case it is unlikely that the user will already have the correct version installed?

    So how would you respond to these comments, and what points might I be failing to take into consideration? Would you agree or disagree with the author's take (or mine), and why?

    Read the article

  • Building out a well-structured service layer

    - by Chris Stewart
    First, I want to say that it has been a while since I've gotten into the kind of detail I'm at currently. Lately I've been very much in the SharePoint world, and my entire thought process was focused there for quite some time. I'm very glad to be creating databases again, writing "lower level" code to deal with data access, and so forth.

    I'm working on a very simple web application and taking the opportunity to reacquaint myself with the way I used to structure my projects and various layers of code. For instance, I might have created something like this the last time I built something basic from scratch:

        MyProject/
            Domain/
                Impl/
                    Person
            Model/
                IPersonRepository
                Impl/
                    PersonRepository : IPersonRepository
            Services/
                IPersonService
                Impl/
                    PersonService : IPersonService

    That would have been the project I did the real work in, and then referenced from the ASP.NET project. My approach was very much inspired by what I saw in the CodeCampServer project, as at that time ASP.NET MVC was still very new and it was the only open project I could find that was actively developed, and by solid people at that.

    How are you structuring your projects and code when it comes to a general problem you're working on? Certainly various problems can put constraints on this, but assume it's a basic problem without specific needs that affect the structure and layout of your code.

    Read the article

  • Auto scale and rotate images

    - by Dave Jarvis
    Given:

    - two images of the same subject matter;
    - the images have the same resolution, colour depth, and file format;
    - the images differ in size and rotation; and
    - two lists of (x, y) co-ordinates that correlate the images.

    I would like to know:

    1. How do you transform the larger image so that it visually aligns with the second image?
    2. (Optional.) What is the minimum number of points needed to get an accurate transformation?
    3. (Optional.) How far apart do the points need to be to get an accurate transformation?

    The transformation would need to rotate, scale, and possibly shear the larger image. Essentially, I want to create (or find) a program that does the following:

    1. Input two images (e.g., TIFFs).
    2. Click several anchor points on the small image.
    3. Click the corresponding anchor points on the large image.
    4. Transform the large image such that it maps to the small image by aligning the anchor points.

    This would help align pictures of the same stellar object. (For example, a hand-drawn picture from 1855 mapped to a photograph taken by Hubble in 2000.) Many thanks in advance for any algorithms (preferably Java or similar pseudo-code), ideas, or links to related open-source software packages.
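
    Since the transformation must rotate, scale, and possibly shear (plus translate), it is an affine transform: x' = a*x + b*y + tx and y' = c*x + d*y + ty. That is six unknowns, and each point pair supplies two equations, so the minimum is three non-collinear pairs; with more pairs you solve the overdetermined system by least squares, and the farther apart the anchor points are, the less any click error gets amplified. A plain-Java sketch for the exact three-pair case (no libraries; src and dst each hold three {x, y} pairs):

        // Recover x' = a*x + b*y + tx, y' = c*x + d*y + ty from three point
        // pairs by Cramer's rule; returns {a, b, tx, c, d, ty}.
        static double[] affineFrom3Pairs(double[][] src, double[][] dst) {
            double x1 = src[0][0], y1 = src[0][1];
            double x2 = src[1][0], y2 = src[1][1];
            double x3 = src[2][0], y3 = src[2][1];
            // det == 0 means the source points are collinear: no unique fit.
            double det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2);
            double[] r = new double[6];
            for (int row = 0; row < 2; row++) {   // row 0 -> a,b,tx; row 1 -> c,d,ty
                double p1 = dst[0][row], p2 = dst[1][row], p3 = dst[2][row];
                r[3 * row]     = (p1 * (y2 - y3) - y1 * (p2 - p3) + (p2 * y3 - p3 * y2)) / det;
                r[3 * row + 1] = (x1 * (p2 - p3) - p1 * (x2 - x3) + (x2 * p3 - x3 * p2)) / det;
                r[3 * row + 2] = (x1 * (y2 * p3 - y3 * p2) - y1 * (x2 * p3 - x3 * p2) + p1 * (x2 * y3 - x3 * y2)) / det;
            }
            return r;
        }

    The six numbers drop straight into java.awt.geom.AffineTransform(a, c, b, d, tx, ty) for rendering the warped image.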

    Read the article

  • Developer friendly open-source license?

    - by Francisco Garcia
    As a software engineer/programmer myself, I love the possibility of downloading code and learning from it. However, building software is what brings food to my table. I have doubts regarding the type of license I should use for my own personal projects, or when picking a project to learn from. There are already many questions about licenses on Stack Overflow, but I would like to make this one much more specific.

    If your main profession and way of living is building software, which type of license do you find more useful for you? I mean the license that can benefit you most as a professional, because it gives you more freedom to reuse the experience you gain.

    GPL is a great license for building communities because it forces you to give back your work. However, I like BSD licenses because of their extra freedom. I know that if the code I am exploring is BSD-licensed, I might be able to expand not only my skills but also my programmer's toolbox: whenever I am working for a company, I might recall that something similar was done in another project, and I will be able to copy or imitate a certain part of the code.

    I know that there are religious wars regarding GPL vs BSD, and it is not my intention to start one; probably many companies take snippets from GPL projects anyway. I just want to insist on the factor of professional enrichment. I do not intend to discriminate against any license. I said I prefer BSD licenses, but I also use Linux because the user base is bigger, and so is the market demand.

    Read the article

  • Am I understanding premature optimization correctly?

    - by Ed Mazur
    I've been struggling with an application I'm writing, and I think I'm beginning to see that my problem is premature optimization. The perfectionist side of me wants to make everything optimal and perfect the first time through, but I'm finding this complicates the design quite a bit. Instead of writing small, testable functions that do one simple thing well, I lean toward cramming in as much functionality as possible in order to be more efficient. For example, I'm avoiding multiple trips to the database for the same piece of information, at the cost of my code becoming more complex.

    One part of me wants to just not worry about redundant database calls. It would make it easier to write correct code, and the amount of data being fetched is small anyway. The other part of me feels very dirty and unclean doing this. :-) I'm leaning toward just going to the database multiple times, which I think is the right move here. It's more important that I finish the project, and I feel like I'm getting hung up because of optimizations like this.

    My question is: is this the right strategy to be using to avoid premature optimization?

    Read the article

  • Can this code cause a memory leak (Arduino)

    - by tbraun89
    I have an Arduino project and I created this struct:

        struct Project {
            boolean status;
            String name;
            struct Project* nextProject;
        };

    In my application I parse some data and create Project objects. To keep them in a list, each Project object except the last holds a pointer to the nextProject. This is the code where I add new projects:

        void RssParser::addProject(boolean tempProjectStatus, String tempData) {
            if (!startProject) {
                startProject = true;
                firstProject.status = tempProjectStatus;
                firstProject.name = tempData;
                firstProject.nextProject = NULL;
                ptrToLastProject = &firstProject;
            } else {
                ptrToLastProject->nextProject = new Project();
                ptrToLastProject->nextProject->status = tempProjectStatus;
                ptrToLastProject->nextProject->name = tempData;
                ptrToLastProject->nextProject->nextProject = NULL;
                ptrToLastProject = ptrToLastProject->nextProject;
            }
        }

    firstProject is a private instance variable, defined in the header file like this:

        Project firstProject;

    So if no project has been added yet, I use firstProject to add the new one; if firstProject is set, I use the nextProject pointer. I also have a reset() method that deletes the pointers to the projects:

        void RssParser::reset() {
            delete ptrToLastProject;
            delete firstProject.nextProject;
            startProject = false;
        }

    After each parsing run I call reset(); the problem is that the memory used is not released. If I comment out the addProject method, there are no memory issues. Can someone tell me what could cause the memory leak?
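
    For comparison, a sketch of a reset() that frees the whole chain: the version above deletes at most two nodes, so every node between firstProject.nextProject and the last one leaks, and when only one project was ever added, ptrToLastProject points at firstProject, a member variable that was never allocated with new and must not be passed to delete.

        void RssParser::reset() {
            Project* node = firstProject.nextProject;   // first heap-allocated node
            while (node != NULL) {
                Project* next = node->nextProject;      // save before freeing
                delete node;
                node = next;
            }
            firstProject.nextProject = NULL;
            ptrToLastProject = NULL;
            startProject = false;
        }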

    Read the article

  • Debugged Program Window Won't Close

    - by Marc Bernier
    Hi, I'm using VS 2008 on a 64-bit XP machine. I'm debugging a 32-bit C++ DLL via a console program. The DLL and EXE projects are contained in the same SLN so that I can modify the DLL as I test.

    Every once in a while I kill the program with Debug | Stop Debugging (Shift+F5). VS stops the program, but the console window stays open! If I'm sitting at a breakpoint and hit Shift+F5, it will terminate properly, but if the program is running full-tilt when I stop it, I often see this instead. The big problem is that I can't close these zombie windows. Using End Task in Task Manager does nothing (no message, no nothing). When I shut down the machine, it can't complete the shutdown because of the orphans, and I have to resort to actually turning off the power.

    I think this is connected to having the DLL and EXE projects in the same SLN, as for months I worked on this project in two VS instances, one for the DLL and the other for the EXE, continually jumping back and forth between the windows as I worked. This problem never happened until I put the two projects into a single SLN. The single SLN works a lot better, but this anomaly is very irritating. Any ideas, anyone?

    UPDATE: After a bit of searching (here), I found that it appears to be related to one of the updates from last Tuesday (KB977165 or KB978037). Thank you, Microsoft, for your excellent pre-release testing.

    Read the article

  • LINQ Joins - Performance

    - by Meiscooldude
    I am curious how exactly LINQ (not LINQ to SQL) performs its joins behind the scenes, in relation to how SQL Server performs joins.

    Before executing a query, SQL Server generates an execution plan. The execution plan is basically an expression tree describing what it believes is the best way to execute the query; each node provides information on whether to do a sort, scan, select, join, etc. On a 'Join' node in our execution plan, we can see three possible algorithms: Hash Join, Merge Join, and Nested Loops Join. SQL Server chooses an algorithm for each join operation based on the expected number of rows in the inner and outer tables, the type of join we are doing (some algorithms don't support all types of joins), whether we need the data ordered, and probably many other factors.

    Join algorithms:

    - Nested loops join: best for small inputs; can be optimized with an ordered inner table.
    - Merge join: best for medium to large sorted inputs, or for output that needs to be ordered.
    - Hash join: best for medium to large inputs; can be parallelized to scale linearly.

    LINQ query:

        DataTable firstTable, secondTable;
        ...
        var rows = from firstRow in firstTable.AsEnumerable()
                   join secondRow in secondTable.AsEnumerable()
                   on firstRow.Field<object>(randomObject.Property)
                   equals secondRow.Field<object>(randomObject.Property)
                   select new { firstRow, secondRow };

    SQL query:

        SELECT *
        FROM firstTable fT
        INNER JOIN secondTable sT ON fT.Property = sT.Property

    SQL Server might use a nested loops join if it knows there are a small number of rows in each table, a merge join if it knows one of the tables has an index, and a hash join if it knows there are a lot of rows in either table and neither has an index.

    Does LINQ choose its algorithm for joins, or does it always use one?
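
    For reference, a sketch of the strategy LINQ to Objects uses: it has no statistics to consult, so Enumerable.Join always does the equivalent of a hash join, building a hash-based Lookup over the inner sequence and streaming the outer one against it. Roughly (this illustrates the approach, not the framework's exact code; "Property" stands in for randomObject.Property):

        // build side: hash the inner table's keys
        var lookup = secondTable.AsEnumerable()
                                .ToLookup(s => s.Field<object>("Property"));

        // probe side: stream the outer table and look up matches
        var rows = firstTable.AsEnumerable()
                             .SelectMany(f => lookup[f.Field<object>("Property")],
                                         (f, s) => new { firstRow = f, secondRow = s });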

    Read the article

  • context.scale() with non-aspect-ratio-preserving parameters screws effective lineWidth

    - by rrenaud
    I am trying to apply a natural transformation whereby the x axis is remapped to some very small domain, like 0 to 1, whereas y is remapped to a small but substantially larger domain, like 0 to 30. This way, drawing code can be nice and clean and care only about model space. However, if I apply a scale, then lines are also scaled, which means that horizontal lines become extremely fat relative to vertical ones.

    Here is some sample code. When natural_width is much smaller than natural_height, the picture doesn't look as intended. I want the picture to look like this, which is what happens with a scale that preserves aspect ratio: rftgstats.com/canvas_good.png. However, with a non-aspect-ratio-preserving scale, the results look like this: rftgstats.com/canvas_bad.png.

        <html><head><title>Busted example</title></head>
        <body>
        <canvas id=example height=300 width=300>
        <script>
        var canvas = document.getElementById('example');
        var ctx = canvas.getContext('2d');
        var natural_width = 10;
        var natural_height = 50;
        ctx.scale(canvas.width / natural_width,
                  canvas.height / natural_height);
        var numLines = 20;
        ctx.beginPath();
        for (var i = 0; i < numLines; ++i) {
            ctx.moveTo(natural_width / 2, natural_height / 2);
            var angle = 2 * Math.PI * i / numLines;
            // yay for screen size independent draw calls.
            ctx.lineTo(natural_width / 2 + natural_width * Math.cos(angle),
                       natural_height / 2 + natural_height * Math.sin(angle));
        }
        ctx.stroke();
        ctx.closePath();
        </script>
        </body>
        </html>
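
    A sketch of one common fix: the canvas bakes the current transform into path points as they are added, so build the path while scaled, then restore the identity transform before stroking; the stroke width then stays uniform in screen pixels.

        ctx.save();
        ctx.scale(canvas.width / natural_width, canvas.height / natural_height);
        // ... beginPath / moveTo / lineTo in model coordinates, as above ...
        ctx.restore();   // back to unscaled screen space
        ctx.stroke();    // lineWidth is now measured in screen pixels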

    Read the article

  • Handling multiple media queries in Sass with Twitter Bootstrap

    - by Keith
    I have a Sass mixin for my media queries, based on Twitter Bootstrap's responsive media queries:

        @mixin respond-to($media) {
          @if $media == handhelds {
            /* Landscape phones and down */
            @media (max-width: 480px) { @content; }
          }
          @else if $media == small {
            /* Landscape phone to portrait tablet */
            @media (max-width: 767px) { @content; }
          }
          @else if $media == medium {
            /* Portrait tablet to landscape and desktop */
            @media (min-width: 768px) and (max-width: 979px) { @content; }
          }
          @else if $media == large {
            /* Large desktop */
            @media (min-width: 1200px) { @content; }
          }
          @else {
            @media only screen and (max-width: #{$media}px) { @content; }
          }
        }

    And I call it throughout my SCSS file like so:

        .link {
          color: blue;
          @include respond-to(medium) { color: red; }
        }

    However, sometimes I want to apply the same styles to multiple queries. Right now I'm doing it like this:

        .link {
          color: blue; /* this is fine for handheld and small sizes */
          /* now I want to change the styles that cascade to medium and large */
          @include respond-to(medium) { color: red; }
          @include respond-to(large) { color: red; }
        }

    but I'm repeating code, so I'm wondering if there is a more concise way to write it so I can target multiple queries. Something like this, so I don't need to repeat my code (I know this doesn't work):

        @include respond-to(medium, large) { color: red; }

    Any suggestions on the best way to handle this?
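
    A sketch of one way to get exactly that call shape, using Sass variable arguments and @each to wrap the existing mixin (the wrapper's name is made up):

        @mixin respond-to-all($medias...) {
          @each $media in $medias {
            @include respond-to($media) { @content; }
          }
        }

        // usage: one block, several breakpoints
        .link {
          color: blue;
          @include respond-to-all(medium, large) { color: red; }
        }

    The output still contains one @media block per breakpoint (CSS has no way to merge them), but the source is written only once.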

    Read the article

  • Can I use foreign key restrictions to return meaningful UI errors with PHP

    - by Shane
    I want to start by saying that I am a big fan of using foreign keys and have a tendency to use them even on small projects to keep my database from being filled with orphaned data. On larger projects I end up with gobs of keys which end up covering upwards of 8 - 10 layers of data. I want to know if anyone could suggest a graceful way of handling 'expected errors' from the MySQL database in a way that I can construct meaningful messages for the end user. I will explain 'expected errors' with an example. Lets say I have a set of tables used for basic discussions: discussion questions responses users Hierarchically they would probably look something like this: -users --discussion ---questions ----responses When I attempt to delete a user the FKs will check discussions and if any discussion exist the deletion is restricted, deleting discussion checks questions, deleting questions checks responses. An 'expected error' in this case would be attempting to delete a user--unless they are newly created I can anticipate that one or more foreign keys will fail causing an error. What I WANT to do is to catch that error on deletion and be able to tell the end user something like 'We're sorry, but all discussions must be removed before you can delete this user...'. Now I know I can keep and maintain matching arrays in PHP and map specific errors to messages but that is messy and prone to becoming stagnant, or I could manually run a set of selects prior to attempting the deletion, but then I am doing just as much work as without using FKs. Any help here would be greatly appreciated, or if I am just looking at this completely wrong then please let me know. On a side note I generally use CodeIgniter for my application development, so if that would open up an avenue through that framework please consider that in your answers. Thanks in Advance

    Read the article

  • Java "Pool" of longs or Oracle sequence with reusable values

    - by Anthony Accioly
    Several months ago I implemented a solution for choosing unique values from the range 1 to 65535 (16 bits). This range is used to generate unique Route Target suffixes, which for this customer's massive network (it's a huge ISP) are a hotly disputed resource, so any freed index needs to become immediately available to the end user.

    To tackle this requirement I used a BitSet: allocate an RT index with set, and deallocate a suffix with clear. The method nextClearBit() finds the next available index, and I handle synchronization/concurrency issues manually. This works pretty well for a small range: the entire index is small (around 10 KB), it is blazing fast, and it can easily be serialized into a Blob field.

    The problem is that some new devices can handle RTs of 32 bits (range 1 to 4294967296), which can't be managed with a BitSet (it would, by itself, consume around 600 MB, besides being limited to the int range). Even with this massive range available, the client still wants freed Route Targets to become available again, mainly because the lowest ones (up to 65535), which are compatible with old routers, are heavily disputed.

    Before I tell the customer that this is impossible and that he will have to make do with my reusable index for the lower RTs (up to 65535) and a database sequence for the other ones (which means that when a user frees such a Route Target, it will not become available again): would anyone shed some light? Maybe some kind soul has already implemented a high-performance number pool for Java (6, if it matters), or I am missing a killer feature of the Oracle database (11gR2, if it matters)... Wishful thinking. Thank you very much in advance.
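
    A sketch of one such pool (names are made up): keep only the freed values in an ordered set plus a high-water mark, so memory is proportional to the number of currently-freed suffixes rather than to the 2^32 range, and the lowest freed (old-router-compatible) values are always handed out first.

        import java.util.TreeSet;

        class SuffixPool {
            private static final long MAX = 4294967295L;     // 2^32 - 1
            private final TreeSet<Long> freed = new TreeSet<Long>();
            private long highWater = 0;                      // highest value ever issued

            synchronized long allocate() {
                if (!freed.isEmpty()) {
                    return freed.pollFirst();                // reuse the lowest freed value
                }
                if (highWater >= MAX) {
                    throw new IllegalStateException("suffix pool exhausted");
                }
                return ++highWater;
            }

            synchronized void release(long value) {
                if (value > 0 && value <= highWater) {
                    freed.add(value);
                }
            }
        }

    Like the BitSet version, it would need to be persisted (e.g. serializing the freed set and high-water mark to a Blob) and guarded against double-release, but the working set stays tiny even over the full 32-bit range.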

    Read the article

  • Why does mobile first responsive design tend to not use max-width queries alongside the min-width queries?

    - by Sam
    First off, I understand the basic principles behind mobile-first responsive web design, and I totally agree with them. But one thing I don't understand: in my experience, not all styles for small screens can be used in the larger version of a website. For example, smaller versions tend to have larger clickable areas, hamburger navigation, etc. So I sometimes have to override these specific styles, aside from just progressively enhancing the base styles.

    So I was wondering: why is max-width rarely mentioned (or used) in the context of mobile-first responsive web design? It looks like it could be used to isolate styles for smaller screens that are not useful for larger screens, and would thus prevent unnecessary duplication of code. A quote that mentions min-width as typically mobile-first, but not max-width:

        Mobile first, from a coding perspective, means that your base style is
        typically a single-column, fully-fluid layout. You use
        @media (min-width: whatever) to add a grid-based layout on top of that.

    from: http://gomakethings.com/mobile-first-and-internet-explorer/

    EDIT: To be more specific: I was wondering whether there is a reason to exclude max-width from a mobile-first responsive design, as it seems like it can be useful for writing your CSS as DRY as possible, given that some styles for small screens will not be used on bigger screens.
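
    A sketch of the pattern being asked about, with illustrative selectors: small-screen-only rules isolated behind max-width so they never need undoing later, alongside the usual min-width enhancements.

        /* base: mobile-first fluid styles that apply everywhere */
        .nav-toggle { display: none; }

        /* small screens only: never has to be overridden at larger sizes */
        @media (max-width: 767px) {
          .nav-toggle { display: block; }   /* hamburger button */
          .nav a { padding: 1em; }          /* larger tap targets */
        }

        /* progressive enhancement, as in the quoted approach */
        @media (min-width: 768px) {
          .nav { float: left; width: 75%; }
        }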

    Read the article

  • How to access a web service behind a NAT?

    - by jr
    We have a product we are deploying to some small businesses. It is basically a RESTful API over SSL using Tomcat. It is installed on the server at the small business and is accessed via an iPhone or other portable device, so the devices connecting to the server could come from any number of IP addresses.

    The problem comes with installation. When we install this service, setting up port forwarding (so the outside world can gain access to Tomcat) always seems to become a problem: most of the time the owner doesn't know the router password, etc., etc. I am trying to research other ways we can accomplish this. I've come up with the following and would like to hear other thoughts on the topic.

    1. Set up an SSH tunnel from each client office to a central server. Basically, the remote devices would connect to that central server on a port, and the traffic would be tunneled back to Tomcat in the office. It seems kind of redundant to have SSH and then SSL, but there's really no other way to accomplish it, since end-to-end I need SSL (from device to office). I'm not sure of the performance implications here, but I know it would work. We would need to monitor the tunnel and bring it back up if it goes down, handle SSH key exchanges, etc.
    2. Set up UPnP to try to configure the hole for us. It would likely work most of the time, but UPnP isn't guaranteed to be turned on. It may be a good next step.
    3. Come up with some type of NAT traversal scheme. I'm just not familiar with these and uncertain of how exactly they work.

    We have access to a centralized server, which is required for authentication, if that makes anything easier. What else should I be looking at to get this accomplished?
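
    For reference, a sketch of what option 1 looks like on the wire (host names and ports are placeholders). Run from the office server, it keeps a reverse tunnel open so the central server's port 8443 reaches the office Tomcat; since the tunnel just forwards raw TCP, SSL still terminates at the office Tomcat, end to end.

        # -N: no remote command, -R: reverse forward remote:8443 -> office:8443
        ssh -N -R 8443:localhost:8443 tunnel@central.example.com
        # devices then connect to https://central.example.com:8443/

    Two operational notes: sshd on the central server needs GatewayPorts enabled for outside devices to reach the forwarded port, and a supervisor such as autossh is the usual way to restart the tunnel when it drops.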

    Read the article

  • Testing When Correctness is Poorly Defined?

    - by dsimcha
    I generally try to use unit tests for any code that has easily defined correct behavior given some reasonably small, well-defined set of inputs. This works quite well for catching bugs, and I do it all the time in my personal library of generic functions.

    However, a lot of the code I write is data-mining code that basically looks for significant patterns in large datasets. Correct behavior in this case is often not well defined and depends on a lot of different inputs in ways that are not easy for a human to predict (i.e., the math can't reasonably be done by hand, which is why I'm using a computer to solve the problem in the first place). These inputs can be very complex, to the point where coming up with a reasonable test case is nearly impossible. Identifying the edge cases that are worth testing is extremely difficult. Sometimes the algorithm isn't even deterministic.

    Usually, I do the best I can by using asserts as sanity checks, creating a small toy test case with a known pattern, and informally seeing whether the answer at least "looks reasonable", without it necessarily being objectively correct. Is there any better way to test these kinds of cases?

    Read the article

  • NHibernate + Fluent long startup time

    - by PaRa
    Hi all, I am new to NHibernate. The test below took 11.2 seconds (debug mode), and I am seeing this large startup time in all my tests: basically, creating the first session takes a ton of time.

    Setup: Windows 2003 SP2 / Oracle 10gR2 latest CPU / ODP.net 2.111.7.20 / FNH 1.0.0.636 / NHibernate 2.1.2.4000 / NUnit 2.5.2.9222 / VS2008 SP1.

        using System;
        using System.Collections;
        using System.Collections.Generic;
        using System.Configuration;
        using System.Data;
        using System.Data.Common;
        using System.Globalization;
        using System.IO;
        using System.Text;
        using FluentNHibernate;
        using log4net.Config;
        using NHibernate;
        using NUnit.Framework;

        [Test()]
        public void GetEmailById()
        {
            Email result;
            using (EmailRepository repository = new EmailRepository())
            {
                result = repository.GetById(1111);
            }
            Assert.IsTrue(result != null);
        }

        public class EmailRepository : RepositoryBase<Email>
        {
            public EmailRepository() : base() { }
        }

    In my RepositoryBase:

        public T GetById(object id)
        {
            using (var session = sessionFactory.OpenSession())
            using (var transaction = session.BeginTransaction())
            {
                try
                {
                    T returnVal = session.Get<T>(id);
                    transaction.Commit();
                    return returnVal;
                }
                catch (HibernateException ex)
                {
                    // logging here
                    transaction.Rollback();
                    return null;
                }
            }
        }

    The query time is very small, the resulting entity is really small, and subsequent queries are fine. It seems to be getting the first session started. Has anyone else seen something similar?
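
    For reference, a sketch of the usual remedy (helper, mapping, and connection-string names here are assumptions): building the ISessionFactory is the expensive one-time cost, so build it once per process and share it, rather than letting each repository instance configure its own.

        using FluentNHibernate.Cfg;
        using FluentNHibernate.Cfg.Db;
        using NHibernate;

        public static class NHibernateHelper
        {
            // built once, on first use; every repository borrows this factory
            private static readonly ISessionFactory Factory =
                Fluently.Configure()
                        .Database(OracleClientConfiguration.Oracle10   // adjust to your FNH version
                            .ConnectionString(c => c.FromConnectionStringWithKey("Main")))
                        .Mappings(m => m.FluentMappings.AddFromAssemblyOf<EmailRepository>())
                        .BuildSessionFactory();

            public static ISession OpenSession()
            {
                return Factory.OpenSession();   // cheap once the factory exists
            }
        }

    In a test run, the first test still pays the factory cost once, but it is no longer paid per repository or per test.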

    Read the article
