Search Results

Search found 4557 results on 183 pages for 'prefer'.

Page 149/183 | < Previous Page | 145 146 147 148 149 150 151 152 153 154 155 156  | Next Page >

  • Ideas for a rudimentary software licensing implementation

    - by Ross
    I'm trying to decide how to implement a very basic licensing solution for some software I wrote. The software will run on my (hypothetical) clients' machines, with the idea being that the software will immediately quit (with a friendly message) if the client is running it on greater-than-n machines (n being the number of licenses they have purchased). Additionally, the clients are non-tech-savvy to the point where "basic" is good enough. Here is my current design, but given that I have little to no experience in the topic, I wanted to ask SO before I started any development on it:
    - A remote server hosts a MySQL database with a table containing two columns: client-key and license quantity.
    - The client-side application connects to the MySQL database on startup, offering its client-key that I've put into a properties file packaged into the distribution (I would create a new distribution for each new client).
    - Chances are, I'll need a second table to store validation history, so that with some short logic, the software can decide if it can be run on a given machine (maybe a sliding window of n machines using the software per 24 hours).
    - If the software cannot establish a connection to the MySQL database, or decides that it's over the n allowed machines per day, it closes.
    - The connection info for the remote server hosting the MySQL database should be hard-coded into the app? (That sounds like a bad idea, but otherwise they could point it to some other always-validates-to-success server.)
    I think that about covers my initial design. The intent being that while it certainly isn't foolproof, I think I've made it at least somewhat difficult to create an easily-sharable cracking solution. Also, I can easily adjust the license amount for a given client/key pair. I gotta figure this has been done a million times before, so tell me about a better solution that's just as simple to implement and provides the same (low) amount of security. In the event that external libraries are used, I prefer Java, as that's what the software has been written in.

    Read the article

  • C# style Action<T>, Func<T,T>, etc in C++0x

    - by Austin Hyde
    C# has generic function types such as Action<T> or Func<T,U,V,...>. With the advent of C++0x and the ability to have template typedefs and variadic template parameters, it seems this should be possible. The obvious solution to me would be this:

        template <typename T> using Action<T> = void (*)(T);

    However, this does not accommodate functors or C++0x lambdas, and beyond that, does not compile, with the error "expected unqualified-id before 'using'". My next attempt was to perhaps use boost::function:

        template <typename T> using Action<T> = boost::function<void (T)>;

    This doesn't compile either, for the same reason. My only other idea would be STL style template arguments:

        template <typename T, typename Action> void foo(T value, Action f) { f(value); }

    But this doesn't provide a strongly typed solution, and is only relevant inside the templated function. Now, I will be the first to admit that I am not the C++ wiz I prefer to think I am, so it's very possible there is an obvious solution I'm not seeing. Is it possible to have C# style generic function types in C++?

    Read the article

  • How to make Resig's micro-templates XHTML compliant?

    - by mshelv
    Hello, I have been experimenting with John Resig's micro-template, which works great. However, the mark-up will not pass the XHTML 1.0 Transitional validation test. (Among other things, id attributes yield errors.) Replacing the tag delimiters with [[ and ]] passes validation. Thus, I created a js script which at load time (jQuery document ready) converts the square brackets back to regular markers. This works fine in FF, but not in IE, Chrome, etc. Scripts embedded within CDATA tags validate as well. Question: Is there a way to insert a micro-template in a script and still pass XHTML validation? My idea was to remove the CDATA tags once the page has been loaded. But there are probably smarter ways. (Note: I'd prefer not to inject HTML via js since the mark-up will be difficult to maintain.) PS: I looked at other js templates, but they are either not XHTML compliant or too bulky. TIA for any hints.

    Read the article

  • rails i18n - translating text with links inside.

    - by egarcia
    Hi there! I'd like to i18n a text that looks like this: Already signed up? Log in! Note that there is a link in the text. In this example it points to Google - in reality it will point to my app's log_in_path. I've found two ways of doing this, but none of them looks "right". The first way I know involves having this in my en.yml:

        log_in_message: "Already signed up? <a href='{{url}}'>Log in!</a>"

    And in my view:

        <p> <%= t('log_in_message', :url => login_path) %> </p>

    This works, but having the <a href=...</a> part in the en.yml doesn't look very clean to me. The other option I know is using localized views - login.en.html.erb and login.es.html.erb. This also doesn't feel right since the only different line would be the aforementioned one; the rest of the view (~30 lines) would be repeated for all views. It would not be very DRY. I guess I could use "localized partials" but that seems too cumbersome; I think I prefer the first option to having so many tiny view files. So my question is: is there a "proper" way to implement this?

    Read the article

  • Spring MVC: How to resolve the path to subdirectories of the root 'JSP' folder in a web application

    - by chrisjleu
    What is a simple way to resolve the path to a JSP file that is not located in the root JSP directory of a web application, using Spring MVC's view resolvers? For example, suppose we have the following web application structure:

        web-app
          |- WEB-INF
             |- jsp
                |- secure
                   |- admin.jsp
                   |- admin2.jsp
                index.jsp
                login.jsp

    I would like to use some out-of-the-box components to resolve the JSP files within the jsp root folder and the secure subdirectory. I have a *-servlet.xml file that defines: an out-of-the-box InternalResourceViewResolver:

        <bean id="jspViewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver">
          <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"></property>
          <property name="prefix" value="/WEB-INF/jsp/"></property>
          <property name="suffix" value=".jsp"></property>
        </bean>

    a handler mapping:

        <bean id="handlerMapping" class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">
          <property name="mappings">
            <props>
              <prop key="/index.htm">urlFilenameViewController</prop>
              <prop key="/login.htm">urlFilenameViewController</prop>
              <prop key="/secure/**">urlFilenameViewController</prop>
            </props>
          </property>
        </bean>

    and an out-of-the-box UrlFilenameViewController controller:

        <bean id="urlFilenameViewController" class="org.springframework.web.servlet.mvc.UrlFilenameViewController">
        </bean>

    The problem I have is that requests to the JSPs in the secure directory cannot be resolved, as the jspViewResolver only has a prefix defined as /jsp/ and not /jsp/secure/. Is there a way to handle subdirectories like this? I would prefer to keep this structure because I'm also trying to make use of Spring Security, and having all secure pages in a subdirectory is a nice way to do this. There's probably a simple way to achieve this, but I'm new to Spring and the Spring MVC framework, so any pointers would be appreciated.

    Read the article

  • Default template parameters with forward declaration

    - by Seth Johnson
    Is it possible to forward declare a class that uses default arguments without specifying or knowing those arguments? For example, I would like to declare a boost::ptr_list< TYPE > in a Traits class without dragging the entire Boost library into every file that includes the traits. I would like to declare

        namespace boost { template<class T> class ptr_list< T >; }

    but that doesn't work, because it doesn't exactly match the true class declaration:

        template < class T, class CloneAllocator = heap_clone_allocator, class Allocator = std::allocator<void*> >
        class ptr_list { ... };

    Are my options only to live with it, or to specify boost::ptr_list< TYPE, boost::heap_clone_allocator, std::allocator<void*> > in my traits class? (If I use the latter, I'll also have to forward declare boost::heap_clone_allocator and include <memory>, I suppose.) I've looked through Stroustrup's book, SO, and the rest of the internet and haven't found a solution. Usually people are concerned about not including STL, and the solution is "just include the STL headers." However, Boost is a much more massive and compiler-intensive library, so I'd prefer to leave it out unless I absolutely have to.

    Read the article

  • FLEX: the custom component is still a Null Object when I invoke its method

    - by Patrick
    Hi, I've created a custom component in Flex, and I instantiate it from the main application with ActionScript. Then I invoke its "setName" method to pass a String. I get the following run-time error (occurring only if I use the setName method): TypeError: Error #1009: Cannot access a property or method of a null object reference. I guess I get it because I'm calling the newUser.setName method from the main application before the component is completely created. How can I make the ActionScript "wait" until the component is created before calling the method? Should I create an event listener in the main application waiting for it? I would prefer to avoid that if possible. Here is the code. Main app:

        ...
        newUser = new userComp();
        //newUser.setName("name");

    Component:

        <?xml version="1.0" encoding="utf-8"?>
        <mx:VBox xmlns:mx="http://www.adobe.com/2006/mxml" width="100" height="200" >
          <mx:Script>
            <![CDATA[
              public function setName(name:String):void {
                username.text = name;
              }
              public function setTags(Tags:String):void {
              }
            ]]>
          </mx:Script>
          <mx:HBox id="tagsPopup" visible="false">
            <mx:LinkButton label="Tag1" />
            <mx:LinkButton label="Tag2" />
            <mx:LinkButton label="Tag3" />
          </mx:HBox>
          <mx:Image source="@Embed(source='../icons/userIcon.png')"/>
          <mx:Label id="username" text="Nickname" visible="false"/>
        </mx:VBox>

    thanks

    Read the article

  • Single Responsibility Principle usage: how can I call the sub-method correctly?

    - by Phsika
    I'm trying to learn the SOLID principles. I've written the same code in two styles:
    1) Single Responsibility Principle_2.cs: if you look at the main program, all instances are created from the interfaces.
    2) Single Responsibility Principle_3.cs: if you look at the main program, all instances are created from the concrete classes.
    My question: which one is correct usage? Which one should I prefer?

        namespace Single_Responsibility_Principle_2
        {
            class Program
            {
                static void Main(string[] args)
                {
                    IReportManager raporcu = new ReportManager();
                    IReport wordraporu = new WordRaporu();
                    raporcu.RaporHazirla(wordraporu, "data");
                    Console.ReadKey();
                }
            }

            interface IReportManager
            {
                void RaporHazirla(IReport rapor, string bilgi);
            }

            class ReportManager : IReportManager
            {
                public void RaporHazirla(IReport rapor, string bilgi)
                {
                    rapor.RaporYarat(bilgi);
                }
            }

            interface IReport
            {
                void RaporYarat(string bilgi);
            }

            class WordRaporu : IReport
            {
                public void RaporYarat(string bilgi)
                {
                    Console.WriteLine("Word Raporu yaratildi:{0}", bilgi);
                }
            }

            class ExcellRaporu : IReport
            {
                public void RaporYarat(string bilgi)
                {
                    Console.WriteLine("Excell raporu yaratildi:{0}", bilgi);
                }
            }

            class PdfRaporu : IReport
            {
                public void RaporYarat(string bilgi)
                {
                    Console.WriteLine("pdf raporu yaratildi:{0}", bilgi);
                }
            }
        }

    In the second one, all instances are created from the concrete classes:

        namespace Single_Responsibility_Principle_3
        {
            class Program
            {
                static void Main(string[] args)
                {
                    WordRaporu word = new WordRaporu();
                    ReportManager manager = new ReportManager();
                    manager.RaporHazirla(word, "test");
                }
            }

            // IReportManager, ReportManager, IReport, WordRaporu, ExcellRaporu and
            // PdfRaporu are identical to the first listing.
        }

    Read the article

  • Automatic web form testing/filling

    - by Polatrite
    I recently became lead on getting an inordinate amount of testing done in a very short period of time. We have many different web forms, using custom (Telerik) controls that need to be tested for proper data validation and sensible handling of the data. Some of the forms are several pages long with 30-80 different controls for data entry. I am looking for a software solution (that is free) that would allow me to automate the process of filling in these forms by designing a script, or using a UI. The other requirement is that I can't use any browsers but IE6 (terrible, I know). I have previously used AutoHotkey to great success for automatic Windows form testing, since Autohotkey's API allows you to directly reference controls on the Windows form. However Autohotkey does not have similar support for web forms (everything is just one big "InternetExplorer" control). While I would prefer that I could script some variance in the data to help serialize each test, it's not necessary, as I could go back through and manually edit a field or two (plus "break" whatever control I'm currently testing) to serialize each test. If you've ever seen Spawner: http://forge.mysql.com/projects/project.php?id=214 It's almost exactly the sort of thing I'm looking for (Spawner generates dummy SQL data, as opposed to dummy webform data) - but I won't be picky, I've got a really short deadline to meet and had this thrust in my lap just today. ;) Edit1: One of the challenges of just using Autohotkey to simulate keyboard input (tabbing through controls) is that some controls don't currently have tab index (bug), and some controls cause a page reload after modification, resulting in inconsistent control focus (tabbing screwed up). Our application makes heavy use of page reloads to populate fields (select a location, it auto-populates a city, for example).

    Read the article

  • Parsing basic math equations for children's educational software?

    - by Simucal
    Inspired by a recent TED talk, I want to write a small piece of educational software. The researcher created little miniature computers in the shape of blocks called "Siftables". [David Merril, inventor - with Siftables in the background.] There were many applications he used the blocks in, but my favorite was when each block was a number or basic operation symbol. You could then re-arrange the blocks of numbers or operation symbols in a line, and it would display an answer on another Siftable block. So, I've decided I want to implement a software version of "Math Siftables" on a limited scale as my final project for a CS course I'm taking. What is the generally accepted way to parse and interpret a string of math expressions and, if they are valid, perform the operation? Is this a case where I should implement a full parser/lexer? I would imagine interpreting basic math expressions would be a semi-common problem in computer science, so I'm looking for the right way to approach this. For example, if my Math Siftable blocks were arranged like: [1] [+] [2] this would be a valid sequence and I would perform the necessary operation to arrive at "3". However, if the child were to drag several operation blocks together such as: [2] [\] [\] [5] it would obviously be invalid. Ultimately, I want to be able to parse and interpret any number of chains of operations with the blocks that the user can drag together. Can anyone explain to me or point me to resources for parsing basic math expressions? I'd prefer as much of a language-agnostic answer as possible.
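
    For what it's worth, a row of blocks like this doesn't necessarily need a full lexer/parser: the input is already tokenized (one block = one token), so a left-to-right scan that checks the number/operator alternation is enough. Below is a minimal sketch - written in C# here, but the approach is language-agnostic; if you later want normal operator precedence rather than strict left-to-right evaluation, look at the shunting-yard algorithm or a small recursive-descent parser instead.

        using System;
        using System.Collections.Generic;

        static class BlockMath
        {
            // Evaluates a row of "blocks" such as ["1", "+", "2"] strictly left to right.
            // Returns null when the sequence is not a valid number/operator alternation,
            // e.g. [2] [/] [/] [5].
            public static double? Evaluate(IList<string> blocks)
            {
                if (blocks.Count == 0 || blocks.Count % 2 == 0) return null;

                double result;
                if (!double.TryParse(blocks[0], out result)) return null;

                for (int i = 1; i < blocks.Count; i += 2)
                {
                    double operand;
                    if (!double.TryParse(blocks[i + 1], out operand)) return null;

                    switch (blocks[i])
                    {
                        case "+": result += operand; break;
                        case "-": result -= operand; break;
                        case "*": result *= operand; break;
                        case "/": result /= operand; break;
                        default: return null;   // two numbers in a row, unknown symbol, etc.
                    }
                }
                return result;
            }
        }

    For example, BlockMath.Evaluate(new[] { "1", "+", "2" }) returns 3, while an invalid arrangement returns null so the UI can refuse to show an answer block.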

    Read the article

  • Using git-svn with slightly strange svn layout

    - by Ibrahim
    Hi guys, I'm doing an internship and they are using SVN (although there has been some discussion of moving to hg or git, that's not in the immediate future). I like git, so I would like to use git-svn to interact with the svn repository and be able to do local commits and branches and stuff like that (rebasing before committing to svn, of course). However, there is one slight wrinkle: the svn repository layout is a little weird. It basically looks like this:

        /FOO
          +- branches
          +- tags
          +- trunk
             +- FOO
             +- myproject

    Basically, my project has been stuck into a subdirectory of trunk, and there is another project that is also a subdirectory of the trunk. If I use git-svn and only clone the directory for my project instead of the root, will it get confused or cause any problems? I just wonder because the commit numbers are incremented for the entire repository and not just my project, so would commits be off or anything like that? I probably wouldn't push any branches or tags to SVN because I'd prefer to just do those locally in git, and I don't know how git-svn deals with branches and tags anyway, and no one else uses them, so I find little point in doing so. Thanks for the help!

    Read the article

  • Why is curl in Ruby slower than command-line curl?

    - by Stiivi
    I am trying to download more than 1m pages (URLs ending with a sequence ID). I have implemented a kind of multi-purpose download manager with a configurable number of download threads and one processing thread. The downloader downloads files in batches:

        curl = Curl::Easy.new
        batch_urls.each { |url_info|
          curl.url = url_info[:url]
          curl.perform
          file = File.new(url_info[:file], "wb")
          file << curl.body_str
          file.close
          # ... some other stuff
        }

    I have tried to download an 8000-page sample. When using the code above, I get 1000 in 2 minutes. When I write all URLs into a file and do in shell:

        cat list | xargs curl

    I get all 8000 pages in two minutes. The thing is, I need to have it in Ruby code, because there is other monitoring and processing code. I have tried:
    - Curl::Multi - it is somehow faster, but misses 50-90% of files (does not download them and gives no reason/code)
    - multiple threads with Curl::Easy - around the same speed as single threaded
    Why is reused Curl::Easy slower than subsequent command-line curl calls, and how can I make it faster? Or what am I doing wrong? I would prefer to fix my download manager code than to make downloading for this case in a different way. Before this, I was calling command-line wget, which I provided with a file with a list of URLs. However, not all errors were handled, and it was not possible to specify an output file for each URL separately when using a URL list. Now it seems to me that the best way would be to use multiple threads with a system call to the 'curl' command. But why, when I can use Curl directly in Ruby? Code for the download manager is here, if it might help: Download Manager (I have played with timeouts, from not setting them to various values; it did not seem to help). Any hints appreciated.

    Read the article

  • img onload doesn't work well in IE7

    - by rmeador
    I have an img tag in my webapp that uses the onload handler to resize the image:

        <img onLoad="SizeImage(this);" src="foo" >

    This works fine in Firefox 3, but fails in IE7 because the image object being passed to the SizeImage() function has a width and height of 0 for some reason -- maybe IE calls the function before it finishes loading? In researching this, I have discovered that other people have had this same problem with IE. I have also discovered that this isn't valid HTML 4. This is our doctype, so I don't know if it's valid or not:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

    Is there a reasonable solution for resizing an image as it is loaded, preferably one that is standards-compliant? The image is being used for the user to upload a photo of themselves, which can be nearly any size, and we want to display it at a maximum of 150x150. If your solution is to resize the image server-side on upload, I know that is the correct solution, but I am forbidden from implementing it :( It must be done client-side, and it must be done on display. Thanks. Edit: Due to the structure of our app, it is impractical (bordering on impossible) to run this script in the document's onload. I can only reasonably edit the image tag and the code near it (for instance, I could add a <script> right below it). Also, we already have the Prototype and Ext JS libraries... management would prefer not to have to add another (some answers have suggested jQuery). If this can be solved using those frameworks, that would be great. Edit 2: Unfortunately, we must support Firefox 3, IE6 and IE7. It is desirable to support all WebKit-based browsers as well, but as our site doesn't currently support them, we can tolerate solutions that only work in the Big 3.

    Read the article

  • What are some choices to port existing Windows GUI app written in C to Linux?

    - by Warner Young
    I've been tasked with porting an existing Windows GUI app to Linux. Ideally, I'd like to do this so the same code base can be used to build either the Windows version or the Linux version. I'll be doing my work on Ubuntu 9.04. After searching around, it's unclear to me which tools are best suited to help me with this. A list of loose requirements would be:
    - The code is in C, not C++, and should compile to build both Windows and Linux versions. Since it's existing code, and fairly large, converting to a managed language like .NET is out of the question for now.
    - I would prefer to be able to use the same dialogs on both systems. In Windows, putting up a dialog is pretty simple. You build the dialog in the Resource Editor in Visual Studio, then call the DialogBox() API, and handle the event messages. I would really like to find something that can do the equivalent on the Linux side.
    - It would also be nice to have a good IDE similar to Visual Studio.
    Any help or hints would be appreciated. Thanks,

    Read the article

  • Add new SVN "repo" in poorly constructed repo/project setup

    - by Dave Masselink
    Unfortunately, the answer to this question isn't quite as simple as it sounds... but I hope it can still be relatively simple. Please read all the way through before telling me that the answer is: "svnadmin create... duh" I'm working for a company that set up their SVN server in an odd way (at least in terms of what I'm used to). We've all been there, right? Rather than giving each project a separate repository... they have a folder on the server called "/var/www/svn/repos/" which is the actual SVN repo (has conf/, db/, README.txt, etc. in it). Then they distinguish their projects by adding top level folders into the ONE repository (ex: Project1, Project2, etc.) I don't like this setup and might one day get around to converting the setup to what I'm used to, where each project is its own repository (with separate logs, dbs, etc.) But my question is this: What is the best way to add a new empty project to the current setup? Is there any way to add a new top level folder/project to the repo through use of svnadmin? It can/should just be an empty folder that I'll start building a new project in. I know that I could do this by checking out the whole singular repository and then adding a new top level folder into my local checkout, then re-committing. But I'd really prefer not to do this because someone has created folders/projects that are just GBs of log data... and I don't want to wait through the download of this just to add a single empty folder. Let me know if there is any more info you'd need to know. I do have root/sudo access on the server in question. Thanks in advance for your help! Dave

    Read the article

  • Does HttpListener work well on Mono?

    - by billpg
    Hi everyone. I'm looking to write a small web service to run on a small Linux box. I prefer to code in C#, so I'm looking to use Mono. I don't want the overhead of running a full web server or Mono's version of ASP.NET. I'm thinking of having a single process with a thread dealing with each client connection. Shared memory between threads instead of a database. I've read a little on Microsoft's version of HttpListener and how it works with the Http.sys driver. Alas, Mono's documentation on this class is just the automated class interface with no discussion of how it works under the hood. (Linux doesn't have Http.sys, so I imagine it's implemented substantially differently.) Could anyone point me towards some resources discussing this module please? Many thanks, Bill, billpg.com (A little background to my question for the interested.) Some time ago, I asked this question, interested in keeping a long conversation open with lots of back-and-forth. I had settled on designing my own ad-hoc protocol, but people I spoke to really wanted a REST interface, even at the cost of the "Okay, send your command now" signal. So, I wondered about running ASP.NET on a Linux/Mono server, but stumbled upon HttpListener. This seemed ideal, as each "conversation" could run in a separate thread. The thread that calls HttpListener in a loop can look for which thread each incoming connection is for and pass the reference to that thread. The alternative for an ASP.NET-driven service would be to have the ASPX code pick up the state from a database, and write back the new state when it finishes. Yes, it would work, but that's a lot of overhead.
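
    As a side note, Mono exposes the same System.Net.HttpListener API as .NET even though there is no Http.sys on Linux (the listening is presumably done in managed sockets), so the single-process, thread-per-conversation design described above can be sketched without platform-specific calls. A rough sketch, with the port prefix and response body made up purely for illustration:

        using System;
        using System.Net;
        using System.Text;
        using System.Threading;

        class MiniService
        {
            static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://*:8080/");   // hypothetical prefix
                listener.Start();

                while (true)
                {
                    // GetContext blocks until a request arrives; hand each one off
                    // so the accept loop stays free for the next connection.
                    HttpListenerContext ctx = listener.GetContext();
                    ThreadPool.QueueUserWorkItem(_ => Handle(ctx));
                }
            }

            static void Handle(HttpListenerContext ctx)
            {
                // In the design described above, this is where the shared in-memory
                // conversation state would be looked up and updated.
                byte[] body = Encoding.UTF8.GetBytes("hello");
                ctx.Response.ContentLength64 = body.Length;
                ctx.Response.OutputStream.Write(body, 0, body.Length);
                ctx.Response.OutputStream.Close();
            }
        }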

    Read the article

  • Complex derived attributes in Django models

    - by rabidpebble
    What I want to do is implement submission scoring for a site with users voting on the content, much like e.g. reddit (see the 'hot' function in http://code.reddit.com/browser/sql/functions.sql). My submission model currently keeps track of up and down vote totals. Currently, when a user votes I create and save a related Vote object and then use F() expressions to update the Submission object's voting totals. The problem is that I want to update the score for the submission at the same time, but F() expressions are limited to only simple operations (they're missing support for log(), date_part(), sign() etc.) From my limited experience with Django I can see 4 options here:
    1. extend F() somehow (haven't looked at the code yet) to support the missing SQL functions; this is my preferred option and seems to fit within the Django framework the best
    2. define a scoring function (much like reddit's 'hot' function) in my database, and have Django use the value of that function for the value of the score field; as far as I can tell, #2 is not possible
    3. wrap my two-step voting process in a suitably isolated transaction so that I can calculate the voting totals in Python and then update the Submission's voting totals without fear that another vote against the submission could be added/changed in the meantime; I'm hesitant to take this route because it seems overly complex - what is a "suitably isolated transaction" in this case anyway?
    4. use raw SQL; I would prefer to avoid this entirely -- what's the point of an ORM if I have to revert to SQL for such a common use case as this! (Note that this is coming from somebody who loves sprocs, but is using Django for ease of development.)
    Before I embark on this mission to extend F() (which I'm not sure is even possible), am I about to reinvent the wheel? Is there a more standard way to do this? It seems like such a common use case, and yet in an hour of searching I have yet to find a common solution...

    Read the article

  • How do I get the inserted id (or object) after an insert with the FormView/ObjectDataSource controls

    - by drs9222
    I have a series of classes that loosely fit the following pattern:

        public class CustomerInfo
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class CustomerTable
        {
            public bool Insert(CustomerInfo info) { /*...*/ }
            public bool Update(CustomerInfo info) { /*...*/ }
            public CustomerInfo Get(int id) { /*...*/ }
            /*...*/
        }

    After a successful insert, the Insert method will set the Id property of the CustomerInfo object that was passed in. I've used these classes in several places and would prefer to avoid altering them. Now I'm experimenting with writing an ASP.NET page for inserting and updating the records. I'm currently using the ObjectDataSource and FormView controls:

        <asp:ObjectDataSource TypeName="CustomerTable" DataObjectTypeName="CustomerInfo"
            InsertMethod="Insert" UpdateMethod="Update" SelectMethod="Get" />

    I can successfully Insert and Update records. I would like to switch the FormView's mode from Insert to Edit after a successful insert. My first attempt was to switch the mode in the ItemInserted event. This of course did not work. I was using a QueryStringParameter for the id, which of course wasn't set when inserting. So, I switched to manually populating the InputParameters during the ObjectDataSource's Selecting event. The problem with this is I need to know the id of the newly inserted record, which I can't find a good way to get. I understand that I can access the Insert method's return value and out parameters in the ItemInserted event, but of course my method doesn't return the id through any of these mechanisms. I can't find any way to access the id or the CustomerInfo object that was inserted after the insert completes. The best I've been able to do is to save the CustomerInfo object in the ObjectDataSource's Inserting event. This feels like an odd way to do this. I figure that there must be a better way to do this and I'll kick myself when I see it. Any ideas?

    Read the article

  • Misalignment in the output Bitmap created from a byte array

    - by Daniel
    I am trying to understand why I am having trouble creating a Bitmap from a byte array. I post this after a careful scrutiny of the existing posts about Bitmap creation from byte arrays, like the following: Creating a bitmap from a byte[], Working with Image and Bitmap in c#?, C#: Bitmap Creation using bytes array. My code is meant to execute a filter on an 8bppIndexed digital image, writing the pixel values into a byte[] buffer that is later converted back (after some processing to manage gray levels) into an 8bppIndexed Bitmap. My input image is a trivial image created by means of specific perl code: https://www.box.com/shared/zqt46c4pcvmxhc92i7ct Of course, after executing the filter the output image has lost the first and last rows and the first and last columns, due to the way the filter manages borders, so from the original 256 x 256 image I get a 254 x 254 image. Just to stay focused on the issue, I have commented out the code responsible for executing the filter, so that the operation really performed is an obvious:

        ComputedPixel = InputImage.GetPixel(myColumn, myRow).R;

    I know I should be using LockBits/UnlockBits here as well, but I prefer to deal with one headache at a time. Anyway, this code should be a sort of identity transform, and at last I use:

        private unsafe void FillOutputImage()
        {
            OutputImage = new Bitmap(OutputImageCols, OutputImageRows, PixelFormat.Format8bppIndexed);
            ColorPalette ncp = OutputImage.Palette;
            for (int i = 0; i < 256; i++)
                ncp.Entries[i] = Color.FromArgb(255, i, i, i);
            OutputImage.Palette = ncp;
            Rectangle area = new Rectangle(0, 0, OutputImageCols, OutputImageRows);
            var data = OutputImage.LockBits(area, ImageLockMode.WriteOnly, OutputImage.PixelFormat);
            Marshal.Copy(byteBuffer, 0, data.Scan0, byteBuffer.Length);
            OutputImage.UnlockBits(data);
        }

    The output image I get is the following: https://www.box.com/shared/p6tubyi6dsf7cyregg9e It is quite clear that I am losing a pixel per row, but I cannot understand why. I have carefully controlled all the parameters: OutputImageCols, OutputImageRows, and the byte[] byteBuffer length and content, even writing known values as a way to test. The code is nearly identical to other code posted on Stack Overflow and elsewhere. Could someone help identify where the problem is? Thanks a lot
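
    One detail worth checking here: for an 8bpp bitmap that is 254 pixels wide, BitmapData.Stride is padded up to 256 bytes (rows are aligned on 4-byte boundaries), so a single Marshal.Copy of a tightly packed 254-bytes-per-row buffer shifts each successive row a little further - which would look very much like losing a pixel per row. If that is the cause, copying row by row while respecting the stride should fix it. A sketch (the helper name is made up; it assumes the buffer holds exactly cols bytes per row):

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Runtime.InteropServices;

        static class BitmapCopy
        {
            // Copies a tightly packed 8bpp buffer into a Bitmap one row at a time,
            // honoring the 4-byte row alignment (Stride) of the locked bitmap data.
            public static void CopyPacked(Bitmap bmp, byte[] packed, int cols, int rows)
            {
                Rectangle area = new Rectangle(0, 0, cols, rows);
                BitmapData data = bmp.LockBits(area, ImageLockMode.WriteOnly, bmp.PixelFormat);
                try
                {
                    for (int row = 0; row < rows; row++)
                    {
                        IntPtr dest = new IntPtr(data.Scan0.ToInt64() + (long)row * data.Stride);
                        Marshal.Copy(packed, row * cols, dest, cols);
                    }
                }
                finally
                {
                    bmp.UnlockBits(data);
                }
            }
        }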

    Read the article

  • Set up Gitosis, but can't clone

    - by Tim Rupe
    I've set up Gitosis on a remote Ubuntu box which I will refer to as linuxserver as my host in the following commands. I'm also connecting from a Windows box using Cygwin. I followed the instructions according to: http://scie.nti.st/2007/11/14/hosting-git-repositories-the-easy-and-secure-way I had no problems up until I needed to clone the gitosis-admin repository to my local machine git clone git@linuxserver:gitosis-admin.git When I do this, the command executes, but hangs there displaying nothing until I ctrl-c to get back to a command prompt. No messages are displayed at all. I'm pretty sure I have my ssh keys set up properly, because logging in using "ssh linuxserver" into my regular account works perfectly without asking for a password. Edit: Over the weekend I set up a near identical Ubuntu box at home, and had no problem setting up Gitosis. The only difference was that I was connecting from OSX instead of Cygwin. Edit: I've also discovered that when using the Bash Shell provided with "Git Extensions", I have no problems, so the issue definitely seems to be some kind of Cygwin conflict. Edit: Just an update, but about a month after posting this question, I switched to Mercurial, and found that I prefer it much more than git. Thanks for the suggestions, but I don't plan on going back to git to try any of them out.

    Read the article

  • 301 Redirecting URLs based on GET variables in .htaccess

    - by technicalbloke
    I have a few messy old URLs like...

        http://www.example.com/bunch.of/unneeded/crap?opendocument&part=1
        http://www.example.com/bunch.of/unneeded/crap?opendocument&part=2

    ...that I want to redirect to the newer, cleaner form...

        http://www.example.com/page.php/welcome
        http://www.example.com/page.php/prices

    I understand I can redirect one page to another with a simple redirect, i.e.

        Redirect 301 /bunch.of/unneeded/crap http://www.example.com/page.php

    But the source page doesn't change, only its GET vars. I can't figure out how to base the redirect on the value of these GET variables. Can anybody help pls!? I'm fairly handy with the old regexes so I can have a pop at using mod-rewrite if I have to, but I'm not clear on the syntax for rewriting GET vars, and I'd prefer to avoid the performance hit and use the cleaner Redirect directive. Is there a way? And if not, can anyone clue me in as to the right mod-rewrite syntax pls? Cheers, Roger.

    Read the article

  • SQL Server Index cost

    - by yellowstar
    I have read that one of the tradeoffs for adding table indexes in SQL Server is the increased cost of insert/update/delete queries to benefit the performance of select queries. I can conceptually understand what happens in the case of an insert because SQL Server has to write entries into each index matching the new rows, but update and delete are a little more murky to me because I can't quite wrap my head around what the database engine has to do. Let's take DELETE as an example and assume I have the following schema (pardon the pseudo-SQL):

        TABLE Foo
            col1 int
            ,col2 int
            ,col3 int
            ,col4 int
        PRIMARY KEY (col1,col2)
        INDEX IX_1 col3 INCLUDE col4

    Now, if I issue the statement

        DELETE FROM Foo WHERE col1=12 AND col2 > 34

    I understand what the engine must do to update the table (or clustered index if you prefer). The index is set up to make it easy to find the range of rows to be removed and do so. However, at this point it also needs to update IX_1, and the query that I gave it gives no obvious efficient way for the database engine to find the rows to update. Is it forced to do a full index scan at this point? Does the engine read the rows from the clustered index first and generate a smarter internal delete against the index? It might help me to wrap my head around this if I understood better what is going on under the hood, but I guess my real question is this: I have a database that is spending a significant amount of time in delete and I'm trying to figure out what I can do about it. When I display the execution plan for the deletion, it just shows an entry for "Clustered Index Delete" on table Foo which lists in the details section the other indices that need to be updated, but I don't get any indication of the relative cost of these other indices. Are they all equal in this case? Is there some way that I can estimate the impact of removing one or more of these indices without having to actually try it?

    Read the article

  • restrict documents for mapreduce with mongoid

    - by theBernd
    I implemented the pearson product correlation via map / reduce / finalize. The missing part is to restrict the documents (representing users) to be processed via a filter query. For a simple query like mapreduce(mapper, reducer, :finalize => finalizer, :query => { :name => 'Bernd' }) I get this to work. But my filter criteria is a little bit more complicated: I have one set of preferences which need to have at least one common element and another set of preferences which may not have a common element. In a later step I also want to restrict this to documents (users) within a certain geographical distance. Currently I have this code working in my map function, but I would prefer to separate this into either query params as supported by mongoid or a javascript function. All my attempts to solve this failed since the code is either ignored or raises an error. I did a couple of tests. A regular find like User.where(:name.in => ['Arno', 'Bernd', 'Claudia']) works and returns #<Mongoid::Criteria:0x00000101f0ea40 @selector={:name=>{"$in"=>["Arno", "Bernd", "Claudia"]}}, @options={}, @klass=User, @documents=[]> Trying the same with mapreduce User.collection. mapreduce(mapper, reducer, :finalize => finalizer, :query => { :name.in => ['Arno', 'Bernd', 'Claudia'] }) fails with `serialize': keys must be strings or symbols (TypeError) in bson-1.1.5 The intermediate query parameter looks like this :query=>{#<Mongoid::Criterion::Complex:0x00000101a209e8 @key=:name, @operator="in">=>["Arno", "Bernd", "Claudia"]} and at least @operator looks a bit weird to me. I'm also uncertain if the class name can be omitted. BTW - I'm using mongodb 1.6.5-x86_64, and the mongoid 2.0.0.beta.20, mongo 1.1.5 and bson 1.1.5 gems on MacOS. What am I doing wrong? Thanks in advance.

    Read the article

  • How to customize RESTful Routes in Rails (basics)

    - by viatropos
    I have read through the Rails docs for Routing, Restful Resources, and the UrlHelper, and still don't understand best practices for creating complex/nested routes. The example I'm working on now is for events, which has_many rsvps. So a user's looking through a list of events, clicks register, and goes through a registration process, etc. I want the urls to look like this:

        /events
        /events/123                                    # possible without title, like SO
        /events/123/my-event-title                     # canonical version
        /events/my-category/123/my-event-title         # also possible like this
        /events/123/my-event-title/registration/new

    ... and all the restful nested resources. Question is, how do I accomplish this with the minimal amount of code? Here's what I currently have:

        map.resources :events do |event|
          event.resources :rsvps, :as => "registration"
        end

    That gets me this:

        /events/123/registration

    What's the best way to accomplish the other 2 routes?

        /events/123/my-event-title                     # canonical version
        /events/my-category/123/my-event-title         # also possible like this

    Where my-category is just an array of 10 possible types the event can be. I've modified Event#to_param to return "#{self.id.to_s}-#{self.title.parameterize}", but I'd prefer to have /id/title with the whole canonical-ness.

    Read the article

  • In IIS6, how to provide authenticated access to static files on remote server

    - by frankadelic
    We have a library of ZIP files that we would like to make available for download at an ASP.NET site. The files are sitting on a NAS device that is accessible from our web farm. Here is our initial strategy:
    - Map an IIS virtual directory to the shared drive at path /zipfiles
    - Users can download the zip files when given the URL
    However, if users share links to the files, anyone can download them. We would instead like to make use of the ASP.NET forms authentication in our site to validate users' requests before initiating the file transfer. A few problems:
    - A request for a zip file is handled by IIS, not ASP.NET. So it is not subject to forms authentication.
    - In addition, we don't want ASP.NET to handle the request, because it uses up an ASP.NET thread and is not scalable for the download of large files. So, configuring the asp.net dll to handle *.zip requests is not an option.
    Any ideas on this? One idea we've tossed around is this:
    - The initial request for a download will be for an ashx handler. This handler will, after authentication, generate a download token which is saved to a database.
    - Then, the user is redirected to the file with the token appended in the QueryString (e.g. /files/xyz.zip?token=123456789).
    - An ISAPI plugin will be used to check the token. Also, the token will expire after x amount of time.
    Any thoughts on this? I have not implemented an ISAPI plugin, so I'm not sure if this will even work. I would like to avoid custom coding since security is an issue and I'd prefer to use a time-tested solution.
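
    For illustration only, the ashx side of that idea might look something like the sketch below. TokenStore, the /files/ path, and the query-string names are placeholders rather than an existing API, and the ISAPI filter still has to validate the token and reject expired ones.

        using System;
        using System.Web;

        public class ZipTokenHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                // Forms authentication has already run by the time a handler executes,
                // so here we only need to check the outcome.
                if (!context.Request.IsAuthenticated)
                {
                    context.Response.StatusCode = 403;
                    return;
                }

                string file = context.Request.QueryString["file"];   // e.g. "xyz.zip"
                string token = Guid.NewGuid().ToString("N");

                // Placeholder for the database write described above, including expiry.
                TokenStore.Save(token, file, DateTime.UtcNow.AddMinutes(5));

                // Hand the user off to the static file; IIS (plus the ISAPI check)
                // streams it, so no ASP.NET thread is tied up during the download.
                context.Response.Redirect("/files/" + file + "?token=" + token, false);
            }

            public bool IsReusable
            {
                get { return false; }
            }
        }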

    Read the article
