Search Results

Search found 5998 results on 240 pages for 'rise against'.

Page 189 of 240

  • How can I protect my .NET assemblies from decompilation?

    - by Holli
    One of the first things I learned when I started with C# was also the most important one: you can decompile any .NET assembly with Reflector or other tools. Many developers are not aware of this fact, and most of them are shocked when I show them their source code.

    Protection against decompilation is still a difficult task, and I am still looking for a fast, easy and secure way to do it. I don't want to obfuscate my code so that my method names become a, b, c and so on; I want Reflector or other tools to be unable to recognize my application as a .NET assembly at all. I know about some tools already, but they are very expensive. Is there any other way to protect my applications?

    EDIT: The reason for my question is not to prevent piracy. I only want to stop competitors from reading my code. I know they will and they already did; they even told me so. Maybe I am a bit paranoid, but business rivals reading my code doesn't make me feel good.

  • Compiling gstreamer plugin in windows

    - by utnapistim
    Hello all. My question: what is the correct way to compile a GStreamer plugin on Windows, so that it will be accepted by GStreamer (actually Songbird on top of GStreamer)?

    My setup: I have downloaded the Songbird sources following the steps described here, and I have a trunk/dependencies/windows-i686-msvc8 directory within my svn sources with all the GStreamer binaries. I have created an empty GStreamer plugin skeleton following the steps detailed in the GStreamer Plugin Writer's Guide, and compiled it against the GStreamer binaries in the Songbird dependencies folder. The compilation was done with VS2010 RC1 (Visual Studio 2008 yielded the same results), using an empty DLL project with the .h and .c files generated using the guide. The DLL was linked with libcpmt.lib, libcmt.lib, ws2_32.lib, gobject-2.0.lib, gthread-2.0.lib, gstreamer-0.10-0.lib, glib-2.0.lib, kernel32.lib, nspr4.lib, ignoring all default libraries. I have compiled the files as both .c and .cpp with the same results.

    Testing: I have installed the Songbird binaries corresponding to the correct svn version, then installed the Songbird Developer Tools addon and used it to create an addon for testing my GStreamer plugin. Songbird will not load the plugin. I have also tried to load it with gst-launch.exe from the trunk/dependencies/windows-i686-msvc8/[...] directory, and that generated runtime error R6034: "An application has made an attempt to load the C runtime library incorrectly." Most resources I found for this problem recommended restarting or reinstalling Windows :(.
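
    As a side note, R6034 usually points at a C runtime mismatch (a CRT DLL loaded without its manifest), which is a plausible suspect when a VS2010/VS2008-built DLL is loaded by binaries built against a different toolchain; it is worth ruling that out before suspecting the plugin code itself. For reference, the loader only accepts DLLs that export the plugin descriptor produced by GST_PLUGIN_DEFINE. A minimal sketch, assuming the 0.10-era API shipped in the Songbird dependencies (the element name and registration call are placeholders):

        #include <gst/gst.h>

        static gboolean
        plugin_init (GstPlugin * plugin)
        {
          /* a real plugin registers its elements here, e.g.
             gst_element_register (plugin, "myfilter", GST_RANK_NONE, GST_TYPE_MYFILTER); */
          return TRUE;
        }

        /* in the 0.10 API the plugin name is a string; version, license,
           package and origin are ordinary string arguments */
        GST_PLUGIN_DEFINE (GST_VERSION_MAJOR, GST_VERSION_MINOR,
            "myfilter", "My filter plugin",
            plugin_init, "0.0.1", "LGPL", "myplugin", "http://example.com/")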

  • recommended format to save time with MJD + BCD format in database

    - by pierr
    Hi. There is a time represented in MJD and BCD format in 5 bytes. I am wondering what the recommended format is for saving this date-time in an sqlite database so that users can search against it.

    My first attempt was to save it just as it is, that is, as a 5-byte string. The user would then search using the same format and convert results to unix time with the following code. Later, however, it was suggested that I save the time as an integer instead, the UTC time for example, but I cannot find a standard way to do the conversion. I feel this is a common issue and would like to hear your comments.

        #include <string.h>
        #include <time.h>

        /* assumed helper (not shown in the original post): one BCD byte -> int */
        static int bcd_to_integer(unsigned char b)
        {
            return ((b >> 4) * 10) + (b & 0x0f);
        }

        time_t sidate_to_unixtime(unsigned char sidate[])
        {
            int k = 0;
            struct tm tm;
            double mjd;

            /* check for the undefined value */
            if ((sidate[0] == 0xff) && (sidate[1] == 0xff) && (sidate[2] == 0xff) &&
                (sidate[3] == 0xff) && (sidate[4] == 0xff)) {
                return -1;
            }

            memset(&tm, 0, sizeof(tm));

            /* MJD -> year/month/day */
            mjd = (sidate[0] << 8) | sidate[1];
            tm.tm_year = (int) ((mjd - 15078.2) / 365.25);
            tm.tm_mon = (int) (((mjd - 14956.1) - (int) (tm.tm_year * 365.25)) / 30.6001);
            tm.tm_mday = (int) mjd - 14956 - (int) (tm.tm_year * 365.25) - (int) (tm.tm_mon * 30.6001);
            if ((tm.tm_mon == 14) || (tm.tm_mon == 15))
                k = 1;
            tm.tm_year += k;
            tm.tm_mon = tm.tm_mon - 2 - k * 12;

            /* BCD -> hh:mm:ss */
            tm.tm_sec = bcd_to_integer(sidate[4]);
            tm.tm_min = bcd_to_integer(sidate[3]);
            tm.tm_hour = bcd_to_integer(sidate[2]);

            /* note: mktime() interprets tm as local time; for a true UTC
               conversion the nonstandard timegm() is the closer match */
            return mktime(&tm);
        }
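
    If the suggested integer route is taken, the function above already produces a time_t that can go straight into an INTEGER column. A minimal sketch with the sqlite3 C API (the table and column names are made up for illustration):

        #include <sqlite3.h>
        #include <time.h>

        /* store one converted timestamp; returns an sqlite3 result code */
        int store_time(sqlite3 *db, time_t t)
        {
            sqlite3_stmt *stmt;
            int rc = sqlite3_prepare_v2(db,
                "INSERT INTO events (utc_time) VALUES (?)", -1, &stmt, NULL);
            if (rc != SQLITE_OK)
                return rc;
            sqlite3_bind_int64(stmt, 1, (sqlite3_int64)t);
            rc = sqlite3_step(stmt);        /* SQLITE_DONE on success */
            sqlite3_finalize(stmt);
            return rc;
        }

    A search then becomes an ordinary "WHERE utc_time BETWEEN ? AND ?", which can use an index; that is the usual argument for storing an integer rather than the raw 5-byte string.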

  • File Handler returning garbled files.

    - by forcripesake
    For the past 3 months my site has been using a PHP file handler in combination with htaccess. Users accessing the uploads folder of the site are redirected to the handler like so:

        RewriteRule ^(.+)\.*$ downloader.php?f=%{REQUEST_FILENAME} [L]

    The purpose of the file handler is pseudo-coded below, followed by the actual code:

        // check that the file exists and the user is downloading from the
        // uploads directory; check the extension against a file type white
        // list and set the mime type in $ctype
        header("Pragma: public"); // required
        header("Expires: 0");
        header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
        header("Cache-Control: private", false); // required for certain browsers
        header("Content-Type: $ctype");
        header("Content-Disposition: attachment; filename=\"".basename($filename)."\";");
        header("Content-Transfer-Encoding: binary");
        header("Content-Length: ".filesize($filename));
        readfile($filename);

    As of yesterday, the handler started returning garbled files and unreadable images, and had to be bypassed. I'm wondering what settings could have gone awry to cause this.

  • OpenGL equivalent of GDI's HatchBrush or PatternBrush?

    - by Ptah- Opener of the Mouth
    I have a VB6 application (please don't laugh) which does a lot of drawing via BitBlt and the standard VB6 drawing functions. I am running up against performance issues (yes, I do the regular tricks like drawing to memory). So, I decided to investigate other ways of drawing, and have come upon OpenGL. I've been doing some experimenting, and it seems straightforward to do most of what I want; the application mostly only uses very simple drawing, relatively large 2D rectangles of solid colors and such. But I haven't been able to find an equivalent to something like a HatchBrush or PatternBrush.

    More specifically, I want to be able to specify a small monochrome pixel pattern, choose a color, and whenever I draw a polygon (or whatever), instead of it being solid, have it automatically tiled with that pattern (not translated or rotated or skewed or stretched), with the "on" bits of the pattern showing up in the specified color, and the "off" bits of the pattern left displaying whatever had been drawn under the area that I am now drawing on.

    Obviously I could do all the calculations myself. That is, instead of drawing a polygon which will somehow automatically be tiled for me, I could calculate all of the lines or pixels that actually need to be drawn, then draw them as lines or pixels. But is there an easier way? Like in GDI, where you just say "draw this polygon using this brush"?

    I am guessing that textures might be able to accomplish what I want, but it's not clear to me (I'm totally new to this and the documentation I've found is not entirely obvious); it seems like textures might skew or translate or stretch the pattern, based upon the vertices of the polygon, whereas I want the pattern tiled. Is there a way to do this, or something like it, other than brute-force calculation of exactly the pixels/lines that need to be drawn? Thanks in advance for any help.
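
    For what it's worth, fixed-function OpenGL has a direct analogue of a monochrome hatch brush: polygon stippling. The pattern is a 32x32 bitmap, it is window-aligned (so it tiles the screen rather than stretching or rotating with the geometry), and fragments for the 0 bits are simply not drawn, leaving whatever is underneath intact. A minimal sketch (the pattern bytes are a made-up checkerboard):

        #include <GL/gl.h>

        void draw_hatched_rect(float x0, float y0, float x1, float y1)
        {
            GLubyte pattern[128];                 /* 32x32 bits, 4 bytes per row */
            for (int i = 0; i < 128; ++i)
                pattern[i] = (i & 4) ? 0xAA : 0x55;   /* 1-pixel checkerboard */

            glEnable(GL_POLYGON_STIPPLE);
            glPolygonStipple(pattern);
            glColor3f(1.0f, 0.0f, 0.0f);          /* the "on" bits draw in this color */
            glBegin(GL_QUADS);
            glVertex2f(x0, y0);
            glVertex2f(x1, y0);
            glVertex2f(x1, y1);
            glVertex2f(x0, y1);
            glEnd();
            glDisable(GL_POLYGON_STIPPLE);
        }

    Textures with GL_REPEAT can do the same job, but stippling is the closest one-call match to GDI's HatchBrush.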

  • Why do my CouchDB databases grow so fast?

    - by konrad
    I was wondering why my CouchDB database was growing too fast, so I wrote a little test script. This script changes an attribute of a CouchDB document 1200 times and takes the size of the database after each change. After performing these 1200 write steps, a compaction is run and the db size is measured again. In the end the script plots the database size against the revision numbers.

    The benchmark is run twice: the first time with the default number of document revisions (_revs_limit = 1000), and the second time with the number of document revisions set to 1. (Each run produces a plot of database size against revision number; the plots are not reproduced here.)

    For me this is quite unexpected behavior. In the first run I would have expected linear growth, as every change produces a new revision; once the 1000 revisions are reached, the size should stay constant, as the older revisions are discarded, and after compaction the size should fall significantly. In the second run the first revision should result in a certain database size that is then kept during the following write steps, as every new revision leads to the deletion of the previous one. I could understand a little overhead to manage the changes, but this growth behavior seems weird to me. Can anybody explain this phenomenon or correct the assumptions that led to my wrong expectations?

  • Java map with values limited by key's type parameter

    - by Ashley Mercer
    Is there a way in Java to have a map where the type parameter of a value is tied to the type parameter of a key? What I want to write is something like the following:

        public class Foo {
            // This declaration won't compile - what should it be?
            private static Map<Class<T>, T> defaultValues;

            // These two methods are just fine
            public static <T> void setDefaultValue(Class<T> clazz, T value) {
                defaultValues.put(clazz, value);
            }

            public static <T> T getDefaultValue(Class<T> clazz) {
                return defaultValues.get(clazz);
            }
        }

    That is, I can store any default value against a Class object, provided the value's type matches that of the Class object. I don't see why this shouldn't be allowed, since I can ensure when setting/getting values that the types are correct.

    EDIT: Thanks to cletus for his answer. I don't actually need the type parameters on the map itself, since I can ensure consistency in the methods which get/set values, even if it means using some slightly ugly casts.

  • Best way to detect similar email addresses?

    - by Chris
    I have a list of ~20,000 email addresses, some of which I know to be fraudulent attempts to get around a "1 per e-mail" limit ([email protected], [email protected], [email protected], etc...). I want to find similar email addresses for evaluation. Currently I'm using a Levenshtein algorithm to check each e-mail against the others in the list and report any with an edit distance of less than 2. However, this is painstakingly slow. Is there a more efficient approach?

    The test code I'm using now is:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.IO;
        using System.Threading;

        namespace LevenshteinAnalyzer
        {
            class Program
            {
                const string INPUT_FILE = @"C:\Input.txt";
                const string OUTPUT_FILE = @"C:\Output.txt";

                static void Main(string[] args)
                {
                    var inputWords = File.ReadAllLines(INPUT_FILE);
                    var outputWords = new SortedSet<string>();

                    for (var i = 0; i < inputWords.Length; i++)
                    {
                        if (i % 100 == 0)
                            Console.WriteLine("Processing record #" + i);

                        var word1 = inputWords[i].ToLower();
                        for (var n = i + 1; n < inputWords.Length; n++)
                        {
                            if (i == n) continue; // never true here, since n starts at i + 1
                            var word2 = inputWords[n].ToLower();
                            if (word1 == word2) continue;
                            if (outputWords.Contains(word1)) continue;
                            if (outputWords.Contains(word2)) continue;

                            // LevenshteinAlgorithm.Compute is my own edit-distance helper
                            var distance = LevenshteinAlgorithm.Compute(word1, word2);
                            if (distance <= 2)
                            {
                                outputWords.Add(word1);
                                outputWords.Add(word2);
                            }
                        }
                    }

                    File.WriteAllLines(OUTPUT_FILE, outputWords.ToArray());
                    Console.WriteLine("Found {0} words", outputWords.Count);
                }
            }
        }

  • How to Make a DVCS Completely Interoperable with Subversion?

    - by David M
    What architectural changes would a DVCS need to be completely interoperable with Subversion? Many DVCSs have some kind of bidirectional interface with Subversion, but there are limitations and caveats. For instance, git-svn can create a repository that mirrors Subversion, and changes to that repo can be sent back to Subversion via 'dcommit'. But the git-svn manpage explicitly cautions against making clones of that repository, so essentially it's a Subversion working copy that you can use git commands on. Bazaar has a bidirectional Subversion capability too, but its documentation notes that Subversion properties aren't supported at all.

    Here's the end that I'm pursuing: I want a Subversion repository and a DVCS repository that, in the steady state, have identical content. When something is changed on one, it's automatically mirrored to the other. Subversion users interact with the Subversion repository normally. DVCS users clone the DVCS repository, pull changes from it, and push changes back to it. Most importantly, they don't need to know that this special DVCS repository is associated with a Subversion repository. It would probably be nifty if any clone of the special repository were itself a special repository that could commit directly to Subversion, but it might be sufficient if only the special repository directly interacts with Subversion.

    I think what's mostly needed is to improve the bidirectional capability so that changes to Subversion properties are translated to changes in the DVCS repository, and some changes in the DVCS repository are translated to changes to Subversion properties. Or is the answer to create a new capability in Subversion that interacts with a DVCS repository, using the DVCS repository as just a special storage layer such as fsfs or bdb? If there's not a direct mapping between the things that Subversion and a DVCS regard as having versions, does that imply that there will always be some activity that cannot be recorded properly on one or the other?

  • Ruby on Rails 2.3.5: Populating my prod and devel database with data (migration or fixture?)

    - by randombits
    I need to populate my production database app with data in particular tables, before anyone ever touches the application. The same data is also required in development mode, for testing against. Fixtures are normally the way to go for test data, but what's the Ruby on Rails "best practice" for shipping this data to the live database upon db creation as well?

    Ultimately this is a two-part question, I suppose.

    1) What's the best way to load test data into my database for development? This will be roughly 1,000 items. Is it through a migration or through fixtures? The reason this may have a different answer than the question below is that in development there are certain fields in the tables that I'd like to make random; in production, those fields would all start with the same value of 0.

    2) What's the best way to bootstrap a production db with the live data I need in it? Is this also through a migration or a fixture?

    I think the answer is to seed, as described here: http://lptf.blogspot.com/2009/09/seed-data-in-rails-234.html but I need a way to seed for development and seed for production. Also, why bother using fixtures if seeding is available? When does one seed and when does one use fixtures?

  • Solving Naked Triples in Sudoku

    - by Kave
    Hello, I wish I had paid more attention to the math classes back in uni. :) How do I implement this math formula for naked triples?

    Naked Triples: take three cells C = {c1, c2, c3} that share a unit U, and take three numbers N = {n1, n2, n3}. If each cell ci in C has as its candidates a subset of N, then we can remove all ni ∈ N from the other cells in U.

    I have a method that takes a unit (e.g. a box, a row or a column) as a parameter. The unit contains 9 cells, therefore I need to compare all combinations of 3 cells at a time that form the unit, perhaps putting them into a stack or collection for further calculation. The next step would be taking these 3-cell combinations one by one and comparing their candidates against 3 numbers. Again, these 3 numbers can be any possible combination from 1 to 9. That's all I need. But how would I do that? How many combinations would I get? Do I get 3 x 9 = 27 combinations for cells and then the same for numbers (N)? How would you solve this in classic C# loops? No lambda expressions please, I am already confused enough. :)

    Many thanks for any help,
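
    On the counting question: choosing 3 cells out of 9 without regard to order gives 9·8·7/3! = 84 combinations, not 27, and likewise 84 possible number triples. The classic enumeration is three nested loops with strictly increasing indices; a sketch in C (the same loop shape ports directly to classic C# for loops):

        #include <stdio.h>

        int main(void)
        {
            int count = 0;
            for (int i = 0; i < 9; ++i)
                for (int j = i + 1; j < 9; ++j)
                    for (int k = j + 1; k < 9; ++k) {
                        /* cells[i], cells[j], cells[k] form one 3-cell
                           combination; test its candidates here */
                        ++count;
                    }
            printf("%d combinations\n", count);  /* prints 84 */
            return 0;
        }

    In practice the 84 number triples rarely need enumerating separately: if the union of the candidates of the three chosen cells contains exactly three numbers, those three numbers are the naked triple.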

  • How does real world login process happen in web application in Java?

    - by Nitesh Panchal
    Hello, I am very much confused regarding the login process that happens in a Java web application. I have read many tutorials about jdbcRealm and JAAS, but one thing I don't understand is: why should I use them? Can't I simply check directly against my database of users, and once they successfully log in to the site, store some variable in the session as a flag? I would then check that session variable on all restricted pages (I mean, keep a filter for the restricted resources' URL pattern); if the flag doesn't exist, simply redirect the user to the login page. Does this approach sound correct? If yes, then why did JAAS and jdbcRealm come into existence?

    Secondly, I am trying to implement SaaS (software as a service) completely in my web application, meaning everything is done through web services. If I use web services, is it possible to use jdbcRealm? If not, then is it possible to use JAAS? If yes, then please show me some example which uses MySQL as a database and then authenticates and authorizes. I have even heard about Spring Security, but I am confused about that too, in the sense that I don't know how to use web services with Spring Security.

    Please help me. I am really very confused. I read Sun's tutorials, but they only keep talking about theory; for programmers to understand a simple concept, they show 100 pages of theory before they finally come to one example.

  • Bilinear interpolation - DirectX vs. GDI+

    - by holtavolt
    I have a C# app for which I've written GDI+ code that uses Bitmap/TextureBrush rendering to present 2D images, which can have various image processing functions applied. This code is a new path in an application that mimics existing DX9 code, and they share a common library to perform all vector and matrix (e.g. ViewToWorld/WorldToView) operations. My test bed consists of DX9 output images that I compare against the output of the new GDI+ code. A simple test case that renders to a viewport matching the Bitmap dimensions (i.e. no zoom or pan) does match pixel-perfect (no binary diff). But as soon as the image is zoomed up (magnified), I get very minor differences in 5-10% of the pixels. The magnitude of the difference is 1/256 (occasionally 2/256). I suspect this is due to interpolation differences.

    Question: for a DX9 ortho projection (and identity world space), with a camera perpendicular to and centered on a textured quad, is it reasonable to expect DirectX.Direct3D.TextureFilter.Linear to generate identical output to a GDI+ TextureBrush filled rectangle/polygon when using the System.Drawing.Drawing2D.InterpolationMode.Bilinear setting?

    For this (magnification) case, the DX9 code is using this (MinFilter, MipFilter set similarly):

        Device.SetSamplerState(0, SamplerStageStates.MagFilter, (int)TextureFilter.Linear);

    and the GDI+ path is using:

        g.InterpolationMode = InterpolationMode.Bilinear;

    I thought that "bilinear interpolation" was a fairly specific filter definition, but then I noticed that there is another option in GDI+ for "HighQualityBilinear" (which I've tried, with no difference, which makes sense given the description's "added prefiltering for shrinking").

    Followup question: is it reasonable to expect pixel-perfect output matching between DirectX and GDI+ (assuming all external coordinates passed in are equal)? If not, why not? Finally, there are a number of other APIs I could be using (Direct2D, WPF, GDI, etc.), and this question generally applies to comparing the output of "equivalent" bilinear interpolated output images across any two of these. Thanks!
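
    For reference, both settings should be computing the same weighted average of the four nearest texels, with a and b the fractional offsets of the sample point:

        f(a, b) = (1 - a)(1 - b)*f00 + a*(1 - b)*f10 + (1 - a)*b*f01 + a*b*f11

    The formula pins down the weights but not the arithmetic: hardware samplers commonly evaluate a and b with a limited number of fractional (subtexel) bits and round differently from GDI+'s software path, which alone could plausibly account for scattered 1/256 discrepancies without either implementation being wrong.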

  • Javascript library: to obfuscate or not to obfuscate - that is the question

    - by morpheous
    I need to write a GUI-related JavaScript library. It will give my website a bit of an edge (in terms of functionality I can offer), at least until my competitors play with it long enough to figure out how to write it themselves. I can accept the fact that it will be emulated over time; that's par for the course, it's part of business. However, what I cannot bear is the idea of effectively handing all the hard work that went into the library straight over to my competitors, by shipping plain JavaScript that anyone can download and use. It is an established fact that no one in the industry I am "attacking" has this functionality, so the value of such a library is undeniable and is not up for discussion (i.e. that's not what I'm asking here).

    What I am seeking to find out are the pros and cons of obfuscating a JavaScript library, so that I can come to a final decision. Two of my biggest concerns are debugging, and subtle errors that may be introduced by the obfuscator. I would like to know:

    1) How can I manage those risks (being able to debug faulty code, ensuring/minimizing against obfuscation errors)?

    2) Are there any good quality, industry-standard obfuscators you can recommend (preferably something you use yourself)?

    3) What are your experiences of using obfuscated code in a production environment?

  • How can I superimpose modified loess lines on a ggplot2 qplot?

    - by briandk
    Background: right now, I'm creating a multiple-predictor linear model and generating diagnostic plots to assess regression assumptions. (It's for a multiple regression analysis stats class that I'm loving at the moment :-) My textbook (Cohen, Cohen, West, and Aiken 2003) recommends plotting each predictor against the residuals to make sure that:

    1) The residuals don't systematically covary with the predictor

    2) The residuals are homoscedastic with respect to each predictor in the model

    On point (2), my textbook has this to say: "Some statistical packages allow the analyst to plot lowess fit lines at the mean of the residuals (0-line), 1 standard deviation above the mean, and 1 standard deviation below the mean of the residuals....In the present case {their example}, the two lines {mean + 1sd and mean - 1sd} remain roughly parallel to the lowess {0} line, consistent with the interpretation that the variance of the residuals does not change as a function of X." (p. 131)

    How can I modify loess lines? I know how to generate a scatterplot with a "0-line":

        # First, I'll make a simple linear model and get its diagnostic stats
        library(ggplot2)
        data(cars)
        mod <- fortify(lm(speed ~ dist, data = cars))
        attach(mod)
        str(mod)

        # Now I want to make sure the residuals are homoscedastic
        qplot(x = dist, y = .resid, data = mod) +
          geom_smooth(se = FALSE)  # "se = FALSE" removes the standard error bands

    But does anyone know how I can use ggplot2 and qplot to generate plots where the 0-line, "mean + 1sd" AND "mean - 1sd" lines would be superimposed? Is that a weird/complex question to be asking?

  • Where in maven project's path should I put configuration files that are not considered resources

    - by Paralife
    I have a simple Java Maven project. One of my classes, when executing, needs to load an XML configuration file from the classpath. I don't want to package this XML file in the produced jar, but I do want to include a default XML file in a zip assembly under a conf subfolder, and I also want this default XML to be available to the unit tests to test against.

    As I see it there are two possible places for this default XML:

    1) src/main/resources/conf/default.xml

    2) src/main/conf/default.xml

    Both solutions demand special pom actions. In solution 1, I get the automatic copy to the target folder during the build, which means the file is available for testing, but I also get it in the produced jar, which I don't want. In solution 2, I get the jar as I want it (free of the XML), but I have to manually copy the XML to the target folder to make it available for testing. (I don't want to add src subfolders to the test classpath; I think that is bad practice.)

    The question is: which of the two is the best solution? If the correct one is 2, what is the best way to copy the file to the target folder? Is there any other solution better and more common than those two? (I also read http://stackoverflow.com/questions/465001/where-should-i-put-application-configuration-files-for-a-maven-project but I would like to know the most "correct" solution from the "convention over configuration" point of view, and that link provides configuration-type solutions but not convention-oriented ones. Maybe there isn't one, but I ask anyway. Also, the solutions provided there involve the AntRun plugin and the appAssembler plugin, and I wonder if I could do it without them.)

  • Force a view change from a button when using UITabBarController

    - by user342197
    Hello. When using a UITabBarController: when the user enters some data on View1 and presses a button, I need to perform some calculations and present the results on View2. I have an AppDelegate, View1Controller, View2Controller, and View3Controller (View3 is a basically static view). My AppDelegate declares UITabBarController *rootController;.

    On View1, I have the calculations being performed in an IBAction for buttonPressed; however, I can't seem to force the view to switch to View2 programmatically. I have done a lot of searching for similar problems, and think I should be doing something like self.rootController.selectedIndex = 1; however, when I do this from within buttonPressed in my View1Controller, I get the error "request for member rootController in something not in a structure or union". I think I'm missing something basic here... I probably need to do something with my AppDelegate, but I'm banging my head against the wall. Can anyone provide some guidance for this situation, like the key things I should do in the View1Controller header and implementation with reference to my AppDelegate? Thank you!

  • SVN checkout or export for production environment?

    - by Eran Galperin
    In a project I am working on, we have an ongoing discussion amongst the dev team: should the production environment be deployed as a checkout from the SVN repository or as an export? The development environment is obviously a checkout, since it is constantly updated. For production, I'm personally for checking out the main trunk, since it makes future updates easier (just run svn update). However, some of the devs are against it, as svn creates files with the group/owner and permissions of the svn process (this is on a Linux OS, so those things matter), and having the .svn directories on the production server also strikes them as somewhat dirty.

    Also, if it is a checkout, how do you push individual features to production without including in-development code? Do you use tags, or branch out for each feature? Any alternatives?

    EDIT: I might not have been clear: one of the requirements is to be able to constantly push fixes to the production environment. We want to avoid a complete build (which takes much longer than a simple update) just for pushing critical fixes.

  • How to include all objects of an archive in a shared object?

    - by Didier Trosset
    When compiling our project, we create several archives (static libraries), say liby.a and libz.a, each containing an object file defining a function y_function() and z_function(). Then these archives are joined in a shared object, say libyz.so, that is one of our main distributable targets:

        g++ -fPIC -c -o y.o y.cpp
        ar cr liby.a y.o
        g++ -fPIC -c -o z.o z.cpp
        ar cr libz.a z.o
        g++ -shared -L. -ly -lz -o libyz.so

    When linking this shared object into the example program, say x.c, the link fails because of undefined references to the functions y_function() and z_function():

        g++ x.o -L. -lyz -o xyz

    It works, however, when I link the final executable directly against the archives (static libraries):

        g++ x.o -L. -ly -lz -o xyz

    My guess is that the object files contained in the archives are not linked into the shared library because they are not used in it. How do I force inclusion?

    Edit: Inclusion can be forced using the --whole-archive ld option. But it results in link errors:

        g++ -shared '-Wl,--whole-archive' -L. -ly -lz -o libyz.so
        /usr/lib/libc_nonshared.a(elf-init.oS): In function `__libc_csu_init':
        (.text+0x1d): undefined reference to `__init_array_end'
        /usr/bin/ld: /usr/lib/libc_nonshared.a(elf-init.oS): relocation R_X86_64_PC32 against undefined hidden symbol `__init_array_end' can not be used when making a shared object
        /usr/bin/ld: final link failed: Bad value

    Any idea where this comes from?
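
    A note on the edit: --whole-archive stays in effect for every archive that follows it on the link line, including the libc_nonshared.a that the g++ driver appends behind the scenes, which is where the __init_array_end error comes from. The usual idiom is to switch the option off again right after your own archives:

        g++ -shared -Wl,--whole-archive -L. -ly -lz -Wl,--no-whole-archive -o libyz.so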

  • Pros and cons of each JEE server for developing within Eclipse

    - by Thorbjørn Ravn Andersen
    Eclipse JEE has a lot of server adapters allowing development against many different application servers like JBoss, Glassfish and WebSphere. Frequently you can benefit from using a different server for developing new features than for production, simply because it may be able to deploy changes much faster; when the functionality is in place, you can iron out bugs on the production platform. Unfortunately, finding that server is a time-consuming process, where others' experience is invaluable.

    If you have experience with any server with an Eclipse server adapter, please add your findings and your recommendation. I believe that the following is of interest:

    - Does saving a file trigger an update in the server, giving save-then-reload-the-browser turnaround?

    - How fast is a deployment? (Of a saved JSP? A Java class? A static file?)

    - Can the actual server be downloaded by the server adapter wizard, allowing for easy installation?

    - Are there known bugs and issues with suitable workarounds?

    - Is debugging fully supported? Is profiling?

    - Would you recommend this server?

    Note: Eclipse can also work with Tomcat, but that is a web container, which cannot deploy EAR files.

  • Could my forms be hacked?

    - by Mike Sandman
    Hi there, I posted a question yesterday, which I intend to get back to today however I wrote some JavaScript as a first line of prevention against XSS. However when testing this on my live server I catch some invalid input as the javascript catches the php section. My form uses post and php isn't in my form items (i haven't typed it in). Could this be picking up the form action or something? I'm baffeled, Any ideas Here is my code, it is triggered on the submit button. function validateForBadNess(){ var theShit = new Array("*","^", "$", "(",")","{", "}","[", "]","\", "|", "'","/","?",",","=","","gt","lt", "<","script","`","´","php"); var tagName = new Array(); tagName[0] = "input"; tagName[1] = "select"; tagName[2] = "textbox"; tagName[3] = "textarea"; for (ms=0;ms // loop through the elements of the form var formItems = document.getElementsByTagName(tagName[ms]); for (var xs=0;xs var thisString = formItems[xs].value; // loop through bad array for (zs in theShit){ //alert(thisString + " " + thisString.indexOf(theShit[zs])) if(thisString.indexOf(theShit[zs]) >= 0){ alert("Sorry but the following character: " + theShit[zs] + " is not permitted. Please omit it from your input.\nIf this is part of your password please contact us to heave your password reset.") return false; } } // loop for formitems } // tagName toop } // original condition }

  • Get Function Pointer to function in a shared library I didn't directly load

    - by bdk
    My Linux application (A) links against a third-party shared library (B) which I don't have source code to. This library makes use of another third-party shared library that I don't have source code to (C). I believe that (B) uses dlopen to access (C) instead of linking directly: ldd on (B) does not show (C), and objdump -x on (B) shows references to dlopen/dlclose/dlsym.

    My requirement is that, in my code for (A), I need to get a function pointer to a function foo() located in (C). Normally I'd use dlsym for this, but I need to pass it the handle returned from dlopen, which I don't have, since (B) does not expose it.

    For the larger context: I need to modify the function in (C) such that every time it calls its helper function bar() (also located in (C)), it also calls a function with the same signature located in (A), with the same parameters. Basically I want to inject my code into the foo()->bar() code path of (C). I believe I've found a way to accomplish this using gdb, but in porting my gdb command list I'm stuck on the step of getting the function pointer. I'm also open to alternatives that accomplish the same task, rather than the exact problem as stated above.

    Edit: After writing this I realized I can probably just do another dlopen on the file in my code, and the symbols returned via dlsym on that handle should be the same as those received via the original dlopen, if I'm reading the dlopen man page correctly. However, I'm still interested in advice or assistance with the larger context, if there's a better way to go about this.
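
    On the edit: that reading of the man page is right; dlopen on an already-loaded library hands back the existing handle with its reference count bumped, and dlsym on it resolves the same symbols. glibc even has a flag that makes the intent explicit. A minimal sketch (the library and symbol names stand in for (C) and foo(); link with -ldl):

        #define _GNU_SOURCE
        #include <dlfcn.h>
        #include <stdio.h>

        typedef int (*foo_fn)(int);

        int main(void)
        {
            /* RTLD_NOLOAD (a glibc extension): return the existing handle if
               the library is already mapped, NULL otherwise; never loads it */
            void *h = dlopen("libC.so", RTLD_LAZY | RTLD_NOLOAD);
            if (h == NULL) {
                fprintf(stderr, "libC.so is not loaded in this process\n");
                return 1;
            }
            foo_fn foo = (foo_fn) dlsym(h, "foo");
            if (foo == NULL) {
                fprintf(stderr, "foo not found: %s\n", dlerror());
                return 1;
            }
            /* ... use foo ... */
            return 0;
        }

    For the interposition part, note that an LD_PRELOAD wrapper around bar() only fires if foo() calls bar() through the dynamic symbol table; if the call is bound internally inside (C), gdb-style patching is still needed.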

  • Advanced All In One .NET Framework

    - by alfredo dobrekk
    Hi, I'm starting a new project that would basically take input from the user and save it to a database, across about 30 screens, and I would like to find a framework that provides the maximum number of these features out of the box:

    - .NET, C#, Windows Forms

    - unit testing, continuous integration

    - screens with lists, combo boxes, text boxes, and add/delete/save/cancel actions that are easy to update when you add a property to your classes or a field to your database

    - auto-completion on controls to help the user find his way

    - use of an ORM like NHibernate

    - easy multithreading and display of wait screens for the user

    - easy undo/redo

    - tabbed child windows

    - search forms

    - ability to grant access to some functionality according to user profiles

    - MVP/MVVM or whatever design patterns

    - either code generation from the database to C# classes, or generation of the database schema from the C# classes

    - some kind of database versioning/upgrade to easily update the database when I release patches to the application once in production

    - automatic control resizing

    - code metrics analysis

    - a code generator I can use against my entities that would generate rough forms I can rearrange afterwards

    - a code documentation generator

    - ...

    Any ideas? I know it's a lot, but I would really like to build upon existing code so I can focus on business rules. Would splitting the requirements across 3 or 4 existing open source frameworks be possible? Do you have any suggestions to add to the list before starting? What open source tools would you use to achieve these?

  • xsd.exe creating invalid constraints in dataset from xsd file

    - by Dan Neely
    I have a sequence with an allowed minimum length of zero in my xsd. When I try to load an XML file which doesn't have any elements of the sequence into the DataSet that xsd.exe created, I get an exception indicating that my file violated one of the DataSet's constraints. The XML file validates against the schema, so I know it's valid. Is there anything I can do to make the tool generate a valid DataSet?

        <xs:sequence minOccurs="0" maxOccurs="unbounded">
            <xs:element name="Numbers" type="xs:double"/>
        </xs:sequence>

    Edit: If I change my schema to the following, the generated code works properly. It looks wrong to me, though, since it appears to imply that I could have sequence items with nothing in them, which doesn't make any sense.

        <xs:sequence minOccurs="0" maxOccurs="unbounded">
            <xs:element name="Numbers" type="xs:double" minOccurs="0"/>
        </xs:sequence>

  • Optimizing TSQL code

    - by adopilot
    My job is to maintain one application which makes heavy use of SQL Server (MSSQL 2005). Until now, the middle server has stored T-SQL code in XML and sent dynamic T-SQL queries without using stored procs. As I am able to change those XML queries, I want to migrate most of my queries to stored procs. The question is the following: most of my queries have the same WHERE conditions against one table. Sample:

        select ..... from ....
        where ....
          and (a.vrsta_id = @vrsta_id or @vrsta_id = 0)
          and (a.podvrsta_id = @podvrsta_id or @podvrsta_id = 0)
          and (a.podgrupa_2 = @podgrupa2_id or @podgrupa2_id = 0)
          and ((a.id in (select art_id from osobina_veze
                         where podosobina_id in (select ado from dbo.fn_ado_param_int(@podosobina))
                         group by art_id
                         having count(art_id) = @podosobina_count))
               or ('0' = @podosobina))

    They also have the same WHERE conditions on other tables. How should I organize my code? What is the proper way? Should I:

    - make a table-valued function that I will use in all queries,

    - use #temp tables and simply inner join them into each query every time the proc executes,

    - use a #temp table filled by a table-valued function,

    - leave all queries with this large WHERE clause and hope the index does its job, or

    - use a WITH statement (CTE)?
