Search Results

Search found 6992 results on 280 pages for 'exist'.

Page 224/280

  • why won't Eclipse use the compiler I specify for my project?

    - by codeman73
    I'm using Eclipse 3.3. In my project, I've set the compiler compliance level to 5.0. In the build path for the project, I've added the Java 1.5 JDK that I registered in the Installed JREs section, and am referencing that System Library. However, I'm getting compile errors for a class that implements PreparedStatement because it does not implement abstract methods that only exist in the Java 1.6 PreparedStatement, specifically setAsciiStream(int, InputStream, long) and setAsciiStream(int, InputStream). Strangely enough, it worked when we were compiling it against Java 1.4, which it was originally written for. We added the JRE for Java 1.4, referenced that system library in the project, set the project's compiler level to 1.4, and it works fine. But when I make the same changes to point to Java 5.0, it uses Java 6 instead. Any ideas why? I wrote a similar question earlier, here: http://stackoverflow.com/questions/2540548/how-do-i-get-eclipse-to-use-a-different-compiler-version-for-java I know how you're supposed to choose a different compiler, but it seems Eclipse isn't taking it. It seems to be defaulting to Java 6, even though I have deleted all the Java 6 JDKs and JREs I could find. I've also updated the -vm option in my eclipse.ini to point to the Java 5 JDK.

    Read the article

  • Entity Framework - Insert/Update new entity with child-entities

    - by Christina Mayers
    I have found many questions here on SO and articles all over the internet, but none really tackled my problem. My model looks like this (I stripped all non-essential properties). Every day or so "Play" gets updated (via an XML file containing the information): internal Play ParsePlayInfo(XDocument doc) { Play play = (from p in doc.Descendants("Play") select new Play { Theatre = new Theatre() { //Properties }, //Properties LastUpdate = DateTime.Now }).SingleOrDefault(); var actors = (from a in doc.XPathSelectElement(".//Play//Actors").Nodes() select new Lecturer() { //Properties }); var parts = (from p in doc.XPathSelectElement(".//Play//Parts").Nodes() select new Part() { //Properties }).ToList(); foreach (var item in parts) { play.Parts.Add(item); } var reviews = (from r in doc.XPathSelectElement(".//Play//Reviews").Nodes() select new Review { //Properties }).ToList(); for (int i = 0; i < reviews.Count(); i++) { PlayReviews pR = new PlayReviews() { Review = reviews[i], Play = play, //Properties }; play.PlayReviews.Add(pR); } return play; } If I add this "play" via Add(), every child object of Play will be inserted, regardless of whether some already exist. Since I need to update existing entries, I have to do something about that. As far as I can tell I have the following options: (1) add/update the child entities in my PlayRepository's Add method; (2) restructure and rewrite ParsePlayInfo() so that it gets all the child entities first, adds or updates them, and then creates a new Play. The only problem I have here is that I wanted ParsePlayInfo() to be persistence ignorant. I could work around this by creating multiple parse methods (e.g. ParseActors()) and assigning them to the play in my controller (I'm using ASP.NET MVC) after everything was parsed and added. Currently I am implementing option 1, but it feels wrong. I'd appreciate it if someone could guide me in the right direction on this one.
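
    A minimal sketch of option 1, assuming an EF context with DbSet-style collections (PlayContext, Theatres, Parts and the Name/Title lookups below are illustrative, not from the original post): before adding the parsed Play, swap each child for the already-tracked entity when one exists, so only genuinely new children are inserted.

      using System.Linq;

      public void AddOrUpdatePlay(Play parsed)
      {
          using (var context = new PlayContext())   // hypothetical EF context
          {
              // Reuse the Theatre if one with the same name is already stored.
              var theatre = context.Theatres
                  .FirstOrDefault(t => t.Name == parsed.Theatre.Name);
              if (theatre != null)
                  parsed.Theatre = theatre;

              // Same idea for Parts: replace parsed children with tracked ones.
              var resolvedParts = parsed.Parts
                  .Select(p => context.Parts.FirstOrDefault(x => x.Title == p.Title) ?? p)
                  .ToList();
              parsed.Parts.Clear();
              foreach (var part in resolvedParts)
                  parsed.Parts.Add(part);

              context.Plays.Add(parsed);   // swapped-in children are not re-inserted
              context.SaveChanges();
          }
      }

    The same pattern extends to Actors and Reviews; whether a name/title lookup is the right identity check is a design decision for your data.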

    Read the article

  • Maven build fails on an Ant FTP task failure

    - by fraido
    I'm using the FTP Ant task with maven-antrun-plugin <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-antrun-plugin</artifactId> <executions> <execution> <id>ftp</id> <phase>generate-resources</phase> <configuration> <tasks> <ftp action="get" server="${ftp.server.ip}" userid="${ftp.server.userid}" password="${ftp.server.password}" remotedir="${ftp.server.remotedir}" depends="yes" verbose="yes" skipFailedTransfers="true" ignoreNoncriticalErrors="true"> <fileset dir="target/test-classes/testdata"> <include name="**/*.html" /> </fileset> </ftp> </tasks> </configuration> <goals> <goal>run</goal> </goals> </execution> </executions> ... the problem is that my build fails when the folder ${ftp.server.remotedir} doesn't exist. I tried to specify skipFailedTransfers="true" and ignoreNoncriticalErrors="true", but these don't fix the problem and the build keeps failing. An Ant BuildException has occured: could not change remote directory: 550 /myBadDir: The system cannot find the file specified. Do you know how to instruct my Maven build not to fail on this Ant task error?
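
    One way to keep the build going, sketched here and not tested against this setup, is to add ant-contrib as a plugin dependency and wrap the <ftp> call in its <trycatch> task, so a missing remote directory is logged rather than failing the build (the ant-contrib version is an assumption):

      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-antrun-plugin</artifactId>
        <dependencies>
          <!-- provides the <trycatch> task; the version is an assumption -->
          <dependency>
            <groupId>ant-contrib</groupId>
            <artifactId>ant-contrib</artifactId>
            <version>1.0b3</version>
          </dependency>
        </dependencies>
        <executions>
          <execution>
            <id>ftp</id>
            <phase>generate-resources</phase>
            <configuration>
              <tasks>
                <taskdef resource="net/sf/antcontrib/antlib.xml" />
                <trycatch>
                  <try>
                    <!-- the original <ftp> task, unchanged, goes here -->
                  </try>
                  <catch>
                    <echo>Remote directory missing; continuing without the downloaded files.</echo>
                  </catch>
                </trycatch>
              </tasks>
            </configuration>
            <goals>
              <goal>run</goal>
            </goals>
          </execution>
        </executions>
      </plugin>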

    Read the article

  • Association and model data saving problem

    - by Zhlobopotam
    Developing with CakePHP 1.3 (latest from GitHub). There are 2 models bound with hasAndBelongsToMany: documents and tags. In other words, a document can have many tags. I've added a new document submission form where the user can enter a list of tags separated by commas (a new tag will be added if it does not already exist). I looked at the CakePHP Bakery 2.0 source code on GitHub and found a solution, but it seems that something is wrong. class Document extends AppModel { public $hasAndBelongsToMany = array('Tag'); public function beforeSave($options = array()) { if (isset($this->data[$this->alias]['tags']) && !empty($this->data[$this->alias]['tags'])) { $tagIds = $this->Tag->saveDocTags($this->data[$this->alias]['tags']); unset($this->data[$this->alias]['tags']); $this->data[$this->Tag->alias][$this->Tag->alias] = $tagIds; } return true; } } class Tag extends AppModel { public $hasAndBelongsToMany = array ('Document'); public function saveDocTags($commalist = '') { if ($commalist == '') return null; $tags = explode(',',$commalist); if (empty($tags)) return null; $existing = $this->find('all', array( 'conditions' => array('title' => $tags) )); $return = Set::extract($existing,'/Tag/id'); if (sizeof($existing) == sizeof($tags)) { return $return; } $existing = Set::extract($existing,'/Tag/title'); foreach ($tags as $tag) { if (!in_array($tag, $existing)) { $this->create(array('title' => $tag)); $this->save(); $return[] = $this->id; } } return $return; } } So, new tag creation works well, but the document model can't save the association data and reports: SQL Error: 1054: Unknown column 'Array' in 'field list' Query: INSERT INTO documents (title, content, shortnfo, date, status) VALUES ('Document with tags', '', '', Array, 1) Any ideas how to solve this problem?

    Read the article

  • Maven compile plugin

    - by phanikiran
    Hi everybody, in the POM of a project I have added a dependency with scope compile. The dependency is a jar file that contains some class files and also other jars. My Java file needs the internal jars of that dependent jar to compile, but the Maven compile goal returns a compilation error. All the jars needed to compile are inside the single jar file that is added as the dependency. Please help me! My POM: <dependency> <groupId>eagle</groupId> <artifactId>zkui</artifactId> <version>360LTS</version> <type>jar</type> <scope>compile</scope> </dependency> <build> <sourceDirectory>./src/main/java/</sourceDirectory> <outputDirectory>./target/classes/</outputDirectory> <finalName>${project.groupId}-${project.artifactId}</finalName> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>2.2</version> <configuration> <source>1.6</source> <target>1.6</target> </configuration> </plugin> </plugins> </build> </project> The error is: package org.zkoss.zk.ui does not exist. This package org.zkoss.zk.ui is in the jar file zkex.jar, which is inside the dependency jar eagle:zkui:360LTS. Thanks in advance.
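
    Maven cannot put jars that are nested inside another jar on the compile classpath; each inner jar has to be a dependency in its own right. A hedged sketch (the coordinates below are made up; pick whatever fits your repository conventions): install zkex.jar into the local or corporate repository, then declare it alongside the existing dependency.

      mvn install:install-file -Dfile=lib/zkex.jar \
          -DgroupId=eagle -DartifactId=zkex \
          -Dversion=360LTS -Dpackaging=jar

    and then in the POM:

      <dependency>
        <groupId>eagle</groupId>
        <artifactId>zkex</artifactId>
        <version>360LTS</version>
        <scope>compile</scope>
      </dependency>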

    Read the article

  • ASP.NET error on Bitmap.Save "Exception (0x80004005): A generic error occurred in GDI+."

    - by Batu
    Hi, I have a function which first reads an image from disk, resizes it, and then saves it to another directory. When I use Bitmap.Save(directory + theimagename) it returns the error stated in the question title. I checked that the directory is right and that the given image name doesn't exist in that directory yet. What is weird is that the same code works great on the local machine, but when I upload it to my shared server it just doesn't work. The code is below. bmpOut = new Bitmap(Size, Size); Graphics g = Graphics.FromImage(bmpOut); g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic; g.FillRectangle(Brushes.White, 0, 0, Size, Size); int topBottomPadding = 0; int leftRightPadding = 0; if (Size > lnNewWidth + 1) leftRightPadding = Convert.ToInt32((Size - lnNewWidth) / 2); else if (Size > lnNewHeight + 1) topBottomPadding = Convert.ToInt32((Size - lnNewHeight) / 2); g.DrawImage(loBMP, leftRightPadding, topBottomPadding, lnNewWidth, lnNewHeight); Bitmap bmp = new Bitmap(bmpOut); if (bmp != null) bmp.Save(ResizedOutput); bmp.Dispose(); bmpOut.Dispose(); g.Dispose(); loBMP.Dispose(); stack trace: [ExternalException (0x80004005): A generic error occurred in GDI+.] System.Drawing.Image.Save(String filename, ImageCodecInfo encoder, EncoderParameters encoderParams) +377630 System.Drawing.Image.Save(String filename, ImageFormat format) +69 System.Drawing.Image.Save(String filename) +25 Utilities.ResizeImage(String fileName, String mode) in c:\inetpub\vhosts\batuhanakcay.com\httpdocs\App_Code\Utilities.cs:181 Link.ToProductImage(String fileName) in c:\inetpub\vhosts\batuhanakcay.com\httpdocs\App_Code\Link.cs:79 Product.PopulateControls(ProductDetails pd) in c:\inetpub\vhosts\batuhanakcay.com\httpdocs\Product.aspx.cs:37 Product.Page_Load(Object sender, EventArgs e) in c:\inetpub\vhosts\batuhanakcay.com\httpdocs\Product.aspx.cs:20
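
    On shared hosting this GDI+ error is most often a path or permission problem rather than an imaging one. A small hedged sketch (ResizedOutput is assumed to be the full physical path, e.g. from Server.MapPath): make sure the folder exists before saving, and surface a clearer error if the worker process cannot write there.

      using System.IO;
      using System.Drawing.Imaging;
      using System.Runtime.InteropServices;

      string dir = Path.GetDirectoryName(ResizedOutput);
      if (!Directory.Exists(dir))
          Directory.CreateDirectory(dir);   // a missing folder surfaces as the generic GDI+ error

      try
      {
          // naming an explicit format avoids another common cause of this exception
          bmpOut.Save(ResizedOutput, ImageFormat.Jpeg);
      }
      catch (ExternalException ex)
      {
          // typically the application pool identity lacks write access to this folder
          throw new InvalidOperationException("Could not write " + ResizedOutput, ex);
      }

    If it still fails, granting the application pool user modify rights on the output folder is usually the fix the host has to apply.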

    Read the article

  • Quitting an application - is that frowned upon?

    - by Ted
    Moving on in my attempt to learn Android I just read the following: Question: Does the user have a choice to kill the application unless we put a menu option in to kill it? If no such option exists, how does the user terminate the application? Answer (Romain Guy): The user doesn't, the system handles this automatically. That's what the activity lifecycle (especially onPause/onStop/onDestroy) is for. No matter what you do, do not put a "quit" or "exit" application button. It is useless with Android's application model. This is also contrary to how core applications work. Hehe, for every step I take in the Android world I run into some sort of problem =( Apparently, you cannot quit an application in Android (but Android can very well totally destroy your app whenever it feels like it). What's up with that? I am starting to think that it's impossible to write an app that functions as a "normal app", one that the user can quit when he/she decides to do so. That is not something the OS should be relied upon to do. The application I am trying to create is not an application for the Android Market. It is not an application for "wide use" by the general public; it is a business app that is going to be used in a very narrow business field. I was actually really looking forward to developing for the Android platform, since it addresses a lot of issues that exist in Windows Mobile and .NET. However, the last week has been somewhat of a turnoff for me... I hope I don't have to abandon Android, but it doesn't look very good right now =( Is there a way for me to really quit the application?
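
    For a line-of-business app that genuinely wants a "quit" action, the closest the platform offers is finishing the activity or backgrounding the whole task; a minimal sketch, with the handler name being hypothetical:

      // Inside your Activity; wire this to the "quit" menu item or button.
      public void onQuitSelected() {
          finish();                // closes this Activity and unwinds the back stack normally
          // or, to hide the entire task without destroying it:
          // moveTaskToBack(true);
      }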

    Read the article

  • Assigning console.log to another object (Safari issue)

    - by Trevor Burnham
    I wanted to keep my logging statements as short as possible while preventing console from being accessed when it doesn't exist; I came up with the following solution: var _ = {}; if (console) { _.log = console.debug; } else { _.log = function() { } } To me, this seems quite elegant, and it works great in Firefox 3.6 (including preserving the line numbers that make console.debug more useful than console.log). But it doesn't work in Safari 4. (Haven't tested in other browsers yet.) If I follow the above with console.debug('A') _.log('B'); the first statement works fine in both browsers, but the second generates a "TypeError: Type Error" in Safari. Is this just a difference between how Firebug and the Safari Web Developer Tools implement console? If so, it is VERY annoying on Apple's part. (I get the same results in both browsers if I bind the console function to a prototype and then instantiate, rather than binding it directly to the object.) I could, of course, just call console.debug from an anonymous function assigned to _.log, but then I'd lose my line numbers. Any other ideas?
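
    A likely explanation, and a hedged sketch of a workaround: Safari's console methods expect to be invoked with console as the receiver, so copying the bare function reference loses that binding. Wrapping the call keeps it working everywhere, at the cost of the Firebug call-site line numbers:

      var _ = {};
      if (window.console && console.debug) {
        _.log = function () {
          // invoke with console as `this`, which Safari requires
          console.debug.apply(console, arguments);
        };
        // where Function.prototype.bind exists, this keeps a direct reference instead:
        // _.log = console.debug.bind(console);
      } else {
        _.log = function () {};
      }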

    Read the article

  • Assembly unavailable after Web.config change

    - by tags2k
    I'm using a custom framework that uses reflection to do a GetTypeByName(string fullName) on the fully-qualified type name that it gets from the database, to create an instance of said type and add it to the page, resulting in a standard modular kind of thing. GetTypeByName is a utility function of mine that simply iterates through Thread.GetDomain().GetAssemblies(), then performs an assembly.GetType(fullName) to find the relevant type. Obviously this result gets cached for future reference and speed. However, I'm experiencing some issues whereby if the web.config gets updated (and, in some scarier instances if the application pool gets recycled) then it will lose all knowledge of certain assemblies, resulting in the inability to render an instance of the module type. Debugging shows that the missing assembly literally does not exist in the current thread assemblies list. To get around this I added a second check which is a bit dirty but recurses through the /bin/ directory's DLLs and checks that each one exists in the assemblies list. If it doesn't, it loads it using Assembly.Load and fixing the context issue thanks to 'Solving the Assembly Load Context Problem'. This would work, only it seems that (and I'm aware this shouldn't be possible) some projects still have access to the missing assembly, for example my actual web project rather than the framework itself - and it then complains that duplicate references have been added! Has anyone ever heard of anything like this, or have any ideas why an assembly would simply drop out of existence on a config change? Short of a solution, what is the most elegant workaround to get all the assemblies in the bin to reload? It needs to be all in one "hit" so that the site visitors don't see any difference other than a small delay, so an app_offline.htm file is out of the question. Programatically renaming a DLL in the bin and then naming it back does work, but requires "modify" permissions for the IIS user account, which is insane. Thanks for any pointers the community can gather!

    Read the article

  • Listen to double click not click

    - by Mohsen
    I'm just wondering why the click event fires when I double-click an element. I have this code: (JSBin) HTML <p id="hello">Hello World</p> JavaScript document.getElementById('hello').addEventListener('click', function(e){ e.preventDefault(); this.style.background = 'red'; }, false); document.getElementById('hello').addEventListener('dbclick', function(){ this.style.background = 'yellow'; }, false); It should do different things for click and double click, but it seems that when you double-click on the p it catches the click event first and ignores the double click. I tried preventDefault on the click event too. How can I listen for just dblclick? UPDATE: I had a typo in my code. dbclick is wrong; it's dblclick. Anyway, the problem still exists. When the user double-clicks, the click event happens. This is updated code that proves it: (JSBin) document.getElementById('hello').addEventListener('click', function(e){ e.preventDefault(); this.style.background = 'red'; this.innerText = "Hello World clicked"; }, false); document.getElementById('hello').addEventListener('dblclick', function(){ this.style.background = 'green'; }, false);
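
    Browsers always fire click (twice) before dblclick, so the usual workaround is to delay the single-click action briefly and cancel it when a second click arrives; a sketch, with the 250 ms window being an arbitrary choice:

      var hello = document.getElementById('hello');
      var clickTimer = null;

      hello.addEventListener('click', function () {
        var el = this;
        if (clickTimer) return;               // second click of a double-click
        clickTimer = setTimeout(function () {
          clickTimer = null;
          el.style.background = 'red';        // single-click action
        }, 250);
      }, false);

      hello.addEventListener('dblclick', function () {
        clearTimeout(clickTimer);             // suppress the pending single-click action
        clickTimer = null;
        this.style.background = 'green';      // double-click action
      }, false);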

    Read the article

  • File Storage for Web Applications: Filesystem vs DB vs NoSQL engines

    - by El Yobo
    I have a web application that stores a lot of user-generated files. Currently these are all stored on the server filesystem, which has several downsides for me. When we move "folders" (as defined by our application) we also have to move the files on disk (although this is more due to strange design decisions on the part of the original developers than a requirement of storing things on the filesystem). It's hard to write tests for file system actions; I have a mock filesystem class that logs actions like move, delete etc., without performing them, which more or less does the job, but I don't have 100% confidence in the tests. I will be adding some other jobs which need to access the files from other services to perform additional tasks (e.g. indexing in Solr, generating thumbnails, movie format conversion), so I need to get at the files remotely. Doing this over network shares seems dodgy... Dealing with permissions on the filesystem has sometimes given us problems in the past, although now that we've moved to a pure Linux environment this should be less of an issue. What are the downsides of storing files as BLOBs in MySQL? I guess that it would massively increase the database size and reduce the effectiveness of caches, but are there other problems? Do the same problems exist with NoSQL systems like Cassandra? Does anyone have any other suggestions that might be appropriate?

    Read the article

  • Protocol specific channel handlers

    - by Mickael Marrache
    I'm writing an application server that will receive SIP and DNS messages from the network. When I receive a message from the network, I understand from the documentation that at first, I get a ChannelBuffer. I would like to determine which kind of message has been received (SIP or DNS) and to decode it. To determine the message type, I can dedicate port to each type of message, but I would be interested to know if there exist another solution for that. My question is more about how to decode the ChannelBuffer. Is there a ChannelHandler provided by Netty to decode SIP or DNS messages? If not, what would be the right place in the type hierarchy to write my custom ChannelHandler? To illustrate my question, let's take as example the HttpRequestDecoder, the hierarchy is: java.lang.Object org.jboss.netty.channel.SimpleChannelUpstreamHandler org.jboss.netty.handler.codec.frame.FrameDecoder org.jboss.netty.handler.codec.replay.ReplayingDecoder<HttpMessageDecoder.State> org.jboss.netty.handler.codec.http.HttpMessageDecoder org.jboss.netty.handler.codec.http.HttpRequestDecoder Also, do I need to use two different ChannelHandler for decoding and encoding, or is there a possibility to use a single ChannelHandler for both? Thanks

    Read the article

  • Win32 DLL importing issues (DllMain)

    - by brady
    I have a native DLL that is a plug-in to a different application (one that I have essentially zero control of). Everything works just great until I link with an additional .lib file (links my DLL to another DLL named ABQSMABasCoreUtils.dll). This file contains some additional API from the parent application that I would like to utilize. I haven't even written any code to use any of the functions exported but just linking in this new DLL is causing problems. Specifically I get the following error when I attempt to run the program: The application failed to initialize properly (0xc0000025). Clock on OK to terminate the application. I believe I have read somewhere that this is typically due to a DllMain function returning FALSE. Also, the following message is written to the standard output: ERROR: Memory allocation attempted before component initialization I am almost 100% sure this error message is coming from the application and is not some type of Windows error. Looking into this a little more (aka flailing around and flipping every switch I know of) I linked with /MAP turned on and found this in the resulting .map file: 0001:000af220 ??3@YAXPEAX@Z 00000001800b0220 f ABQSMABasCoreUtils_import:ABQSMABasCoreUtils.dll 0001:000af226 ??2@YAPEAX_K@Z 00000001800b0226 f ABQSMABasCoreUtils_import:ABQSMABasCoreUtils.dll 0001:000af22c ??_U@YAPEAX_K@Z 00000001800b022c f ABQSMABasCoreUtils_import:ABQSMABasCoreUtils.dll 0001:000af232 ??_V@YAXPEAX@Z 00000001800b0232 f ABQSMABasCoreUtils_import:ABQSMABasCoreUtils.dll If I undecorate those names using "undname" they give the following (same order): void __cdecl operator delete(void * __ptr64) void * __ptr64 __cdecl operator new(unsigned __int64) void * __ptr64 __cdecl operator new[](unsigned __int64) void __cdecl operator delete[](void * __ptr64) I am not sure I understand how anything from ABQSMABasCoreUtils.dll can exist within this .map file or why my DLL is even attempting to load ABQSMABasCoreUtils.dll if I don't have any code that references this DLL. Can anyone help me put this information together and find out why this isn't working? For what it's worth I have confirmed via "dumpbin" that the parent application imports the same DLL (ABQSMABasCoreUtils.dll), so it is being loaded no matter what. I have also tried delay loading this DLL in my DLL but that did not change the results.

    Read the article

  • PHP, MySQL: mysql substitute for php in_array function

    - by Devner
    Hi all, Say I have an array and I want to check if an element is a part of that array; I can go ahead and use in_array( needle, haystack ) to determine the result. I am trying to find the MySQL equivalent of this for my purpose. Now you might have an instant answer for me and you might be tempted to say "Use IN". Yes, I can use IN, but that's not fetching the desired results. Let me explain with an example: I have a column called "pets" in a DB table. For a record, it has the value: Cat, dog, Camel (yes, the column data is a comma-separated value). Consider that this row has an id of 1. Now I have a form where I can enter a value in the form input and use that value to check against the value in the DB. So say I enter the following comma-separated value in the form input: CAT, camel (yes, CAT is uppercase and intentional, as some users tend to enter it that way). Now when I enter the above info in the form input and submit, I can collect the POST'ed info and use the following query: $search = $_POST['pets']; $sql = "SELECT id FROM table WHERE pets IN ('$search') "; The above query is not fetching me the row that already exists in the DB (remember the record which has Cat, dog, Camel as the value for the pets column?). I am trying to get the records to act as a superset and the values from the form input as subsets. So in this case I am expecting the id value to show up, as the values exist in the column, but this is not happening. Now say I enter just CAT as the form input and perform the search; it should show me the ID 1 row. Now say I enter just camel, cAT as the form input and perform the search; it should show me the ID 1 row. How can I achieve the above? Thank you.
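
    While the data stays comma-separated, MySQL's FIND_IN_SET() is the closest server-side analogue to in_array(). A hedged sketch (an existing $mysqli connection is assumed; REPLACE/LOWER normalise the "Cat, dog, Camel" spacing and case): build one FIND_IN_SET condition per entered pet.

      // $mysqli is assumed to be an open mysqli connection.
      $search = $_POST['pets'];                          // e.g. "CAT, camel"
      $pets   = array_filter(array_map('trim', explode(',', $search)));

      $conditions = array();
      foreach ($pets as $pet) {
          $pet = $mysqli->real_escape_string(strtolower($pet));
          // REPLACE(...) turns "Cat, dog, Camel" into "cat,dog,camel" so FIND_IN_SET can match
          $conditions[] = "FIND_IN_SET('$pet', REPLACE(LOWER(pets), ', ', ','))";
      }

      if (!empty($conditions)) {
          $sql    = "SELECT id FROM `table` WHERE " . implode(' AND ', $conditions);
          $result = $mysqli->query($sql);
      }

    Longer term, a pets lookup table plus a join table avoids this kind of string matching entirely.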

    Read the article

  • Curve fitting: Find a CDF (or any function) that satisfies a list of constraints.

    - by dreeves
    I have some constraints on a CDF in the form of a list of x-values and for each x-value, a pair of y-values that the CDF must lie between. We can represent that as a list of {x,y1,y2} triples such as constraints = {{0, 0, 0}, {1, 0.00311936, 0.00416369}, {2, 0.0847077, 0.109064}, {3, 0.272142, 0.354692}, {4, 0.53198, 0.646113}, {5, 0.623413, 0.743102}, {6, 0.744714, 0.905966}} Graphically that looks like this: And since this is a CDF there's an additional implicit constraint of {Infinity, 1, 1} Ie, the function must never exceed 1. Also, it must be monotone. Now, without making any assumptions about its functional form, we want to find a curve that respects those constraints. For example: (I cheated to get that one: I actually started with a nice log-normal distribution and then generated fake constraints based on it.) One possibility is a straight interpolation through the midpoints of the constraints: mids = ({#1, Mean[{#2,#3}]}&) @@@ constraints f = Interpolation[mids, InterpolationOrder->0] Plotted, f looks like this: That sort of technically satisfies the constraints but it needs smoothing. We can increase the interpolation order but now it violates the implicit constraints (always less than one, and monotone): How can I get a curve that looks as much like the first one above as possible? Note that NonLinearModelFit with a LogNormalDistribution will do the trick in this example but is insufficiently general as sometimes there will sometimes not exist a log-normal distribution satisfying the constraints.

    Read the article

  • Custom Model Validator for MVC

    - by scottrakes
    I am trying to add a custom model validation at the property level but need to pass in two values. Below is my class definition and validation implementation. When it runs, the "value" in the IsValid method is always null. I can get this working at the class level but the property level is causing me issues. What am I missing? Event Class: public class Event { public int? EventID {get;set;} [ValidPURL("EventID", "PURLValue")] public string PURLValue { get; set; } ... } Validation Class [AttributeUsage(AttributeTargets.All, AllowMultiple = true, Inherited = true)] public sealed class ValidPURL : ValidationAttribute { private const string _defaultErrorMessage = "Web address already exist."; private readonly object _typeId = new object(); public ValidPURL(int eventID, string purlValue) : base(_defaultErrorMessage) { EventID = eventID; PURLValue = purlValue; } public int EventID { get; private set; } public string PURLValue { get; private set; } public override object TypeId { get { return _typeId; } } public override string FormatErrorMessage(string name) { return String.Format(CultureInfo.CurrentUICulture, ErrorMessageString, EventID, PURLValue); } public override bool IsValid(object value) { PropertyDescriptorCollection properties = TypeDescriptor.GetProperties(value); object eventIDValue = properties.Find(EventID, true /* ignoreCase */).GetValue(value); object purlValue = properties.Find(PURLValue, true /* ignoreCase */).GetValue(value); [Some Validation Logic against the database] return true; } } Thank for the help!
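
    One thing worth noting: for a property-level attribute, the value passed to IsValid is just the property's value (the PURL string), so TypeDescriptor.GetProperties(value) is inspecting a string and the Find calls return nothing. If the .NET 4 DataAnnotations are available, the IsValid overload that takes a ValidationContext exposes the containing model; a hedged sketch with the database check stubbed out:

      // Sketch: use ValidationContext so the whole Event is reachable from a
      // property-level attribute (System.ComponentModel.DataAnnotations 4.0).
      [AttributeUsage(AttributeTargets.Property)]
      public sealed class ValidPURLAttribute : ValidationAttribute
      {
          protected override ValidationResult IsValid(object value, ValidationContext validationContext)
          {
              var model = (Event)validationContext.ObjectInstance;   // the containing Event
              string purl = value as string;                          // the property being validated

              if (string.IsNullOrEmpty(purl))
                  return ValidationResult.Success;

              bool taken = PurlExistsInDatabase(model.EventID, purl); // your lookup goes here
              return taken
                  ? new ValidationResult("Web address already exists.")
                  : ValidationResult.Success;
          }

          private static bool PurlExistsInDatabase(int? eventId, string purl)
          {
              // placeholder for the database check from the original post
              return false;
          }
      }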

    Read the article

  • MSI Installer start auto-repair when service starts

    - by Josh Clark
    I have a WiX based MSI that installs a service and some shortcuts (and lots of other files that don't). The shortcut is created as described in the WiX docs with a registry key under HKCU as the key file. This is an all users install, but to get past ICE38, this registry key has to be under the current user. When the service starts (it runs under the SYSTEM account) it notices that that registry key isn't valid (at least of that user) and runs the install again to "repair". In the Event Log I get MsiInstaller Events 1001 and 1004 showing that "The resource 'HKEY_CURRENT_USER\SOFTWARE\MyInstaller\Foo' does not exist." This isn't surprising since the SYSTEM user wouldn't have this key. I turned on system wide MSI logging and the auto-repair created its log file in the C:\Windows\Temp folder rather than a specific user's TEMP folder which seems to imply the current user was SYSTEM (plus the log file shows the "Calling process" to be my service). Is there something I can do to disable the auto-repair functionality? Am I doing something wrong or breaking some MSI rule? Any hints on where to look next?

    Read the article

  • how do I get eclipse to use a different compiler version for Java?

    - by codeman73
    It seems like this should be a simple task, with the options in the Preferences menu for different JREs and the ability to set different compiler and build paths per project. However, it also seems to simply not work. For example, I have my JAVA_HOME set to a jre for Java 1.6. It's still not clear to me how Eclipse uses this, but it appears to be defaulting to this and not taking the project overrides. I have also installed Java 1.5, and added a JRE for this in eclipse in the Java-Installed JREs section. In my project, I've set the compiler compliance level to 1.5. In the build path for the project, I've added the System Library for the Java 1.5 JRE. However, I'm getting compile errors for a class that implements PreparedStatement for not implementing abstract methods that only exist in Java 1.6 PreparedStatement. Specifically, the methods setAsciiStream(int, InputStream, long) and setAsciiStream(int, InputStream) Strangely enough, it worked when we were compiling it against Java 1.4, which it was originally written for. We added the JREs for Java 1.4 and referenced that system library in the project, and set the project's compiler level to 1.4, and it works fine. But when I do the same changes to try to point to Java 1.5, it instead uses 1.6. Any ideas why?

    Read the article

  • SSIS - Bulk Update at Database Field Level

    - by Adam
    Hello, Here's our mission: Receive files from clients. Each file contains anywhere from 1 to 1,000,000 records. Records are loaded to a staging area and business-rule validation is applied. Valid records are then pumped into an OLTP database in a batch fashion, with the following rules: If record does not exist (we have a key, so this isn't an issue), create it. If record exists, optionally update each database field. The decision is made based on one of 3 factors...I don't believe it's important what those factors are. Our main problem is finding an efficient method of optionally updating the data at a field level. This is applicable across ~12 different database tables, with anywhere from 10 to 150 fields in each table (original DB design leaves much to be desired, but it is what it is). Our first attempt has been to introduce a table that mirrors the staging environment (1 field in staging for each system field) and contains a masking flag. The value of the masking flag represents the 3 factors. We've then put an UPDATE similar to... UPDATE OLTPTable1 SET Field1 = CASE WHEN Mask.Field1 = 0 THEN Staging.Field1 WHEN Mask.Field1 = 1 THEN COALESCE( Staging.Field1 , OLTPTable1.Field1 ) WHEN Mask.Field1 = 2 THEN COALESCE( OLTPTable1.Field1 , Staging.Field1 ) ... As you can imagine, the performance is rather horrendous. Has anyone tackled a similar requirement? We're a MS shop using a Windows Service to launch SSIS packages that handle the data processing. Unfortunately, we're pretty much novices at this stuff.
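
    If the target is SQL Server 2008 or later, one set-based option is a single MERGE keyed on the business key, with the mask applied only in the UPDATE branch; a hedged sketch with illustrative table and column names (only Field1 shown):

      -- Sketch: staging and mask tables joined once, insert-vs-update decided per row.
      MERGE dbo.OLTPTable1 AS tgt
      USING (SELECT s.KeyId, s.Field1, m.Field1 AS Mask1
             FROM   dbo.Staging s
             JOIN   dbo.StagingMask m ON m.KeyId = s.KeyId) AS src
            ON tgt.KeyId = src.KeyId
      WHEN MATCHED THEN
          UPDATE SET Field1 = CASE src.Mask1
                                  WHEN 0 THEN src.Field1
                                  WHEN 1 THEN COALESCE(src.Field1, tgt.Field1)
                                  WHEN 2 THEN COALESCE(tgt.Field1, src.Field1)
                              END
      WHEN NOT MATCHED BY TARGET THEN
          INSERT (KeyId, Field1) VALUES (src.KeyId, src.Field1);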

    Read the article

  • Enforce strong type checking in C (type strictness for typedefs)

    - by quinmars
    Is there a way to enforce explicit cast for typedefs of the same type? I've to deal with utf8 and sometimes I get confused with the indices for the character count and the byte count. So it be nice to have some typedefs: typedef unsigned int char_idx_t; typedef unsigned int byte_idx_t; With the addition that you need an explicit cast between them: char_idx_t a = 0; byte_idx_t b; b = a; // compile warning b = (byte_idx_t) a; // ok I know that such a feature doesn't exist in C, but maybe you know a trick or a compiler extension (preferable gcc) that does that. EDIT: I still don't really like the Hungarian notation in general, I couldn't used it for this problem because of project coding conventions, but I used it now in another similar case, where also the types are the same and the meanings are very similar. And I have to admit: it helps. I never would go and declare every integer with a starting "i", but as in Joel's example for overlapping types, it can be life saving.
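
    The usual workaround is to wrap each index kind in a single-member struct, which costs nothing at runtime but makes accidental mixing a compile error; a small sketch:

      /* Wrapping each index kind in a one-member struct makes accidental
         mixing a compile error, while staying the size of an unsigned int. */
      #include <stdio.h>

      typedef struct { unsigned int v; } char_idx_t;
      typedef struct { unsigned int v; } byte_idx_t;

      /* helper constructors keep call sites readable */
      static char_idx_t char_idx(unsigned int v) { char_idx_t i = { v }; return i; }
      static byte_idx_t byte_idx(unsigned int v) { byte_idx_t i = { v }; return i; }

      int main(void)
      {
          char_idx_t a = char_idx(0);
          byte_idx_t b;

          /* b = a;              error: incompatible types            */
          b = byte_idx(a.v);     /* the "cast" is now always explicit */

          printf("%u\n", b.v);
          return 0;
      }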

    Read the article

  • Database for Python Twisted

    - by Will
    There's an API for Twisted apps to talk to a database in a scalable way: twisted.enterprise.dbapi The confusing thing is, which database to pick? The database will have a Twisted app that is mostly making inserts and updates and relatively few selects, and then other strictly-read-only clients that are accessing the database directly, making selects. (The read-only users are not necessarily selecting the data that the Twisted app is inserting; it's not as though the database is being used as a message queue.) My understanding - which I'd like corrected/advised - is that: Postgres is a great DB, but all the Python bindings (and there is a confusing maze of them) are abandonware. There is psycopg2, but that makes a lot of noise about doing its own connection pooling and things; does this coexist gracefully/usefully/transparently with the Twisted async database connection pooling and such? SQLite is a great database for little things, but if used in a multi-user way it does whole-database locking, so performance would suck in the usage pattern I envisage. MySQL - after the Oracle takeover, who'd want to adopt it now or adopt a fork? Is there anything else out there?
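
    For what it's worth, Twisted's pooling lives in twisted.enterprise.adbapi and simply runs a blocking DB-API 2 module (psycopg2 included) in a thread pool, so the two layers can coexist; a minimal sketch with made-up connection details:

      from twisted.enterprise import adbapi
      from twisted.internet import reactor

      # Twisted's pool: it runs the blocking DB-API module in threads for you.
      dbpool = adbapi.ConnectionPool(
          "psycopg2",                      # any DB-API 2 module name
          host="localhost", database="mydb",
          user="me", password="secret",    # made-up credentials
          cp_min=3, cp_max=10,             # pool size bounds
      )

      def insert_reading(sensor_id, value):
          # runOperation returns a Deferred that fires when the INSERT commits
          return dbpool.runOperation(
              "INSERT INTO readings (sensor_id, value) VALUES (%s, %s)",
              (sensor_id, value),
          )

      def main():
          d = insert_reading(1, 42.0)
          d.addErrback(lambda f: f.printTraceback())
          d.addBoth(lambda _: reactor.stop())

      reactor.callWhenRunning(main)
      reactor.run()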

    Read the article

  • Loader.php trying to load Doctrine classes, but we use Propel!

    - by kewpiedoll99
    We are finding cases where we get the following 500 error: File xyz.php does not exist or class "xyz" was not found in the file at () in SF_ROOT_DIR/lib/vendor/Zend/Loader.php line 107 ... where xyz == Memcache (when trying to use symfony cc on the command line) or sfDoctrineAdminGenerator (when using an old-ish AdminGenerator-generated CMS page). We use Propel, but Loader.php is trying to load classes used only for Doctrine. Currently I am using a filthy hack where I request Loader.php to check if the file is either of these two cases, and if so simply return rather than trying to load it. Obviously, this is unacceptable longer term. Has anybody encountered this, and how did you solve it? Edited to add: We have: class ProjectConfiguration extends sfProjectConfiguration { public function setup() { // for compatibility / remove and enable only the plugins you want $this->enableAllPluginsExcept(array('sfDoctrinePlugin')); } } And we have a propel.ini file in our top level config directory. This has only started in the past four weeks or so, and we've had a stable build for over a year now. I'm pretty sure Doctrine is totally disabled.

    Read the article

  • Time delay an external RSS feed

    - by x3ja
    I subscribe to a number of RSS feeds, mostly from within my own timezone (UK: currently GMT+1, a.k.a BST). However I'm also interested in news from New Zealand (currently GMT+12). My problem is caused by my addiction to needing to keep my unread count at, or near, zero. When I load up my RSS reader in the mornings it has gathered all the NZ news at once (normally around 100 items) and I feel compelled either to read them all or to mark them all as read to feed my need for zero-unread-count. I figured a good solution to this would be to time delay the RSS feed somehow, so I would be drip-fed the stories at their time +12 hours, so I could read them through the day as they come in. So my question (or, rather, questions): Does such a thing exist currently & what is it? (no point reworking the wheel) If not: What would be the best way to approach doing this myself? I have access to a Linux web server on which I can run scripts, create databases, store files etc, so there should be a way... I'm most conversant in perl and have done a little fiddling with XML within that, so would naturally process ... or is there some simpler way to do it that I'm missing?

    Read the article

  • .NET client getting "not well formed" XML response from Axis web service

    - by Tex
    I have a simple .NET app that makes a SOAP call to a third party Axis web service. When I trace the HTTP traffic, I see that the Request looks fine, however I'm getting an exception: "Response is not well-formed XML." The return object is null, as it seems the XML can't be deserialized. One question regarding the various namespace declarations inside the wsdl. Several of these declarations point to URLs / domains that no longer exist. Could this cause any problems? From the wsdl document: <wsdl:definitions targetNamespace="http://domaindoesntexist.com/" xmlns:apachesoap="http://xml.apache.org/xml-soap" xmlns:impl="http://domaindoesntexist.com/" xmlns:intf="http://domaindoesntexist.com/" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns:wsdlsoap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> A sample HTTP response with incriminating data removed: HTTP/1.1 200 OK Server: Apache-Coyote/1.1 Content-Type: text/xml;charset=utf-8 Transfer-Encoding: chunked Date: Fri, 05 Jun 2009 13:54:59 GMT 7cb <?xml version="1.0" encoding="utf-8"?> <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <soapenv:Body> <someMethod xmlns="http://test.com/services/myservice/"> </someMethod> </soapenv:Body> </soapenv:Envelope> 0

    Read the article

  • Reading text files line by line, with exact offset/position reporting

    - by Benjamin Podszun
    Hi. My simple requirement: reading a huge (a million line) text file (for this example assume it's a CSV of sorts) and keeping a reference to the beginning of each line for faster lookup in the future (read a line starting at X). I tried the naive and easy way first, using a StreamReader and accessing the underlying BaseStream.Position. Unfortunately that doesn't work as I intended. Given a file containing the following: Foo Bar Baz Bla Fasel and this very simple code: using (var sr = new StreamReader(@"C:\Temp\LineTest.txt")) { string line; long pos = sr.BaseStream.Position; while ((line = sr.ReadLine()) != null) { Console.Write("{0:d3} ", pos); Console.WriteLine(line); pos = sr.BaseStream.Position; } } the output is: 000 Foo 025 Bar 025 Baz 025 Bla 025 Fasel I can imagine that the stream is trying to be helpful/efficient and probably reads in (big) chunks whenever new data is necessary. For me this is bad. The question, finally: is there any way to get the (byte, char) offset while reading a file line by line without using a basic Stream and messing with \r \n \r\n and string encoding etc. manually? Not a big deal, really, I just don't like to build things that might exist already.
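
    A hedged sketch of the manual route (assuming UTF-8 or ASCII content, no BOM handling, and with buffering left out for clarity): read bytes straight off a FileStream so each line's starting byte offset is exact, and decode per line. A '\n' byte cannot appear inside a multi-byte UTF-8 sequence, so splitting on it is safe.

      using System;
      using System.Collections.Generic;
      using System.IO;
      using System.Text;

      static class LineOffsets
      {
          // Yields (byte offset of line start, decoded line) pairs.
          public static IEnumerable<Tuple<long, string>> Read(string path)
          {
              using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read))
              {
                  var bytes = new List<byte>();
                  long lineStart = 0;
                  int b;
                  while ((b = fs.ReadByte()) != -1)
                  {
                      if (b == '\n')
                      {
                          // tolerate both "\n" and "\r\n" line endings
                          if (bytes.Count > 0 && bytes[bytes.Count - 1] == (byte)'\r')
                              bytes.RemoveAt(bytes.Count - 1);
                          yield return Tuple.Create(lineStart, Encoding.UTF8.GetString(bytes.ToArray()));
                          bytes.Clear();
                          lineStart = fs.Position;
                      }
                      else
                      {
                          bytes.Add((byte)b);
                      }
                  }
                  if (bytes.Count > 0)   // last line without a trailing newline
                      yield return Tuple.Create(lineStart, Encoding.UTF8.GetString(bytes.ToArray()));
              }
          }
      }

    The offset recorded for each line is exactly what can later be passed to Seek to jump back to it.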

    Read the article
