Search Results

Search found 21352 results on 855 pages for 'bit shift'.

Page 684/855

  • When and how should independent hierarchies be used in clojure?

    - by Rob Lachlan
    Clojure's system for creating an ad hoc hierarchy of keywords is familiar to most people who have spent a bit of time with the language. For example, most demos and presentations of the language include examples such as (derive ::child ::parent) and they go on to show how this can be used for multi-method dispatch. In all of the slides and presentations that I've seen, they use the global hierarchy. But it is possible to put keyword relationships in independent hierarchies, by using (derive h ::child ::parent), where h is created by (make-hierarchy). Some questions, therefore: Are there any guidelines on when this is useful or necessary? Are there any functions for manipulating hierarchies? Merging is particularly useful, so I do this: (defn merge-h [& hierarchies] (apply merge-with (cons #(merge-with clojure.set/union %1 %2) hierarchies))) But I was wondering if such functions already exist somewhere. EDIT: Changed "custom" hierarchy to "independent" hierarchy, since that term better describes this animal. Also, I've done some research and included my own answer below. Further comments are welcome.

    Read the article

  • Not all TFS Build type files are getting copied

    - by k4k4sh1
    Because I have several builds sharing some assemblies containing common build tasks, I have one TFSBuild.proj for all builds and import different targets depending on the build, like the following: <Project DefaultTargets="DesktopBuild" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5"> <Import Project="Build_1.targets" Condition="'$(BuildDefinition)'=='Build_1'" /> <Import Project="Build_2.targets" Condition="'$(BuildDefinition)'=='Build_2'" /> <Import Project="Build_3.targets" Condition="'$(BuildDefinition)'=='Build_3'" /> </Project> Each target for a particular build has your usual content for a build type file, but in my case, I also reference some tasks inside assemblies checked into the same folder as TFSBuild.proj in source control. I wanted to add folders to contain some test build targets, since my folder was getting a bit full and cluttered. The following illustrates what I mean. $(TFS project)\build\ TFSBuild.proj Build_1.targets ... Assembly1.dll Assembly2.dll ... Folder\ Test_target_1.targets .... When I started my build, however, I found that Test_target_1.targets and other files in Folder were not being copied to the build directory, while TFSBuild.proj and other files in the root level, as it were, of the build type folder were being copied. This left my test build unable to reference files inside Folder, causing it to fail immediately. I realize the simplest work-around would be to get rid of Folder and move all of its contents up to the build folder, but I would really like to have Folder if at all possible. Thanks for your help in advance.

    Read the article

  • Is there any way to access files in your source tree in Android?

    - by Chris Thompson
    Hi all, This is a bit unorthodox but I'm trying to figure out if there's a way to access files stored in the src tree of my application's apk in Android. I'm trying to use i-Jetty (Jetty implementation for Android) and rather than use it as a separate application and manually download my war file, I'd like to just bake i-jetty in. However, in order to use (easily) standard html/jsp I need to be able to give it a document root, preferably within my application's apk file. I know Android specifically works to prevent you from accessing (freely) the stuff on the actual system so this may not be possible, but I'm thinking it might be possible to access something within the apk. One option to work around this would be to have all of the files stored in the res directory and then copy them to the sdcard on startup but this wouldn't allow me to automatically remove the files on uninstall. To give you an idea of what I've tried, currently, the html files are stored in org.webtext.android Context rootContext = new Context(server_, "/", Context.SESSIONS); rootContext.setResourceBase("org/webtext/webapp"); Returns a 404 error. final URL url = this.getClassLoader().getResource("org/webtext/webapp"); Context html = new WebAppContext(url.toExternalForm(), "/"); Blows up with a NullPointerException because no URL is returned from the getResource call. Any thoughts would be greatly appreciated! Thanks, Chris
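
    One commonly used workaround, shown as a minimal sketch below (not i-Jetty-specific; the assets/webapp path and the AssetUnpacker class name are just illustrative): keep the web content under assets/, copy it into internal storage on first run, and point setResourceBase() at that directory. Unlike files copied to the sdcard, files under getFilesDir() live in the app's private data directory and are removed when the app is uninstalled.

      // Hypothetical helper: copies assets/<dir> into internal storage and returns the
      // resulting directory, which can then serve as a document root. This version only
      // handles a flat directory; recurse into subdirectories as needed.
      import android.content.Context;
      import android.content.res.AssetManager;
      import java.io.*;

      public class AssetUnpacker {
          public static File unpack(Context ctx, String dir) throws IOException {
              File docRoot = new File(ctx.getFilesDir(), dir);   // e.g. /data/data/<pkg>/files/webapp
              docRoot.mkdirs();
              AssetManager assets = ctx.getAssets();
              for (String name : assets.list(dir)) {
                  InputStream in = assets.open(dir + "/" + name);
                  OutputStream out = new FileOutputStream(new File(docRoot, name));
                  byte[] buf = new byte[8192];
                  int n;
                  while ((n = in.read(buf)) != -1) {
                      out.write(buf, 0, n);
                  }
                  out.close();
                  in.close();
              }
              return docRoot;   // docRoot.getAbsolutePath() can be passed to setResourceBase()
          }
      }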

    Read the article

  • How do I connect to a Java command-line tool with the YourKit Java Profiler?

    - by Daryl Spitzer
    I've built a command-line tool in Java, which I would now like to profile with YourKit. I launch the command-line tool with something like: $ java -classpath .:foo.bar.jar com.foobar.tools.TheTool arg1 arg2 arg3 It runs to completion in less than 2 seconds. After reading http://www.yourkit.com/docs/80/help/agent.jsp, I tried the following: $ java -agentpath:/home/dspitzer/yjp-8.0.24/bin/linux-x86-32/libyjpagent.so -classpath .:foo.bar.jar com.foobar.tools.TheTool arg1 arg2 arg3 ...and I get: [YourKit Java Profiler 8.0.24] JVMTI version 3001016d; 14.3-b01; Sun Microsystems Inc.; mixed mode, sharing; Linux; 32-bit JVM [YourKit Java Profiler 8.0.24] Profiler agent is listening on port 10001... [YourKit Java Profiler 8.0.24] *** HINT ***: To get profiling results, connect to the application from the profiler UI ... But I guess YourKit is designed to connect only to a running application. How should I modify my command-line tool to allow connection from YourKit? I could add a command-line option that will have it pause for input, and I won't press return for it to continue until I've connected to it from YourKit. Is there a YourKit API that I could add to my tool that would cause it to block until I've connected with YourKit? Is there a YourKit API or a java command-line option that would create a profiling "snapshot" that I could load and analyze later (after the command-line tool has completed) with YourKit?
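
    The pause-for-input idea from the question needs no YourKit-specific API at all; it can be gated behind a flag so normal runs are unaffected. A minimal sketch (the profiler.pause property name is made up for illustration):

      // Call ProfilerGate.awaitIfRequested() first thing in main(). When the tool is run
      // with -Dprofiler.pause=true it blocks on stdin until Enter is pressed, which gives
      // the profiler UI time to attach to the listening agent before the real work starts.
      public final class ProfilerGate {
          private ProfilerGate() {}

          public static void awaitIfRequested() throws java.io.IOException {
              if (Boolean.getBoolean("profiler.pause")) {
                  System.out.println("Attach the profiler, then press Enter to continue...");
                  System.in.read();   // blocks until input arrives
              }
          }
      }

    For the snapshot question, the agent page linked in the question also documents startup options that can be appended after the agent path; if one of them writes a snapshot automatically on exit, that would cover the no-interaction case without any code changes.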

    Read the article

  • Good resources for building web-app in Tapestry

    - by Rich
    Hi, I'm currently researching Tapestry for my company and trying to decide if I think we can port our pre-existing proprietary web applications to something better. Currently we are running Tomcat and using JSP for our front end backed by our own framework that eventually uses JDBC to connect to an Oracle database. I've gone through the Tapestry tutorial, which was really neat and got me interested, but now I'm faced with what seems to be a common issue of documentation. There are a lot of things I'd need to be sure that I could accomplish with Tapestry before I'd be ready to commit fully to it. Does anyone have any good resources, be it a book or web article or anything else, that go into more detail beyond what the Tapestry tutorial explains? I am also considering integrating with Hibernate, and have read a little bit about Spring too. I'm still having a hard time understanding how Spring would be more useful than cumbersome in tandem with Tapestry, as they seem to have a lot of overlapping features. An example I read seemed to use Spring to interface with Hibernate, and then Tapestry to Spring, but I was under the impression Tapestry integrates to the same degree with Hibernate. The resource I'm speaking of is http://wiki.apache.org/tapestry/Tapstry5First_project_with_Tapestry5,_Spring_and_Hibernate . I was interested because I hadn't found information anywhere else on how to maintain user levels and sessions through a Tapestry application before, but wasn't exactly impressed by the need to use Spring in the example.

    Read the article

  • Convert 4 bytes to int

    - by Oscar Reyes
    I'm reading a binary file like this: InputStream in = new FileInputStream( file ); byte[] buffer = new byte[1024]; while( in.read( buffer ) > -1 ) { int a = // ??? } What I want to do is to read up to 4 bytes and create an int value from those, but I don't know how to do it. I kind of feel like I have to grab 4 bytes at a time, and perform one "byte" operation ( like << & FF and stuff like that ) to create the new int. What's the idiom for this? EDIT Oops, this turns out to be a bit more complex ( to explain ) What I'm trying to do is, read a file ( may be ascii, binary, it doesn't matter ) and extract the integers it may have. For instance suppose the binary content ( in base 2 ) : 00000000 00000000 00000000 00000001 00000000 00000000 00000000 00000010 The integer representation should be 1 , 2 right? :- / 1 for the first 32 bits, and 2 for the remaining 32 bits. 11111111 11111111 11111111 11111111 Would be -1 and 01111111 11111111 11111111 11111111 Would be Integer.MAX_VALUE ( 2147483647 )
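
    A sketch of the usual shift-and-mask idiom for big-endian data, matching the examples above (00 00 00 01 -> 1, all ones -> -1). The & 0xFF masks matter because byte is signed in Java; without them sign extension would corrupt the upper bits:

      // Combine four bytes starting at offset into one int, most significant byte first.
      static int toInt(byte[] b, int offset) {
          return ((b[offset]     & 0xFF) << 24)
               | ((b[offset + 1] & 0xFF) << 16)
               | ((b[offset + 2] & 0xFF) << 8)
               |  (b[offset + 3] & 0xFF);
      }

    The standard library also covers this: wrapping the stream in a java.io.DataInputStream and calling readInt() reads four bytes big-endian in one call, and java.nio.ByteBuffer.wrap(buffer).getInt() does the same for a byte array already in memory, with a configurable byte order.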

    Read the article

  • What is best strategy to handle exceptions & errors in Rails?

    - by Nick Gorbikoff
    Hello. I was wondering if people would share their best practices / strategies on handling exceptions & errors. Now I'm not asking when to throw an exception ( it has been thoroughly answered here: SO: When to throw an Exception). And I'm not using this for my application flow - but there are legitimate exceptions that happen all the time. For example the most popular one would be ActiveRecordNotFound. What would be the best way to handle it? The DRY way? Right now I'm doing a lot of checking within my controller so if Post.find(5) returns nil - I check for that and show a flash message. However while this is very granular - it's a bit cumbersome in the sense that I need to check for exceptions like that in every controller, while most of them are essentially the same and have to do with record not found or related records not found - such as either Post.find(5) not found or if you are trying to display comments related to a post that doesn't exist, that would throw an exception (something like Post.find(5).comments[0].created_at) I know you can do something like this in ApplicationController and override it later in a particular controller/method to get more granular support, however would that be a proper way to do it? class ApplicationController < ActionController::Base rescue_from ActiveRecord::RecordInvalid do |exception| render :action => (exception.record.new_record? ? :new : :edit) end end

    Read the article

  • DWM and painting unresponsive apps

    - by Doug Kavendek
    In Vista and later, if an app becomes unresponsive, the Desktop Window Manager is able to handle redrawing it when necessary (move a window over it, drag it around, etc.) because it has kept a pixel buffer for it. Windows also tries to detect when an app has become unresponsive after some timeout, and tries to make the best of the situation -- I believe it dims out the window, adds "Not Responding" to its title bar, and perhaps some other effects. Now, we have a skinned app that uses window regions and layered windows, and it doesn't play well with these effects. We've been developing on XP, but have noticed a strange effect when testing on Vista. At some points the app may spend a few moments on some calculation or callback, and if it passes the unresponsive threshold (I've read that it's a five second timeout, but I cannot find a link), a strange graphical problem occurs: any pixels that would be 100% transparent due to the window regions turn black, which effectively makes the window rectangular again, with a black background. There seem to be other anomalies, with the original window's pixels being shifted a bit in some child dialogs. I am working on reducing such delays (ideally Windows will never need to step in like this), and trying to maintain responsiveness while it's busy, but I'd still like to figure out what is causing it to render like that, as I can't guarantee I can eliminate all delays. Basically, I just would like to know what Windows is doing when this happens, and how I can make my app behave properly with it. Skinned apps have to still work on Vista and later, so I need to figure out what I'm doing that's non-standard. I don't even know exactly how to look for information into how Windows now handles unresponsive apps, as my searches only return people having issues with apps that are unresponsive, or very rudimentary explanations of what the DWM does with such apps. Heck I'm not even 100% sure it's the DWM responsible, but it seems likely. Any potential leads? Photo of problem; screen shots won't capture the effect (note that the white dialog's buffer is shifted -- it is shifted exactly by the distance it has been offset from the main (blue) window):

    Read the article

  • Problems using a library in Xcode

    - by Pablo
    Hi! I'm actually developing an application for iPhone and I need to use a library, initially dedicated to a Linux environment. Since I'm using a Mac (with Snow Leopard and Intel Core Duo), I guess it's possible to use this library in my app. My library has 3 files: a .h file, a .a file and a .so file (both .a and .so are in /Developer/usr/lib). In addition I have included the .h in my code and I've added the .a in Xcode as a framework (and it works because Xcode finds the .so when compiling). For your info when I use the command "file" on the .so file, I have: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, not stripped When I compile for the Xcode Simulator, I have a warning and an error. The warning is: In /Developer/usr/lib/mylib.so, file was built for unsupported file format which is not the architecture being linked (i386) The error is: "_mylib_fct", referenced from: -[MyAppAppDelegate applicationDidBecomeActive:] in MyAppAppDelegate.o Symbol(s) not found Collect2: ld returned 1 exit status When I compile for the Device 3.0 with architecture arm6, I also have the same error, but the warning is quite different: ln /Users/Pablo/MyApp/mylib.a file is not of required architecture I have been trying to solve this and make the app work with this lib for days, and I don't understand why the compiler is complaining... is it a 32/64-bit issue? How can I deal with that? Your help would be very much appreciated. Thx!

    Read the article

  • Simple, fast SQL queries for flat files.

    - by plinehan
    Does anyone know of any tools to provide simple, fast queries of flat files using a SQL-like declarative query language? I'd rather not pay the overhead of loading the file into a DB since the input data is typically thrown out almost immediately after the query is run. Consider the data file, "animals.txt": dog 15 cat 20 dog 10 cat 30 dog 5 cat 40 Suppose I want to extract the highest value for each unique animal. I would like to write something like: cat animals.txt | foo "select $1, max(convert($2 using decimal)) group by $1" I can get nearly the same result using sort: cat animals.txt | sort -t " " -k1,1 -k2,2nr And I can always drop into awk from there, but this all feels a bit awkward (couldn't resist) when a SQL-like language would seem to solve the problem so cleanly. I've considered writing a wrapper for SQLite that would automatically create a table based on the input data, and I've looked into using Hive in single-processor mode, but I can't help but feel this problem has been solved before. Am I missing something? Is this functionality already implemented by another standard tool? Halp!

    Read the article

  • Dilemma of Sending a Byte Array from JavaScript to COM.

    - by Voulnet
    Hello there, I'm having a bit of a problem because neither Javascript nor ActiveX (written in C++) is behaving like a good little child. All I'm asking them to do is for Javascript to send a byte array and for the ActiveX to receive the byte array correctly in order to do more computation. This is how I declared my byte array in JS, and I verified it inside JS: var arr = new Array(0x00, 0xA4, 0x04, 0x00, 0x10,0xA0, 0x00, 00, 00, 0x18, 0x30, 0x03, 0x01, 00, 00, 00, 00, 00, 00, 00, 00); Javascript sends this array as an argument to an ActiveX method. Here is the tricky part; I want the ActiveX method to receive the byte array as a SAFEARRAY or VARIANT, but I can't get it to work for the life of me. I tried debugging and seeing the contents received inside the ActiveX as both SAFEARRAY and VARIANT, but to no avail. Here is the IDL segment: [id(7), helpstring("NEW Sends APDU to Card and Reads the Response")] HRESULT NewSendAPDU( [in] VARIANT pAPDU, [out, retval]VARIANT* pAPDUResponse); Any help is greatly appreciated. Thanks in advance!

    Read the article

  • Asynchronous background processes in Python?

    - by Geuis
    I have been using this as a reference, but am not able to accomplish exactly what I need: http://stackoverflow.com/questions/89228/how-to-call-external-command-in-python/92395#92395 I was also reading this: http://www.python.org/dev/peps/pep-3145/ For our project, we have 5 svn checkouts that need to update before we can deploy our application. In my dev environment, where speedy deployments are a bit more important for productivity than a production deployment, I have been working on speeding up the process. I have a bash script that has been working decently but has some limitations. I fire up multiple 'svn updates' with the following bash command: (svn update /repo1) & (svn update /repo2) & (svn update /repo3) & These all run in parallel and it works pretty well. I also use this pattern in the rest of the build script for firing off each ant build, then moving the wars to Tomcat. However, I have no control over stopping deployment if one of the updates or a build fails. I'm re-writing my bash script with Python so I have more control over branches and the deployment process. I am using subprocess.call() to fire off the 'svn update /repo' commands, but each one is acting sequentially. I try '(svn update /repo) &' and those all fire off, but the result code returns immediately. So I have no way to determine if a particular command fails or not in the asynchronous mode. import subprocess subprocess.call( 'svn update /repo1', shell=True ) subprocess.call( 'svn update /repo2', shell=True ) subprocess.call( 'svn update /repo3', shell=True ) I'd love to find a way to have Python fire off each Unix command, and stop the entire script if any of the calls fails at any time.

    Read the article

  • Java repaint is slow under certain conditions.

    - by Gabriel A. Zorrilla
    I'm doing a simple grid in which each square is highlighted by the cursor: It consists of a couple of JPanels, mapgrid and overlay, inside a JLayeredPane, with mapgrid on the bottom. Mapgrid just draws the grid on initialization; its paint method is: public void paintComponent(Graphics g) { super.paintComponent(g); Graphics2D g2d = (Graphics2D) g; g2d.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON); for (int i = 0; i < h; i++) { for (int j = 0; j < w; j++) { g2d.setColor(new Color(128, 128, 128, 255)); g2d.drawRect(tileSize * j, i * tileSize, tileSize, tileSize); } } } The overlay JPanel is where the highlighting occurs; this is what is repainted when the mouse is moved: public void paintComponent(Graphics g) { super.paintComponent(g); Graphics2D g2d = (Graphics2D) g; g2d.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON); g2d.setColor(new Color(255, 255, 128, 255)); g2d.drawRect((pointerX/tileSize)*tileSize,(pointerY/tileSize)*tileSize, tileSize, tileSize); } I noticed that even though the base layer (mapgrid) is NOT repainted when the mouse moves, just the transparent overlay layer, the performance is lacking. If I give the overlay JPanel a background, it's way faster. If I remove the mapgrid antialiasing, it's a bit faster too. I don't know why giving a background to the overlay layer (and thus, hiding the mapgrid) or disabling antialiasing in the mapgrid leads to much better performance. Is there a better way to do this? Why does this happen?
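
    One thing worth trying (a minimal sketch, assuming the w, h and tileSize fields from the question): render the static antialiased grid once into an offscreen BufferedImage and just blit it on later paints, so mouse movement never re-rasterizes the grid.

      // Drop-in replacement for mapgrid's paint method.
      // Needs: import java.awt.*; import java.awt.image.BufferedImage;
      private BufferedImage gridCache;

      @Override
      public void paintComponent(Graphics g) {
          super.paintComponent(g);
          if (gridCache == null) {
              // Build the grid image once; +1 so the outermost lines are not clipped.
              gridCache = new BufferedImage(w * tileSize + 1, h * tileSize + 1,
                                            BufferedImage.TYPE_INT_ARGB);
              Graphics2D cg = gridCache.createGraphics();
              cg.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                                  RenderingHints.VALUE_ANTIALIAS_ON);
              cg.setColor(new Color(128, 128, 128, 255));
              for (int i = 0; i < h; i++) {
                  for (int j = 0; j < w; j++) {
                      cg.drawRect(tileSize * j, i * tileSize, tileSize, tileSize);
                  }
              }
              cg.dispose();
          }
          g.drawImage(gridCache, 0, 0, null);   // cheap blit instead of per-tile drawing
      }

    In the overlay's mouse handler, calling repaint(oldX, oldY, tileSize + 1, tileSize + 1) and repaint(newX, newY, tileSize + 1, tileSize + 1) instead of a bare repaint() also keeps the dirty region down to the two affected tiles rather than the whole layer.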

    Read the article

  • x86_64 Assembly Command Line Arguments

    - by Brandon oubiub
    I'm new to assembly, and I just got familiar with the call stack, so bear with me. To get the command line arguments in x86_64 on Mac OS X, I can do the following: _main: sub rsp, 8 ; 16-byte stack alignment mov rax, 0 mov rdi, format mov rsi, [rsp + 32] call _printf Where format is "%s". rsi gets set to argv[0]. So, from this, I drew out what (I think) the stack looks like initially: top of stack <- rsp after alignment return address <- rsp at beginning (aligned rsp + 8) [something] <- rsp + 16 argc <- rsp + 24 argv[0] <- rsp + 32 argv[1] <- rsp + 40 ... ... bottom of stack And so on. Sorry if that's hard to read. I'm wondering what [something] is. After a few tests, I find that it is usually just 0. However, occasionally, it is some (seemingly) random number. EDIT: Also, could you tell me if the rest of my stack drawing is correct?

    Read the article

  • C# CreateElement method - how to add a child element with xmlns=""

    - by NealWalters
    How can I get the following code to add the element with "xmlns=''"? using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml; namespace ConsoleApplication1 { class Program { static void Main(string[] args) { string strXML = "<myroot>" + " <group3 xmlns='myGroup3SerializerStyle'>" + " <firstname xmlns=''>Neal3</firstname>" + " </group3>" + "</myroot>"; XmlDocument xmlDoc = new XmlDocument(); xmlDoc.LoadXml(strXML); XmlElement elem = xmlDoc.CreateElement(null, "lastname", null); elem.InnerText = "New-Value"; string strXPath = "/myroot/*[local-name()='group3' and namespace-uri()='myGroup3SerializerStyle']/firstname"; XmlNode insertPoint = xmlDoc.SelectSingleNode(strXPath); insertPoint.AppendChild(elem); string resultOuter = xmlDoc.OuterXml; Console.WriteLine("\n resultOuter=" + resultOuter); Console.ReadLine(); } } } My current output: resultOuter=<myroot><group3 xmlns="myGroup3SerializerStyle"><firstname xmlns="" >Neal3<lastname>New-Value</lastname></firstname></group3></myroot> The desired output: resultOuter=<myroot><group3 xmlns="myGroup3SerializerStyle"><firstname xmlns="" >Neal3<lastname xmlns="">New-Value</lastname></firstname></group3></myroot> For background, see related posts: http://www.stylusstudio.com/ssdn/default.asp?fid=23 (today) http://stackoverflow.com/questions/2410620/net-xmlserializer-to-element-formdefaultunqualified-xml (March 9, thought I fixed it, but bit me again today!)

    Read the article

  • delayed evaluation of code in subroutines - 5.8 vs. 5.10 and 5.12

    - by Brock
    This bit of code behaves differently under perl 5.8 than it does under perl 5.12: my $badcode = sub { 1 / 0 }; print "Made it past the bad code.\n"; [brock@chase tmp]$ /usr/bin/perl -v This is perl, v5.8.8 built for i486-linux-gnu-thread-multi [brock@chase tmp]$ /usr/bin/perl badcode.pl Illegal division by zero at badcode.pl line 1. [brock@chase tmp]$ /usr/local/bin/perl -v This is perl 5, version 12, subversion 0 (v5.12.0) built for i686-linux [brock@chase tmp]$ /usr/local/bin/perl badcode.pl Made it past the bad code. Under perl 5.10.1, it behaves as it does under 5.12: brock@laptop:/var/tmp$ perl -v This is perl, v5.10.1 (*) built for i486-linux-gnu-thread-multi brock@laptop:/var/tmp$ perl badcode.pl Made it past the bad code. I get the same results with a named subroutine, e.g. sub badcode { 1 / 0 } I don't see anything about this in the perl5100delta pod. Is this an undocumented change? A unintended side effect of some other change? (For the record, I think 5.10 and 5.12 are doing the Right Thing.)

    Read the article

  • Subversion - Do I need to reintegrate if I don't merge from trunk

    - by user314584
    Hi, I have read quite a bit about the need to re-integrate when you merge from a branch back to the trunk in SVN (this article was really helpful: http://blogs.open.collab.net/svn/2008/07/subversion-merg.html). The problem seems to come from the fact that people are regularly updating the branch from the trunk, which means that the final merge back is reflective. In my use-case, we want to create a release branch which will live for as long as it takes to stabilise the branch and fix any bugs. To maintain stability we don't want to merge up from the trunk but we do want to regularly merge fixes down from the release branch so that trunk gets all the bug fixes for free. We also don't want to wait until the end of QA to merge back to trunk. We therefore want to: 1.) Create the branch 2.) Make regular changes to the branch (and trunk) 3.) Merge back to trunk regularly (daily perhaps) Since we will never merge up from trunk I don't think that we need to worry about the problems that reintegrating is designed to fix. Can anyone see a problem with this approach? Cheers, Matt

    Read the article

  • JSON.Net and Linq

    - by user1745143
    I'm a bit of a newbie when it comes to linq and I'm working on a site that parses a json feed using json.net. The problem that I'm having is that I need to be able to pull multiple fields from the json feed and use them for a foreach block. The documentation for json.net only shows how to pull one field. I've done a few variations after checking out the linq documentation, but I've not found anything that works. Here's what I've got so far: WebResponse objResponse; WebRequest objRequest = HttpWebRequest.Create(url); objResponse = objRequest.GetResponse(); using (StreamReader reader = new StreamReader(objResponse.GetResponseStream())) { string json = reader.ReadToEnd(); JObject rss = JObject.Parse(json); var postTitles = from p in rss["feedArray"].Children() select (string)p["item"], //These are the fields I need to also query //(string)p["title"], (string)p["message"]; //I've also tried this with console.write and labeling the field indices for each pulled field foreach (var item in postTitles) { lbl_slides.Text += "<div class='slide'><div class='slide_inner'><div class='slide_box'><div class='slide_content'></div><!-- slide content --></div><!-- slide box --></div><div class='rotator_photo'><img src='" + item + "' alt='' /></div><!-- rotator photo --></div><!-- slide -->"; } } Has anyone seen how to pull multiple fields from a json feed and use them as part of a foreach block (or something similar)?

    Read the article

  • Mode specific key bindings

    - by rejeep
    Hey, I have a minor mode that also comes with a global mode. The mode has some key bindings and I want the user to have the possibility to specify which bindings should work for each mode. (my-minor-mode-bindings-for-mode 'some-mode '(key1 key2 ...)) (my-minor-mode-bindings-for-mode 'some-other-mode '(key3 key4 ...)) So I need some kind of mode/buffer-local key map. Buffer local is a bit problematic since the user can change the major mode. I have tried some solutions, none of which works very well. Bind all possible keys always and when the user types the key, check if the key should be active in that mode. Execute action if true, otherwise fall back. Like the previous case, only no keys are bound. Instead I use a pre command hook and check if the key pressed should do anything. For each buffer update (whatever that means), run a function that first clears the key map and then updates it with the bindings for that particular mode. I have tried these approaches and I found problems with all of them. Do you know of any good way to solve this? Thanks!

    Read the article

  • How to import data to SAP

    - by Mehmet AVSAR
    Hi, As a complete stranger to the world of SAP, I want to transfer my own application's (mobile salesforce automation) data to SAP. My application has records of customers, stocks, inventory, invoices (and waybills), cheques, payments, collections, stock transfer data etc. I have an additional database which holds matchings of records, i.e. a customer with ID 345 in my application has key 120-035-0223 in SAP. Every record, for sure, has to know its counterpart, including parameters. After searching Google and the SAP help site for a day, I discovered that it's going to be a bit more painful than I expected. The SAP site especially does not give even a clue on it; at least I couldn't find one. We transferred our data to some other ERP systems, some of which wanted XML files, while others exposed their APIs. My point is, is SQL Server's SSIS an option for me? I hope it is, so I can fight on my own territory. Since client requests would vary a lot, I count flexibility as the most important criterion. Also, I want to transfer as much data as I can. Any help is appreciated. Regards,

    Read the article

  • Draw LINE_STRIP with Unity

    - by Boozzz
    For a new project I am thinking about whether to use OpenGL or Unity3d. I have a bit of experience with OpenGL, but I am completely new to Unity. I already read through the Unity documentation and tutorials on the Unity website. However, I could not find a way to draw a simple Line-Strip with Unity. In the following example (C#, OpenGL/SharpGL) I draw a round trajectory from a predefined point to an obstacle, which can be imagined as a divided circle with midpoint [cx,cy] and radius r. The position (x-y coordinates) of the obstacle is given by obst_x and obst_y. Question 1: How could I do the same with Unity? Question 2: In my new project, I will have to draw quite a lot of such geometric primitives. Does it make any sense to use Unity for those things? void drawCircle(float cx, float cy, float r, const float obst_x, const float obst_y) { float theta = 0.0f, pos_x, pos_y, dist; const float delta = 0.1; glBegin(GL_LINE_STRIP); while (theta < 180) { theta += delta; //get the current angle float x = r * cosf(theta); //calculate the x component float y = r * sinf(theta); //calculate the y component pos_x = x + cx; //calculate current x position pos_y = y + cy; //calculate current y position //calculate distance from current vertex to obstacle dist = sqrt(pow(pos_x - obst_x, 2) + pow(pos_y - obst_y, 2)); //check if current vertex intersects with obstacle if (dist <= 0) { break; //stop drawing circle } else { glVertex2f(pos_x, pos_y); //draw vertex } } glEnd(); }

    Read the article

  • (C#) Get index of current foreach iteration

    - by Graphain
    Hi, Is there some rare language construct I haven't encountered (like the few I've learned recently, some on Stack Overflow) in C# to get a value representing the current iteration of a foreach loop? For instance, I currently do something like this depending on the circumstances: int i=0; foreach (Object o in collection) { ... i++; } Answers: @bryansh: I am setting the class of an element in a view page based on the position in the list. I guess I could add a method that gets the CSSClass for the Objects I am iterating through but that almost feels like a violation of the interface of that class. @Brad Wilson: I really like that - I've often thought about something like that when using the ternary operator but never really given it enough thought. As a bit of food for thought it would be nice if you could do something similar to somehow add (generically to all IEnumerable objects) a handle on the enumerator to increment the value that an extension method returns i.e. inject a method into the IEnumerable interface that returns an iterationindex. Of course this would be blatant hacks and witchcraft... Cool though... @crucible: Awesome I totally forgot to check the LINQ methods. Hmm appears to be a terrible library implementation though. I don't see why people are downvoting you though. You'd expect the method to either use some sort of HashTable of indices or even another SQL call, not an O(N) iteration... (@Jonathan Holland yes you are right, expecting SQL was wrong) @Joseph Daigle: The difficulty is that I assume the foreach casting/retrieval is optimised more than my own code would be. @Jonathan Holland: Ah, cheers for explaining how it works and ha at firing someone for using it.

    Read the article

  • Wrappers/law of demeter seems to be an anti-pattern...

    - by Robert Fraser
    I've been reading up on this "Law of Demeter" thing, and it, along with pure "wrapper" classes in general, seems to be an anti-pattern. Consider an implementation class: class Foo { void doSomething() { /* whatever */ } } Now consider two different implementations of another class: class Bar1 { private static Foo _foo = new Foo(); public static Foo getFoo() { return _foo; } } class Bar2 { private static Foo _foo = new Foo(); public static void doSomething() { _foo.doSomething(); } } And the ways to call said methods: callingMethod() { Bar1.getFoo().doSomething(); // Version 1 Bar2.doSomething(); // Version 2 } At first blush, version 2 seems a bit simpler for callers, and follows the "rule of Demeter", hides Foo's implementation, etc., etc. But this ties any changes in Foo to Bar. For example, if a parameter is added to doSomething, then we have: class Foo { void doSomething(int x) { /* whatever */ } } class Bar1 { private static Foo _foo = new Foo(); public static Foo getFoo() { return _foo; } } class Bar2 { private static Foo _foo = new Foo(); public static void doSomething(int x) { _foo.doSomething(x); } } callingMethod() { Bar1.getFoo().doSomething(5); // Version 1 Bar2.doSomething(5); // Version 2 } In both versions, Foo and callingMethod need to be changed, but in Version 2, Bar also needs to be changed. Can someone explain the advantage of having a wrapper/facade (with the exception of adapters or wrapping an external API or exposing an internal one)?

    Read the article

  • How can I store HTML in a Doctrine YML fixture

    - by argibson
    I am working with a CMS-type site in Symfony 1.4 (Doctrine 1.2) and one of the things that is frustrating me is not being able to store HTML pages in YML fixtures. Instead I have to create SQL backups of the data if I want to drop and rebuild which is a bit of a pest when Symfony/Doctrine has a fantastic mechanism for doing exactly this. I could write a mechanism that reads in a set of HTML files for each page and fills the data in that way (or even write it as a task). But before I go down that road I am wondering if there is any way for HTML to be stored in a YML fixture so that Doctrine can simply import it into the database. Update: I have tried using symfony doctrine:data-dump and symfony doctrine:data-load however despite the dump correctly creating the fixture with the HTML, the load task appears to 'skip' the value of the column with the HTML and enters everything else into the row. In the database the field doesn't show up as 'NULL' but rather empty so I believe Doctrine is adding the value of the column as ''. Below is a sample of the YML fixture that symfony doctrine:data-dump created. I have tried running symfony doctrine:data-load against various forms of this including removing all the escaped characters (new lines and quotes leaving only angle brackets) but it still doesn't work. Product_69: name: 'My Product' Developer: Developer_30 tagline: 'Text that briefly describes the product' version: '2008' first_published: '' price_code: A79 summary: '' box_image: '' description: "<div id=\"featureSlider\">\n <ul class=\"slider\">\n <li class=\"sliderItem\" title=\"Summary\">\n <div class=\"feature\">\n Some text goes in here</div>\n </li>\n </ul>\n </div>\n" is_visible: true

    Read the article

  • Building a python module and linking it against a MacOSX framework

    - by madflo
    I'm trying to build a Python extension on MacOSX 10.6 and to link it against several frameworks (i386 only). I made a setup.py file, using distutils and the Extension object. In order to link against my frameworks, my LDFLAGS env var should look like: LDFLAGS = -lc -arch i386 -framework fwk1 -framework fwk2 As I did not find any 'framework' keyword in the Extension module documentation, I used the extra_link_args keyword instead. Extension('test', define_macros = [('MAJOR_VERSION', '1'), ('MINOR_VERSION', '0')], include_dirs = ['/usr/local/include', 'include/', 'include/vitale'], extra_link_args = ['-arch i386', '-framework fwk1', '-framework fwk2'], sources = ['testmodule.cpp'], language = 'c++' ) Everything is compiling and linking fine. If I remove the -framework line from the extra_link_args, my linker fails, as expected. Here are the last two lines produced by python setup.py build: /usr/bin/g++-4.2 -arch x86_64 -arch i386 -isysroot / -L/opt/local/lib -arch x86_64 -arch i386 -bundle -undefined dynamic_lookup build/temp.macosx-10.6-intel-2.6/testmodule.o -o build/lib.macosx-10.6-intel-2.6/test.so -arch i386 -framework sgdosx -framework srtosx -framework ssvosx -framework stsosx Unfortunately, the .so that I just produced is unable to find several symbols provided by these frameworks. I tried to check the linked frameworks with otool. None of them appears. $ otool -L test.so test.so: /usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 7.9.0) /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.0.1) Here is the output of otool run on a test binary, made with g++ and ldd using the LDFLAGS described at the top of my post. In this example, the -framework did work. $ otool -L vitaosx vitaosx: /Library/Frameworks/sgdosx.framework/Versions/A/sgdosx (compatibility version 1.0.0, current version 1.0.0) /Library/Frameworks/ssvosx.framework/Versions/A/ssvosx (compatibility version 1.0.0, current version 1.0.0) /usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 7.9.0) /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.0.1) Could this issue be linked to the "-undefined dynamic_lookup" flag in the linking step? I'm a little bit confused by the few lines of documentation that I'm finding on Google. Cheers,

    Read the article
