Search Results

Search found 9183 results on 368 pages for 'implementation latitude'.

Page 75/368 | < Previous Page | 71 72 73 74 75 76 77 78 79 80 81 82  | Next Page >

  • Java Graphics not displaying on successive function calls, why?

    - by primehunter326
    Hi, I'm making a visualization for a BST implementation (I posted another question about it the other day). I've created a GUI which displays the viewing area and buttons. I've added code to the BST implementation to recursively traverse the tree; the function takes in coordinates along with the Graphics object, which are initially passed in by the main GUI class. My idea was that I'd just have this function re-draw the tree after every update (add, delete, etc.), drawing a rectangle over everything first to "refresh" the viewing area. This also means I could alter the BST implementation (e.g., by adding a balance operation) and it wouldn't affect the visualization. The issue I'm having is that the draw function only works the first time it is called; after that it doesn't display anything. I guess I don't fully understand how the Graphics object works, since it doesn't behave the way I'd expect it to when getting passed/called from different functions. I know the getGraphics function has something to do with it. Relevant code: private void draw(){ Graphics g = vPanel.getGraphics(); tree.drawTree(g,ORIGIN,ORIGIN); } vPanel is what I'm drawing on private void drawTree(Graphics g, BinaryNode<AnyType> n, int x, int y){ if( n != null ){ drawTree(g, n.left, x-10,y+10 ); if(n.selected){ g.setColor(Color.blue); } else{ g.setColor(Color.gray); } g.fillOval(x,y,20,20); g.setColor(Color.black); g.drawString(n.element.toString(),x,y); drawTree(g,n.right, x+10,y+10); } } It is passed the root node when it is called by the public function. Do I have to have: Graphics g = vPanel.getGraphics(); ...within the drawTree function? This doesn't make sense!! Thanks for your help.

    Read the article

  • In Java, is there a performance gain in using interfaces for complex models?

    - by Gnoupi
    The title is hardly understandable, but I'm not sure how to summarize it another way; any edit to clarify is welcome. I have been told and advised to use interfaces to improve performance, even in a case which doesn't especially call for the regular "interface" role. In this case, the objects are big models (in the MVC sense), with many methods and fields. The "good use" that has been recommended to me is to create an interface with a single, unique implementation. There won't be any other class implementing this interface, for sure. I have been told that this is better, because it "exposes less" (or something close) to the other classes which will use methods from this class, as these objects refer to the object through its interface (all public methods of the implementation being reproduced in the interface). This seems quite strange to me; it looks like a C++ practice (with header files). There I see the point, but in Java? Is there really a point in making an interface for such a unique implementation? I would really appreciate some clarification on the topic, so I can justify not following this kind of practice and the hassle it creates from duplicating all the declarations. Edit: Plenty of valid points in most answers; I'm wondering whether to switch this question to a community wiki, so we can regroup these points into more structured answers.

    Read the article

  • What should a string-accepting interface look like?

    - by ybungalobill
    Hello, this is a follow-up to this question. Suppose I write a C++ interface that accepts or returns a const string. I can use a const char* zero-terminated string: void f(const char* str); // (1) The other way would be to use a std::string: void f(const string& str); // (2) It's also possible to write an overload and accept both: void f(const char* str); // (3) void f(const string& str); Or even a template in conjunction with Boost string algorithms: template<class Range> void f(const Range& str); // (4) My thoughts are: (1) is not C++ish and may be less efficient when subsequent operations need to know the string length. (2) is bad because now f("long very long C string"); invokes a construction of std::string, which involves a heap allocation. If f uses that string just to pass it to some low-level interface that expects a C string (like fopen), then it is just a waste of resources. (3) causes code duplication, although one f can call the other depending on which is the most efficient implementation. However, we can't overload based on return type, as in the case of std::exception::what(), which returns a const char*. (4) doesn't work with separate compilation and may cause even larger code bloat. Choosing between (1) and (2) based on what's needed by the implementation is, well, leaking an implementation detail into the interface. The question is: what is the preferred way? Is there any single guideline I can follow? What's your experience?
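
    For illustration, a minimal sketch of option (3) in which the two overloads share one implementation (the function body and the file names are made up, not from the original question); with this arrangement a string literal never triggers a std::string construction:

        #include <cstdio>
        #include <string>

        // Core overload works on a zero-terminated C string, since the
        // low-level call it wraps (fopen here) wants one anyway.
        void f(const char* str) {
            std::FILE* fp = std::fopen(str, "r");
            if (fp) std::fclose(fp);
        }

        // Thin forwarding overload: callers holding a std::string pay only
        // a c_str() call, not an extra allocation.
        void f(const std::string& str) {
            f(str.c_str());
        }

        int main() {
            f("data.txt");                  // binds to the const char* overload
            std::string name = "data.txt";
            f(name);                        // binds to the std::string overload
        }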

    Read the article

  • TinyMCE (jQuery plugin version) not showing toolbar in IE (works fine in other browsers)

    - by I am Wonder
    I had an implementation working well, but for some reason IE decided it was tired of playing nice. I have an advanced implementation of TinyMCE (the jQuery plugin version - see http://tinymce.moxiecode.com/examples/example_23.php for details). It still works great in every browser but IE. In IE it shows the drop-down options for Format, Font family, and Font size, but only as plain text, not as a normal drop-down. All other buttons on the toolbar are missing. (I've tried IE8 and IE8 Compatibility Mode.) I get a JavaScript error: Syntax error, Line 36, Char 1. Unfortunately the JavaScript is loaded dynamically, so this doesn't help me. Here is my implementation code for the TinyMCE editor: $(function () { $('#InputStuffHere').tinymce({ // General options theme: "advanced", plugins: "safari,pagebreak,style,layer,table,save,advhr,advimage,advlink,inlinepopups,preview,media,searchreplace,contextmenu,paste,fullscreen,noneditable,visualchars,nonbreaking,template", // Theme options theme_advanced_buttons1: "bold,italic,underline,strikethrough,|,justifyleft,justifycenter,justifyright,justifyfull,formatselect,fontselect,fontsizeselect,|,hr,removeformat", theme_advanced_buttons2: "cut,copy,paste,pastetext,pasteword,|,search,replace,|,bullist,numlist,|,outdent,indent,blockquote,|,undo,redo,|,link,unlink,image,|,forecolor,backcolor,|,spellchecker", theme_advanced_buttons3 : "", theme_advanced_toolbar_location: "top", theme_advanced_toolbar_align: "left", theme_advanced_statusbar_location: "bottom", theme_advanced_resizing: true, // Drop lists for link/image/media/template dialogs template_external_list_url: "lists/template_list.js", external_link_list_url: "lists/link_list.js", external_image_list_url: "lists/image_list.js", media_external_list_url: "lists/media_list.js", //initialization callback init_instance_callback: "TinyMCEReady", add_form_submit_trigger : false }); }); So... has anyone seen anything like this, or have any ideas for me? Thank you so much everyone!

    Read the article

  • boost::asio::io_service throws exception

    - by Ace
    Okay, I seriously cannot figure this out. I have a DLL project in MSVC that is attempting to use Asio (from Boost 1.45.0), but whenever I create my io_service, an exception is thrown. Here is what I am doing for testing purposes: void run() { boost::this_thread::sleep(boost::posix_time::seconds(5)); try { boost::asio::io_service io_service; } catch (std::exception & e) { MessageBox(NULL, e.what(), "Exception", MB_OK); } } BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved) { if (fdwReason == DLL_PROCESS_ATTACH) { boost::thread thread(run); } return TRUE; } This is what the message box shows: winsock: WSAStartup cannot function at this time because the underlying system it uses to provide network services is currently unavailable Here is what MSDN says about it (error code 10091, WSASYSNOTREADY): Network subsystem is unavailable. This error is returned by WSAStartup if the Windows Sockets implementation cannot function at this time because the underlying system it uses to provide network services is currently unavailable. Users should check: That the appropriate Windows Sockets DLL file is in the current path. That they are not trying to use more than one Windows Sockets implementation simultaneously. If there is more than one Winsock DLL on your system, be sure the first one in the path is appropriate for the network subsystem currently loaded. The Windows Sockets implementation documentation to be sure all necessary components are currently installed and configured correctly. Yet none of this seems to apply to me (or so I think). Here is my command line: /O2 /GL /D "_WIN32_WINNT=0x0501" /D "_WINDLL" /FD /EHsc /MD /Gy /Fo"Release\" /Fd"Release\vc90.pdb" /W3 /WX /nologo /c /TP /errorReport:prompt If anyone knows what might be wrong, please help me out! Thanks.

    Read the article

  • operator overloading and inheritance

    - by user168715
    I was given the following code: class FibHeapNode { //... // These all have trivial implementation virtual void operator =(FibHeapNode& RHS); virtual int operator ==(FibHeapNode& RHS); virtual int operator <(FibHeapNode& RHS); }; class Event : public FibHeapNode { // These have nontrivial implementation virtual void operator=(FibHeapNode& RHS); virtual int operator==(FibHeapNode& RHS); virtual int operator<(FibHeapNode& RHS); }; class FibHeap { //... int DecreaseKey(FibHeapNode *theNode, FibHeapNode& NewKey) { FibHeapNode *theParent; // Some code if (theParent != NULL && *theNode < *theParent) { //... } //... return 1; } }; Much of FibHeap's implementation is similar: FibHeapNode pointers are dereferenced and then compared. Why does this code work? (or is it buggy?) I would think that the virtuals here would have no effect: since *theNode and *theParent aren't pointer or reference types, no dynamic dispatch occurs and FibHeapNode::operator< gets called no matter what's written in Event.
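
    For reference, a stand-alone sketch of the pattern being asked about, using simplified stand-ins for FibHeapNode and Event: a virtual comparison operator invoked through dereferenced base-class pointers is still dispatched on the dynamic type, because dereferencing a pointer yields an lvalue referring to the pointed-to object:

        #include <iostream>

        struct Node {
            virtual ~Node() {}
            virtual bool operator<(Node&) { std::cout << "Node::operator<\n"; return false; }
        };

        struct Event : Node {
            virtual bool operator<(Node&) { std::cout << "Event::operator<\n"; return true; }
        };

        int main() {
            Node* a = new Event;
            Node* b = new Event;
            // *a has static type Node, but the call goes through the vtable,
            // so Event::operator< is the one that runs here.
            if (*a < *b) std::cout << "compared as Events\n";
            delete a;
            delete b;
        }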

    Read the article

  • Why doesn't g++ pay attention to __attribute__((pure)) for virtual functions?

    - by jchl
    According to the GCC documentation, __attribute__((pure)) tells the compiler that a function has no side-effects, and so it can be subject to common subexpression elimination. This attribute appears to work for non-virtual functions, but not for virtual functions. For example, consider the following code: extern void f( int ); class C { public: int a1(); int a2() __attribute__((pure)); virtual int b1(); virtual int b2() __attribute__((pure)); }; void test_a1( C *c ) { if( c->a1() ) { f( c->a1() ); } } void test_a2( C *c ) { if( c->a2() ) { f( c->a2() ); } } void test_b1( C *c ) { if( c->b1() ) { f( c->b1() ); } } void test_b2( C *c ) { if( c->b2() ) { f( c->b2() ); } } When compiled with optimization enabled (either -O2 or -Os), test_a2() only calls C::a2() once, but test_b2() calls b2() twice. Is there a reason for this? Is it because, even though the implementation in class C is pure, g++ can't assume that the implementation in every subclass will also be pure? If so, is there a way to tell g++ that this virtual function and every subclass's implementation will be pure?
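
    For comparison, one manual way to get the single-call behavior, independent of what the optimizer assumes about overriding classes, is to hoist the result into a local; a small sketch reusing the declarations from the snippet above (made self-contained here):

        extern void f(int);

        class C {
        public:
            virtual int b2() __attribute__((pure));
        };

        // Hand-hoisted variant of test_b2: b2() is evaluated exactly once,
        // whatever the compiler can or cannot prove about overrides.
        void test_b2_hoisted(C* c) {
            int v = c->b2();
            if (v) f(v);
        }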

    Read the article

  • C Population Count of unsigned 64-bit integer with a maximum value of 15

    - by BitTwiddler1011
    I use a population count (Hamming weight) function intensively in a Windows C application and have to optimize it as much as possible in order to boost performance. In more than half the cases where I use the function I only need to know the value up to a maximum of 15. The software will run on a wide range of processors, both old and new. I already make use of the POPCNT instruction when Intel's SSE4.2 or AMD's SSE4a is present, but would like to optimize the software implementation (used as a fallback if no SSE4 is present) as much as possible. Currently I have the following software implementation of the function: inline int population_count64(unsigned __int64 w) { w -= (w >> 1) & 0x5555555555555555ULL; w = (w & 0x3333333333333333ULL) + ((w >> 2) & 0x3333333333333333ULL); w = (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0fULL; return int((w * 0x0101010101010101ULL) >> 56); } So to summarize: (1) I would like to know if it is possible to optimize this for the case when I only want to know the value up to a maximum of 15. (2) Is there a faster software implementation (for both Intel and AMD CPUs) than the function above?
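
    For reference, here is a stand-alone, portable restatement of that fallback (with the shifts written out and unsigned long long substituted for the MSVC-specific unsigned __int64), plus a quick sanity check:

        #include <cstdio>

        inline int population_count64(unsigned long long w) {
            w -= (w >> 1) & 0x5555555555555555ULL;
            w = (w & 0x3333333333333333ULL) + ((w >> 2) & 0x3333333333333333ULL);
            w = (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0fULL;
            return static_cast<int>((w * 0x0101010101010101ULL) >> 56);
        }

        int main() {
            std::printf("%d\n", population_count64(0ULL));    // 0
            std::printf("%d\n", population_count64(0xFFULL)); // 8
            std::printf("%d\n", population_count64(~0ULL));   // 64
        }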

    Read the article

  • Can't use method from class in other file

    - by user1833848
    I am not able to use one of my methods, which I implemented in my table view cell file, in my table view controller implementation. I tried searching the web and Xcode help with no luck. My code looks like this: TableViewController.h: #import "TableViewCell.h" @interface TableViewController : UITableViewController @property (nonatomic, strong) IBOutlet UIBarButtonItem *A1Buy; @property (nonatomic, getter = isUserInteractionEnabled) BOOL userInteractionEnabled; - (IBAction)A1Buy:(UIBarButtonItem *)sender; TableViewController.m: @implementation A1ViewController @synthesize A1Buy = _A1Buy; @synthesize userInteractionEnabled; - (IBAction)A1Buy:(UIBarButtonItem *)sender { [TableViewCell Enable]; //this is where it gives an error } TableViewCell.h: @interface TableViewCell : UITableViewCell { BOOL Enable; BOOL Disable; } @property (nonatomic, getter = isUserInteractionEnabled) BOOL userInteractionEnabled; TableViewCell.m: @implementation TableViewCell; @synthesize userInteractionEnabled; - (BOOL) Enable { return userInteractionEnabled = YES; } - (BOOL) Disable { return userInteractionEnabled = NO; } As you can see, I am trying to enable user interaction with a button, but Xcode only gives me errors like "class does not have this method" and the like. All files are imported correctly, so that's not the cause. Would appreciate any help. Thanks!

    Read the article

  • Simple Oracle File repository with folder hierarchy

    - by Ope
    I have an application that stores a large number of files (XML and binary) in folder hierarchies. Currently the main method is storing them in the file system or using a legacy CMS, which we want to get rid of. The CMS supports Oracle, and a customer wants to keep the files in Oracle because of enterprise policies (backup etc.). The question is: is there a simple implementation of a file repository with a folder hierarchy for Oracle? What I am looking for is a small .Net component or example code (PL/SQL and/or .Net) that would have the following methods: Create, Delete and Exists for folders; CRUD for files; Move and potentially Copy for a file or directory; access to files and folders with paths like "/root/folder1/folder2/file.xml"; the ability to get all the files and folders in a folder and potentially also the entire directory tree. Tree traversal, getting the parent, all children etc. needs to be fast. I need the implementation in .Net, but if it were just the stored procedures, I could create the .Net calling code. I have pointers to generic articles on creating hierarchies in a DB, so if I need to do it from scratch, I know where to start. What I am asking here is: is there already an implementation that I could take without doing this from scratch? It seems like such a generic requirement... If the answer is a CMS, document management system or the like, it should be open source or at least quite cheap (some hundreds per server) and it should be possible to deploy it XCopy-style - hopefully only a couple of DLLs. I do not need - or want - a full-featured big CMS with dozens of DLLs, and especially not an MSI installation. I have tried to google this, but the words "repository", "CMS", "file hierarchy" etc. give so many answers that the searches are pretty much useless. Thanks, OPe

    Read the article

  • Objective C - creating concrete class instances from base class depending upon type

    - by indiantroy
    Just to give a real-world example, say the base class is Vehicle and the concrete classes are TwoWheeler and FourWheeler. Now the type of the vehicle - TwoWheeler or FourWheeler - is decided by the base class Vehicle. When I create an instance of TwoWheeler/FourWheeler using the alloc-init method, it calls the super implementation, like the one below, to set the values of the common properties defined in the Vehicle class, and one of these properties is type, which actually decides whether the type is TwoWheeler or FourWheeler. if (self = [super initWithDictionary:dict]){ [self setOtherAttributes:dict]; return self; } Now when I get a collection of vehicles, some of them could be TwoWheeler and others will be FourWheeler. Hence I cannot directly create an instance of TwoWheeler or FourWheeler like this: Vehicle *v = [[TwoWheeler alloc] initWithDictionary:dict]; Is there any way I can create an instance of the base class and, once I know the type, create an instance of the child class depending upon that type and return it? With the current implementation, it would result in an infinite loop because I call the super implementation from the concrete class. What would be the perfect design to handle this scenario when I don't know beforehand which concrete class should be instantiated?

    Read the article

  • Unit Tests Architecture Question

    - by Tom Tresansky
    So I've started to lay out unit tests for the following bit of code: public interface MyInterface { void MyInterfaceMethod1(); void MyInterfaceMethod2(); } public class MyImplementation1 implements MyInterface { void MyInterfaceMethod1() { // do something } void MyInterfaceMethod2() { // do something else } void SubRoutineP() { // other functionality specific to this implementation } } public class MyImplementation2 implements MyInterface { void MyInterfaceMethod1() { // do a 3rd thing } void MyInterfaceMethod2() { // do something completely different } void SubRoutineQ() { // other functionality specific to this implementation } } with several implementations and the expectation of more to come. My initial thought was to save myself time rewriting unit tests with something like this: public abstract class MyInterfaceTester { protected MyInterface m_object; @Setup public void setUp() { m_object = getTestedImplementation(); } public abstract MyInterface getTestedImplementation(); @Test public void testMyInterfaceMethod1() { // use m_object to run tests } @Test public void testMyInterfaceMethod2() { // use m_object to run tests } } which I could then subclass easily to test the implementation-specific additional methods like so: public class MyImplementation1Tester extends MyInterfaceTester { public MyInterface getTestedImplementation() { return new MyImplementation1(); } @Test public void testSubRoutineP() { // use m_object to run tests } } and likewise for implementation 2 onwards. So my question really is: is there any reason not to do this? JUnit seems to like it just fine, and it serves my needs, but I haven't really seen anything like it in any of the unit testing books and examples I've been reading. Is there some best practice I'm unwittingly violating? Am I setting myself up for heartache down the road? Is there simply a much better way out there I haven't considered? Thanks for any help.

    Read the article

  • Why can't I create a templated subclass of System::Collections::Generic::IEnumerable<T>?

    - by fiirhok
    I want to create a generic IEnumerable implementation, to make it easier to wrap some native C++ classes. When I try to create the implementation using a template parameter as the parameter to IEnumerable, I get an error. Here's a simple version of what I came up with that demonstrates my problem: ref class A {}; template<class B> ref class Test : public System::Collections::Generic::IEnumerable<B^> // error C3225... {}; void test() { Test<A> ^a = gcnew Test<A>(); } On the indicated line, I get this error: error C3225: generic type argument for 'T' cannot be 'B ^', it must be a value type or a handle to a reference type If I use a different parent class, I don't see the problem: template<class P> ref class Parent {}; ref class A {}; template<class B> ref class Test : public Parent<B^> // no problem here {}; void test() { Test<A> ^a = gcnew Test<A>(); } I can work around it by adding another template parameter to the implementation type: ref class A {}; template<class B, class Enumerable> ref class Test : public Enumerable {}; void test() { using namespace System::Collections::Generic; Test<A, IEnumerable<A^>> ^a = gcnew Test<A, IEnumerable<A^>>(); } But this seems messy to me. Also, I'd just like to understand what's going on here - why doesn't the first way work?

    Read the article

  • Luminis and Google Apps Single Sign On

    - by J.Zimmerman
    We are a community college that is plodding forward with a Google Apps for Education implementation. There is support for Single Sign On and a fair amount of documentation for implementing it. We are looking to provide integration with our portal which is currently Luminis 3 (we are upgrading to Luminis 4 within a year). There is documentation available for Luminis specific integration, but apparently it is by request only. I have put the request in to SungardHE (where we license Luminis) and am waiting for a response. My questions are as follows... Is anyone here running Luminis? Have you tried to integrate it with a 3rd party email service like Google Apps for Education or Microsoft LiveEDU? If so, can you elaborate on implementation details above and beyond your Luminis installation and Google Apps setup? Looking for more of a general road map and differences between integration options with Luminis 3 and Luminis 4. Thanks!

    Read the article

  • Do Seagate Momentus XT SSD Hybrid drives perform better than a good hard drive + flash on ReadyBoost?

    - by Chris W. Rea
    Seagate has released a product called the Momentus XT Solid State Hybrid Drive. At a glance, this looks exactly like what Windows ReadyBoost attempts to do with software at the OS level: Pairing the benefits of a large hard drive together with the performance of solid-state flash memory. Does the Momentus XT out-perform a similar ad-hoc pairing of a decent hard drive with similar flash memory storage under Windows ReadyBoost? Other than the obvious "a hardware implementation ought to be faster than a software implementation", why would ReadyBoost not be able to perform as well as such a hybrid device?

    Read the article

  • Getting live traffic/visitor analytics when using a reverse proxy

    - by jotto
    I'm in the process of implementing Varnish as a reverse proxy for a Ruby on Rails app, and I'm using Google Analytics (a JS/client-side script that records visitor data), but it's several hours delayed, so it's useless for knowing what's going on now. I need at-a-glance live data that includes referring traffic and the current req/sec. Right now I am using a simple Rack middleware application to do the live stats (gist.github.com/235745), but if the majority of traffic hits Varnish, Rack will never be hit, so this won't work. The closest solution I've found so far is http://www.reinvigorate.net/ but it's in beta (there are also no implementation details on their front page). Does Varnish have traffic logs that I can custom-format to match my Apache logs so I can combine them, or will I have to roll my own JS implementation, like GA, that shows the data in real time?

    Read the article

  • Difference in behavior of reboot

    - by LinuxPenseur
    Hi, I have two machines running Linux. On one machine, the reboot command is the executable normally found in all Linux distributions. On the second machine, the reboot command is a shell script customized with some other hardware tool commands to reboot the system. One behavioral difference between the two machines is that when I execute the reboot command on the first machine, it shows another shell prompt and only then reboots. But in the case of the second machine, it reboots without showing a shell prompt. I expect the second machine to behave the same way as the first machine when the reboot command is given. Currently I am analyzing the source code of shutdown.c and halt.c, normally found in Linux distributions, so that I can find the implementation which produces the shell prompt on reboot and use it in the shell script on the second machine. Kindly give me some pointers on where I should start looking to find the implementation. Thanks

    Read the article

  • Linux mdadm software RAID 6 - does it support bit corruption recovery?

    - by user101203
    Wikipedia says "RAID 2 is the only standard RAID level, other than some implementations of RAID 6, which can automatically recover accurate data from single-bit corruption in data." Does anyone know if the RAID 6 mdadm implementation in Linux is one such implementation that can automatically detect and recover from single-bit data corruption? This pertains to CentOS / Red Hat 6, if those are different from other versions. I tried searching online but didn't have much luck. With SATA error rates being 1 in 1E14 bits, and a 2TB SATA disk containing 1.6E13 bits, this is especially relevant to preventing data corruption. Thanks!

    Read the article

  • OpenGL extension vs OpenGL core

    - by user209347
    I have a doubt: I'm writing a cross-platform OpenGL engine in C++, and I figured out that Windows forces developers to access OpenGL features above version 1.1 through extensions. Now the thing is, on Linux, I know that I can access functions directly through glext.h if the OpenGL version supports them. The problem is: if the core on Linux doesn't support something, is it possible that there is an extension that supports the same functionality, in my case vertex buffer objects? I'm doing something like this: Windows: #define glFunction functionpointer_to_the_extension Linux: since glext.h already declares glFunction, I can write glFunction in client code and compile it both on Windows AND Linux without changing a single line in my client code using the engine (my goal). Now the thing is, I saw a tutorial use only the extension on Linux, without checking the OpenGL implementation version. If the functionality is available in the core, is it also available as an extension (VBOs, e.g.)? Or is an extension something you never know is available? I want to write an engine that uses all the possibilities of the hardware, so I need to check (on Linux) for extensions as well as the core version for any functionality I might use.
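
    A minimal sketch of the Windows side of that pattern, assuming a current OpenGL context and a glext.h that supplies the PFNGLGENBUFFERSPROC typedef (the wrapper function and pointer names are made up; wglGetProcAddress and glGenBuffers are the real entry points):

        #ifdef _WIN32
        #include <windows.h>
        #endif
        #include <GL/gl.h>
        #include <GL/glext.h>

        #ifdef _WIN32
        // opengl32.dll only exports GL 1.1, so anything newer is fetched at run time.
        static PFNGLGENBUFFERSPROC pglGenBuffers = 0;

        bool loadVboEntryPoints() {
            // wglGetProcAddress returns NULL when neither the core version nor
            // an extension exposes the function in the current context.
            pglGenBuffers = reinterpret_cast<PFNGLGENBUFFERSPROC>(
                wglGetProcAddress("glGenBuffers"));
            return pglGenBuffers != 0;
        }

        // Client code keeps writing glGenBuffers on both platforms.
        #define glGenBuffers pglGenBuffers
        #endif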

    Read the article

  • LinqPad with Azure Table Storage

    - by Sarang
    LinqPad, as we all know, has been a wonderful tool for running ad-hoc queries. With Windows Azure Table storage in the picture, LinqPad was no longer an option, and we shifted focus to Cloud Storage Studio, only to run into CSS's limited and strange querying capabilities. With some tweaking, we can get the comfortable old shoe of ad-hoc LinqPad queries back for Windows Azure Table storage. Steps: 1. Start LinqPad. 2. Right-click in the query window and select "Query Properties". 3. In the Additional References, add references to Microsoft.WindowsAzure.StorageClient, System.Data.Services.Client.dll and the assembly containing the implementation of the DataServiceContext class tied to the Windows Azure Table storage. 4. In the Additional Namespace Imports, import the same three namespaces mentioned above. 5. Then we need to provide the following details: a. the table storage account name and shared key, b. the DataServiceContext-implementing class in your code, c. a LINQ query. For example:         var storageAccountName = "myStorageAccount";  // Enter valid storage account name         var storageSharedKey = "mysharedKey"; // Enter valid storage account shared key         var uri = new System.Uri("http://table.core.windows.net/");         var storageAccountInfo = new CloudStorageAccount(new StorageCredentialsAccountKey(storageAccountName, storageSharedKey), false);         var serviceContext = new TweetPollDataServiceContext(storageAccountInfo); // Specify the DataServiceContext implementation         // The query         var query = from row in serviceContext.Table                     select row;         query.Dump(); Thanks, LinqPad!

    Read the article

  • Is it illegal to rewrite every line of an open source project in a slightly different way, and use it in a closed source project?

    - by Chris Barry
    There is some code which is GPL or LGPL that I am considering using for an iPhone project. If I took that code (JavaScript) and rewrote it in a different language for use on the iPhone, would that be a legal issue? In theory the process that has happened is that I have gone through each line of the project, learnt what it is doing, and then reimplemented the ideas in a new language. To me it seems this is like learning how to implement something, but then reimplementing it separately from the original licence. Therefore you have only copied the algorithm, which arguably you could have learnt from somewhere other than the original project. Does the licence cover the specific implementation, or the algorithm as well? EDIT------ Really glad to see this topic create a good conversation. To give a bit more backing to the project, the code involved does some kind of audio analysis. I believe it is non-trivial to learn or implement, although I was prepared to embark on this task (I'm at the level where I can implement an FFT algorithm, and this was going to go beyond that). It is a fairly low-LOC script, so I didn't think it would be too hard to do a straight port. I really like the idea of re-releasing my port as well as using it in the application. I don't see any problem with that, and it would be a great way to give something back to the community. I was going to add a line about not wanting to discuss the moral issues, but I'm quite glad I didn't, as it seems to have fired up the debate a bit. I still feel a bit odd about using open source code to learn from. Does this mean that anything one learns from an open source project is not allowed to be used in a closed source project? And how much later, or how different, does an implementation have to be to not be considered a violation of the licence? Murky! EDIT 2 -------- Follow-up question

    Read the article
