Search Results

Search found 21861 results on 875 pages for 'external library'.


  • Website is not clickable in Windows XP Internet Explorer 7

    - by c-sharp newbie
    I have installed Windows XP Virtual PC on Windows 7 to test a site that is having issues in IE7 on Windows XP. The website loads, but you cannot click on hyperlinks - it's like the website has frozen. Is this a browser support issue or an OS issue? Can anyone shed any light on this? Are there any browser tools I can use to spot problems? Sorry if I've been too vague - not much else to say really; completely lost. Any guidance is appreciated. UPDATE: I think I know what the problem may be - it's the jQuery UI reference that is causing issues with the site. Has anyone else experienced similar problems? The jQuery library used was jquery-1.8.0.min.js.

    Read the article

  • MacBook repeatedly disconnects from Wi-Fi

    - by redwall_hp
    I have an early 2008-model MacBook (2.4 GHz). The Wi-Fi router I have at home is a Linksys WRT54GX2 that I have had for a few years. My MacBook has recently started disconnecting from the router every few minutes, which is rather annoying. I can reconnect again without having to restart the router or anything, as it seems that the MacBook is just dropping the connection. I have tried changing the channel on the router, and upgrading the laptop from Leopard to Snow Leopard made no difference either. I'm only about six feet from the Linksys device, so distance isn't an issue. This only happens with the Linksys router, while I can use the local library's open network without any issues. The problem also seemingly becomes more pronounced after midnight. What could the problem be? Edit: Here are the logs that Spiff requested: http://pastie.org/951761

    Read the article

  • Is memcached pre-installed with Xcode on OS X?

    - by GeneQ
    The answer seems to be yes, according to Apple's documentation. To make sure I'm not hallucinating, I checked Apple's developer tools documentation. Yes, memcached is supported and documented by Apple: Apple Developer Library : memcached(1). Apparently, memcached has been installed with Xcode since at least 10.6. The reason I'm asking is that on the Web and on SO there are lots of people asking how to install memcached on OS X, but curiously no one seems to mention that the easiest way to do so is to simply install Xcode via the App Store (or using a DMG). All the answers given involve using homebrew or some other complicated way to install memcached from source. Is there any compelling reason why the Apple-shipped memcached is not good enough? I don't see any advantages to compiling and installing memcached from source when there's an Apple-supported version.

    Read the article

  • Solving “XmlSchemaException: The global element '<elementName>' has already been declared”

    - by ChrisD
    I recently encountered this error when I attempted to consume a newly hosted WCF service. The service used the Request/Response model and had been properly decorated. The response and request objects were marked as DataContracts and had a specified namespace. My WCF service interface was marked as a ServiceContract and shared the same namespace attribute value. Everything should have been fine, right?

        [ServiceContract(Namespace = "http://schemas.myclient.com/09/12")]
        public interface IProductActivationService
        {
            [OperationContract]
            ActivateSoftwareResponse ActivateSoftware(ActivateSoftwareRequest request);
        }

    Well, not exactly. Apparently the WSDL generator was having an issue:

        System.Xml.Schema.XmlSchemaException: The global element 'http://schemas.myclient.com/09/12:ActivateSoftwareResponse' has already been declared.

    After digging, I found the problem: the WSDL generator has some reserved suffixes for its entities, including Response, Request, and Solicit (see http://msdn.microsoft.com/en-us/library/ms731045.aspx). The error message is actually the result of a naming conflict. The WSDL generator uses the namespace of the service to build its reserved types. The service contract and data contracts share a namespace, which, coupled with the Response/Request suffixes I was using in my class names, resulted in the SchemaException.
    The Fix: two options:
    - Rename my data contract entities to use a non-reserved suffix (i.e. change ActivateSoftwareResponse to ActivateSoftwareResp), or
    - Change the namespace of the data contracts to differ from the service contract namespace.
    I chose option 2 and changed all my data contracts to use a "http://schemas.myclient.com/09/12/data" namespace value. This avoided the name collision, and I was able to produce my WSDL and consume my service.
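    For reference, here is what option 2 looks like in code. This is a minimal sketch: the LicenseKey and Success members are assumed for illustration, since the original contract bodies aren't shown above.

        using System.Runtime.Serialization;
        using System.ServiceModel;

        // Sketch of option 2: the data contracts get their own namespace, so the
        // generated schema types no longer collide with the WSDL generator's
        // reserved Request/Response suffixes. The member shapes are assumed.
        [DataContract(Namespace = "http://schemas.myclient.com/09/12/data")]
        public class ActivateSoftwareRequest
        {
            [DataMember]
            public string LicenseKey { get; set; }
        }

        [DataContract(Namespace = "http://schemas.myclient.com/09/12/data")]
        public class ActivateSoftwareResponse
        {
            [DataMember]
            public bool Success { get; set; }
        }

        // The service contract keeps its original namespace.
        [ServiceContract(Namespace = "http://schemas.myclient.com/09/12")]
        public interface IProductActivationService
        {
            [OperationContract]
            ActivateSoftwareResponse ActivateSoftware(ActivateSoftwareRequest request);
        }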

    Read the article

  • Poor quality when trying to stream a 720p video to XBox 360 using Media Center Extender

    - by MBraedley
    I have my Xbox 360 set up as a Media Center Extender for my Windows 7 desktop. SD-quality AVI videos stream fine to my Xbox, either through the video library or through Media Center Extender, but when I try a 720p MKV file, the frame rate plummets and the A/V sync is completely lost. I don't want to transcode or switch container formats (MKV isn't supported by the 360), but I still want to stream. Both my desktop and the 360 are plugged into the same gigabit switch, which is plugged into my ISP-supplied modem/router. The video plays fine on my machine in a number of programs. Considering that I should have more than enough bandwidth to accommodate this video, why won't it play back properly?

    Read the article

  • Software Architecture Analysis Method (SAAM)

    Software Architecture Analysis Method (SAAM) is a methodology used to determine how specific application quality attributes were achieved, and how possible future changes will affect those quality attributes, based on hypothetical case studies. Common quality attributes that can be evaluated with this methodology include modifiability, robustness, portability, and extensibility.
    Quality Attribute: Application Modifiability
    The modifiability quality attribute refers to how easy the system will be to change in the future. To me this is a very open-ended attribute, because a business could decide to transform a Point of Sale (POS) system into a Lead Tracking system overnight. (Yes, this did actually happen to me.) For SAAM to be properly applied to this attribute, specific hypothetical case studies need to be created and reviewed, because different scenarios will return different results based on the amount of change involved. In the POS case, changing out a payment gateway or adding an additional payment method would have scored very high in comparison to changing the system over to a lead management system. I would personally evaluate this quality attribute based on the S.O.L.I.D. principles of software design; I have found from my experience that the use of S.O.L.I.D. in software design allows a system to absorb changes.
    Quality Attribute: Application Robustness
    The robustness quality attribute refers to how an application handles the unexpected. The unexpected can be defined as, but is not limited to, anything not anticipated in the original design of the system: for example, bad data, limited or no network connectivity, invalid permissions, or any unexpected application exceptions. I would personally evaluate this quality attribute based on how the system handles exceptions (see the sketch at the end of this post).
    Robustness Considerations:
    - Did the system stop, or did it handle the unexpected error?
    - Did the system log the unexpected error for future debugging?
    - What message did the user receive about the error?
    Quality Attribute: Application Portability
    The portability quality attribute refers to the ease of porting an application to run on a new operating system or device. For example, it is much easier to alter an ASP.NET website to be accessible from a PC, Mac, iPhone, Android phone, mini PC, or tablet than to do the same for a desktop application written in VB.NET, because a lot more work would be involved to get the desktop app to the point where porting it to the various environments and devices would be viable. I would personally evaluate this quality attribute based on each new environment the hypothetical case study identifies, paying particular attention to the following items.
    Portability Considerations:
    - Hardware dependencies
    - Operating system dependencies
    - Data source dependencies
    - Network dependencies and availability
    Quality Attribute: Application Extensibility
    The extensibility quality attribute refers to the ease of adding new features to an existing application without impacting existing functionality. I would personally evaluate this quality attribute based on the following.
    Extensibility Considerations:
    - Hard-coded variables versus configurable variables
    - Application documentation (external documents and codebase documentation)
    - The use of S.O.L.I.D. design principles
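    To make the robustness considerations concrete, here is a minimal sketch of the kind of handling I look for. Logger and UserNotifier are hypothetical stand-ins for whatever logging and UI facilities the system actually has.

        using System;

        public decimal ReadPaymentAmount(string rawInput)
        {
            try
            {
                return decimal.Parse(rawInput); // bad data may arrive here
            }
            catch (FormatException ex)
            {
                // The system handles the unexpected error instead of stopping,
                // logs it for future debugging (Logger is hypothetical)...
                Logger.Error("Unparseable payment amount: " + rawInput, ex);
                // ...and gives the user a meaningful message (UserNotifier is hypothetical).
                UserNotifier.Show("The payment amount could not be read. Please re-enter it.");
                return 0m;
            }
        }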

    Read the article

  • How to install PyQt on Mac OS X 10.6

    - by Albert
    I want to install PyQt. This seems kind of complicated to install on OS X. I haven't found any precompiled packages of it (are there any? I would really prefer those). So I downloaded PyQt. And SIP, because it depends on that. These files: http://www.riverbankcomputing.co.uk/static/Downloads/PyQt4/PyQt-mac-gpl-4.7.3.tar.gz http://www.riverbankcomputing.co.uk/static/Downloads/sip4/sip-4.10.2.tar.gz Did a python configure.py && make && sudo make install on SIP -- installed without any problems. Tried the same on PyQt -- and failed of course: /Library/Frameworks/QtCore.framework/Headers/qglobal.h:288:2: error: #error "You are building a 64-bit application, but using a 32-bit version of Qt. Check your build configuration." Ok, so I tried with python configure.py --use-arch=i386. Same error. Any idea?

    Read the article

  • View scheduled recordings remotely

    - by Scott
    I have one computer which is equipped with a TV tuner card. The Recorded TV folder is shared so that other computers on the network can watch recordings. Now that I am upgrading my computers to Windows 7, they have Media Center, which makes for a nicer viewing experience than a Windows Explorer folder view. I have set up Media Center on my laptop to consider the Recorded TV folder on the tuner-equipped PC as an extended library location, so now I can view all the recordings from within Media Center on the laptop. To make this experience even better, is there a way to view the list of scheduled recordings on the tuner-equipped PC from within Media Center on my laptop?

    Read the article

  • How can I create an orthographic display that handles different screen dimensions?

    - by Piku
    I'm trying to create an iPad/iPhone game using GLES 2.0 that contains a 3D scene with a heads-up display/GUI overlaid on top. This problem would also apply if I were to port my game to a computer and run it in a resizable window, or allow the user to change screen resolutions... When trying to make the 2D GUI/HUD work, I've made the assumption that all I'm really doing is drawing a load of 2D textured 'quads' on the screen, and I'm trying to treat the orthographic projection as an old-style 2D display with 0,0 in the upper left and screenWidth,screenHeight in the lower right. This causes me all sorts of confusion when I rotate my iPad into landscape mode, since I can't work out what to put into my projection and modelview matrices to turn everything around the right way. It also gets messy if I want to support the iPad's large screen, an iPhone, or a Retina display, since I then have to draw three sets of textures for everything and work out which ones to use. Should I be trying to map the 2D OpenGL co-ords 1:1 with the screen? While typing out this question it occurs to me that I could keep my origin in the centre, still running -1/+1 along the axes. This would let me scale my 2D content appropriately on the different screen sizes, but wouldn't I end up with the textures being scaled and possibly losing quality? I'm using OpenGL ES 2.0 and have a matrix library that has equivalents to the GLES 1.1 glOrthof() and glFrustumf() calls.
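    To illustrate the 0,0-in-the-upper-left mapping: the projection is cheap to rebuild whenever the screen rotates or resizes. Here is a minimal sketch of a glOrthof(0, width, height, 0, -1, 1) equivalent, shown in C# purely as illustration; the resulting column-major 16 floats can be fed to any GLES2 matrix uniform.

        // Column-major orthographic projection with the origin in the upper left
        // and y growing downward; rebuild on every orientation or size change.
        static float[] Ortho2D(float width, float height)
        {
            float l = 0f, r = width, b = height, t = 0f, n = -1f, f = 1f;
            return new float[]
            {
                2f / (r - l), 0f,           0f,            0f,
                0f,           2f / (t - b), 0f,            0f,
                0f,           0f,           -2f / (f - n), 0f,
                -(r + l) / (r - l), -(t + b) / (t - b), -(f + n) / (f - n), 1f
            };
        }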

    Read the article

  • How can I instruct nautilus to pre-generate PDF thumbnails?

    - by Glutanimate
    I have a large library of PDF documents (papers, lectures, handouts) that I want to be able to quickly navigate through. For that I need thumbnails. At the same time, however, I see that the ~/.thumbnails folder is piling up with thumbs I don't really need, and deleting thumbnail junk without removing the important thumbs is impossible. If I were to delete them, I'd have to go to each and every folder with important PDF documents and let the thumbnail cache regenerate. I would love to be able to automate this process. Is there any way I can tell Nautilus to pre-cache the thumbs for a given set of directories? Note: I did find a set of bash scripts that appear to do this for pictures and videos, but not for any other documents. Maybe someone more experienced with scripting might be able to adjust these for PDF documents, or at least point me in the right direction on what I'd have to modify for this to work with PDF documents as well. Edit: Unfortunately, neither of the answers provided below works. See my comments below for more information. Is there anyone who can solve this?

    Read the article

  • Choosing an open source license such that maximum value is added to a startup

    - by echo-flow
    There are many companies that produce open source software products, and many business models that these companies can use. I'm particularly interested in companies like 280 North, the company behind the Objective-J and Cappuccino frameworks. My understanding of this organization's business model is that they: worked to develop a tool which added significant value to developers; released the tool under an open source license; built a community around the tool (which was helped by the project's open source licensing); and created interesting demos illustrating the project's value. All of these things added value to the project and to the company that owned it. Finally, 280 North was sold to Motorola. My question has to do with the role of software licensing in this particular business model. 280 North licensed their software projects under the LGPL, which gave them some proprietary control over how the projects could be used. I believe the LGPL is what's known as a "weak copyleft" license, meaning that the project can be linked to without the linking code also being licensed under the LGPL, but software derived directly from the project would need to be licensed under the LGPL. For web-oriented libraries in particular, weak-copyleft or non-copyleft licensing seems to be quite common; I can't think of a single example of a popular or well-known web-oriented library that is licensed under the GPL (or AGPL). The question, then, is how much value would a weak copyleft license like the LGPL add to a software venture like 280 North, versus a non-copyleft license such as the BSD license or the Apache Software License? I'd really appreciate any insight anyone can offer into this, but I'd be most interested in answers that can cite other companies as case studies or examples.

    Read the article

  • Migrate TFS 2010 Application Tier to another server on the same domain

    - by Liam
    I'm in the process of looking into the possibility of moving our TFS 2010 Application Tier to another server from the one it is on at the moment, so that we can repurpose the hardware. I've been looking through the Microsoft documentation over at http://msdn.microsoft.com/en-us/library/ms404869.aspx, but this assumes that everything is stored on one box (Application and Data tiers). In my setup, however, the Data tier is separate from the Application tier and will be staying where it is. I think I should be able to do this, but for my own peace of mind: would there be any issues or implications if I merely installed the Application Tier on the new hardware and then connected it to the existing Data tier?

    Read the article

  • PDF export printing in Internet Explorer [closed]

    - by user619804
    protected static byte[] exportReportToPdf(JasperPrint jasperPrint) throws JRException {
        JRPdfExporter exporter = new JRPdfExporter();
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        exporter.setParameter(JRExporterParameter.JASPER_PRINT, jasperPrint);
        exporter.setParameter(JRExporterParameter.OUTPUT_STREAM, baos);
        exporter.setParameter(JRPdfExporterParameter.PDF_JAVASCRIPT,
            "this.print({bUI: true, bSilent: false, bShrinkToFit: true});");
        exporter.exportReport();
        return baos.toByteArray();
    }
    We are using code like this to export a PDF document from a Jasper application. The JRPdfExporterParameter.PDF_JAVASCRIPT line adds JavaScript to send the PDF document directly to the printer. The expected behavior is that a print dialog will come up with a preview of the PDF document. This works fine most of the time, but about one out of every 5-6 times in Internet Explorer 8 and Firefox the print preview dialog either does not appear or appears with a blank document in the preview window.
    - I've tried a number of different JavaScripts (different params to this.print() via exporter.setParameter).
    - I've tried setting different response headers, such as:
        response.setContentType("application/pdf");
        response.setHeader("Content-disposition", "inline; filename=\"" + reportName + "\"");
        response.setContentLength(baos.size());
    These did not seem to help. The problem seems specific to IE and Firefox. Has anyone ever dealt with this problem? I need to get it to work across all browsers 100% of the time. Perhaps there is a different approach to accomplish the goal of sending the PDF export directly to the printer, or a third-party library that will work across browsers?

    Read the article

  • V4L2 and ALSA: Kernel SPI or User API?

    - by pnongrata
    I'm trying to understand what Video for Linux and ALSA are (exactly), and I can't discern whether they're APIs for Linux applications to use (userspace), backend services that are only available to the Linux kernel (sort of a kernelspace SPI), or something entirely different. On one hand, the articles make it sound like they're APIs for applications to use. However, the V4L2 page has a section titled Software supporting Video4Linux... So is V4L2 a library that applications use, or is it a module that "snaps into" the kernel? I'm so confused; thanks in advance.

    Read the article

  • Unable to remotely schedule tasks from the command line

    - by Eptin
    I'm on a Windows 7 machine, attempting to use the command line to schedule a task on another Windows 7 machine in my company's network. I have administrative-level credentials for both computers. With help from http://msdn.microsoft.com/en-us/library/windows/desktop/bb736357.aspx I have created this line to run on the command prompt: schtasks /Create /S machinename /U username /P password /SC ONCE /TN Test1 /TR C:\Windows\System32\calc.exe /ST 16:30 Whenever I launch that, I get the following error: ERROR: User credentials are not allowed on the local machine. How can I fix this?

    Read the article

  • What are some techniques I can use to refactor Object Oriented code into Functional code?

    - by tieTYT
    I've spent about 20-40 hours developing part of a game using JavaScript and HTML5 canvas. When I started I had no idea what I was doing, so it started as a proof of concept and is coming along nicely now, but it has no automated tests. The game is starting to become complex enough that it could benefit from some automated testing, but it seems tough to do because the code depends on mutating global state. I'd like to refactor the whole thing using Underscore.js, a functional programming library for JavaScript. Part of me thinks I should just start from scratch in a functional programming style, with testing. But I think refactoring the imperative code into declarative code might be a better learning experience, and a safer way to get to my current state of functionality. Problem is, I know what I want my code to look like in the end, but I don't know how to turn my current code into it. I'm hoping some people here could give me some tips a la the Refactoring book and Working Effectively With Legacy Code. For example, as a first step I'm thinking about "banning" global state: take every function that uses a global variable and pass it in as a parameter instead. The next step may be to "ban" mutation, and to always return a new object. Any advice would be appreciated. I've never taken OO code and refactored it into functional code before.
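    To show the kind of transformation I mean (sketched in C# purely for illustration; my actual code is JavaScript), here is the "ban mutation" step applied to a tiny entity: a moved player is a new value rather than a change to shared state.

        // Immutable entity: no setters, no global state; moving produces a new value.
        public sealed class Player
        {
            public double X { get; }
            public double Y { get; }

            public Player(double x, double y) { X = x; Y = y; }

            // Pure function: same inputs, same output, nothing mutated in place.
            public Player MovedBy(double dx, double dy)
            {
                return new Player(X + dx, Y + dy);
            }
        }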

    Read the article

  • Font display issue (Mac OS X)?

    - by avenas8808
    I used a font manager on Mac OS X for additional fonts in my graphic design projects, without installing them to the fonts folder (I think that's how it works) - using Font Book and Font Explorer X version 1.2.3 on OS X 10.6. Most fonts work fine, but Interstate has a problem: Interstate Regular is installed, but for some reason the system apparently isn't seeing it, while it sees all the Bold and Condensed versions fine. In the above image, it displays the second font as Interstate Regular, but it isn't that font... why? Also, how do I reset the system fonts folder back to the default installed fonts (I think it's in the Library folder) if worst comes to worst? And is using a font manager on Mac or Windows a good idea? I don't want to wreck my system; I'm fairly new to using a Mac, especially OS X, so any help would be gratefully accepted.

    Read the article

  • Designing a system with different business rules for different customers

    - by user1595846
    My company is rewriting our proprietary business application. The current architecture is poorly done and inflexible. It is coded in a procedural style rather than an object-oriented one, and it has become difficult to maintain. Our system is a web application written in .NET Webforms, and I am considering ASP.NET MVC for the rewrite. We intend to rebuild it on a good, solid architecture, with the goals of maintainability and reusable classes for some of our other systems and services. We would also like the system to be customizable for different customers in the event that we market it. I am considering redesigning the system based on the layered architecture (Presentation, Business, and Data Access layers) described in the Microsoft Patterns and Practices Application Architecture Guide: http://msdn.microsoft.com/en-us/library/ff650706.aspx Hopefully this isn't too open-ended, but how would you recommend allowing for different business logic/rules for different customers? I'm aware of Windows Workflow Foundation, but from what I've read about it, many business rules could be too complicated to handle there. Also, can anyone point me to where I can download an example of a .NET solution based on the Application Architecture Guide? I have already downloaded the Layered Architecture Solution Guidance and the Expense Sample on CodePlex; I was looking for something a bit larger and more robust that I could step through and see how it works. If you feel there are better architectures to base our redesign on, please feel free to share. I appreciate your help!
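    To illustrate the kind of per-customer flexibility I mean, here is a minimal sketch (all names hypothetical) of one common approach: each rule set lives behind an interface in the business layer, and a factory picks the customer-specific implementation.

        // Hypothetical sketch: customer-specific business rules behind one interface.
        public interface IDiscountPolicy
        {
            decimal ApplyDiscount(decimal orderTotal);
        }

        public class StandardDiscountPolicy : IDiscountPolicy
        {
            public decimal ApplyDiscount(decimal orderTotal)
            {
                return orderTotal; // default: no discount
            }
        }

        public class AcmeDiscountPolicy : IDiscountPolicy
        {
            public decimal ApplyDiscount(decimal orderTotal)
            {
                // Customer-specific rule: 10% off orders over 1000.
                return orderTotal > 1000m ? orderTotal * 0.9m : orderTotal;
            }
        }

        public static class DiscountPolicyFactory
        {
            public static IDiscountPolicy ForCustomer(string customerId)
            {
                return customerId == "ACME"
                    ? (IDiscountPolicy)new AcmeDiscountPolicy()
                    : new StandardDiscountPolicy();
            }
        }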

    Read the article

  • Some Problems Can't Be Outsourced

    - by mikef
    More and more companies are becoming attracted to the idea of Infrastructure as a Service (or IaaS). It would seem that you can outsource the provisioning and management of your services, encompassing everything from email, through to your servers, workstations and software, all the way down to your LAN and internet services. This type of outsourcing can be a very attractive option for companies that have tight budgets, are short of technical skills, or don't have the means to provide long-term IT support. Essentially, you can outsource your services at low short-term costs that are knowable and controllable, are quickly and easily scalable, and generate a minimum of hassle for your internal staff. If you want to get a sophisticated IT infrastructure set up in a hurry, without the usual high buy-in costs or the task of finding and hiring the right specialists, it would seem the way to go, particularly when their salesmen are hypnotizing you with oleaginous phrases such as "we are closely aligned with our client organization's core business requirements, providing agile services". It sounds too good to be true, and so it is. Whereas the costs will have initially been calculated on the annual renewal fees and service fees for ongoing support, there are other charges too which aren't so obvious. It can end up costing far more than the conventional solution once you take into account the extra costs and the fees for customization and upgrades. The Total Cost of Ownership (TCO) only becomes apparent when it is too late to extract the company easily from the arrangement. After a few years, these annual fees can add up to more than the initial cost of implementing a traditional in-house system. Worse than that, you can then lose your power to determine your priorities: when you become reliant on this company, with its own schedule of priorities, to implement every change, however simple, you have effectively lost control of your technical infrastructure. This will make senior management very nervous. There is definitely a requirement for this sort of service. If you urgently need an exceptionally high class of service or more expertise than you currently possess, then outsourcing is probably for you. You and your IT colleagues will always have something to do, be it user assistance, smoothing out integrations with an external provider, or working on something entirely new. Heck, if you outsource to IBM, the SysAdmins can go along for the ride and polish their expertise. What you need to figure out is how much your time is worth, because time is ultimately all that outsourcing will buy you and your organization. Now you just need to convince your nervous CEO. Cheers, Michael

    Read the article

  • Pros and Cons of Facebook's React vs. Web Components (Polymer)

    - by CletusW
    What are the main benefits of Facebook's React over the upcoming Web Components spec and vice versa (or perhaps a more apples-to-apples comparison would be to Google's Polymer library)? According to this JSConf EU talk and the React homepage, the main benefits of React are:
    - Decoupling and increased cohesion using a component model
    - Abstraction, composition and expressivity
    - Virtual DOM & synthetic events (which basically means they completely re-implemented the DOM and its event system)
    - Enables modern HTML5 event stuff on IE 8
    - Server-side rendering
    - Testability
    - Bindings to SVG, VML, and <canvas>
    Almost everything mentioned is being integrated into browsers natively through Web Components, except this virtual DOM concept (obviously). I can see how the virtual DOM and synthetic events can be beneficial today to support old browsers, but isn't throwing away a huge chunk of native browser code kind of like shooting yourself in the foot in the long term? As far as modern browsers are concerned, isn't that a lot of unnecessary overhead/reinventing of the wheel?
    Here are some things I think React is missing that Web Components will take care of. Correct me if I'm wrong.
    - Native browser support (read "guaranteed to be faster")
    - Write script in a scripting language, write styles in a styling language, write markup in a markup language
    - Style encapsulation using Shadow DOM (React instead has this, which requires writing CSS in JavaScript; not pretty)
    - Two-way binding

    Read the article

  • How do you create virtual folders from saved search

    - by Jérôme Radix
    I would like to have, on unix-like platforms, the same functionality as Windows 7 Library folders (aka virtual folders) that you see in Windows Explorer. GNOME Nautilus does that kind of virtual folder through saved searches, but I want a system-wide solution, not a GNOME-wide solution. Is there a tool that creates virtual folders from the concatenation of multiple search queries (the result of multiple find commands)? The solution should index files for better performance, and you should be able to define the default folder for copy operations. I assume the solution to this kind of problem certainly uses FUSE, but I can't see a complete solution to this kind of task among FUSE applications.

    Read the article

  • Should I make a separate unit test for a method, if it only modifies the parent state?

    - by Dante
    Should classes that modify the state of a parent class, but not their own, be unit tested separately? By separately, I mean putting the test in the corresponding unit test class that tests that specific class. I'm developing a library based on chained methods, where calling a chained method in most cases returns a new instance of a new type. The returned instances only modify the root parent's state, not their own. An overly simplified example, to get the point across:

        public class BoxedRabbits
        {
            private readonly Box _box;

            public BoxedRabbits(Box box)
            {
                _box = box;
            }

            public void SetCount(int count)
            {
                _box.Items += count;
            }
        }

        public class Box
        {
            public int Items { get; set; }

            public BoxedRabbits AddRabbits()
            {
                return new BoxedRabbits(this);
            }
        }

        var box = new Box();
        box.AddRabbits().SetCount(14);

    Say, if I write a unit test under the Box class unit tests for box.AddRabbits().SetCount(14), I could effectively say that I've already tested the BoxedRabbits class as well. Is this the wrong way of approaching this, even though it's far simpler to first write a test for the above call than to write a unit test for BoxedRabbits separately?
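    For comparison, here is what the separate test would look like; a minimal sketch assuming NUnit:

        using NUnit.Framework;

        [TestFixture]
        public class BoxedRabbitsTests
        {
            [Test]
            public void SetCount_AddsToTheParentBoxState()
            {
                var box = new Box();
                var boxedRabbits = new BoxedRabbits(box);

                boxedRabbits.SetCount(14);

                // The assertion still lands on the parent's state, which is
                // exactly what makes the Box-level test feel equivalent.
                Assert.AreEqual(14, box.Items);
            }
        }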

    Read the article

  • Sub-systems in game engines

    - by Hillel
    So here's the problem: I'm writing my own engine library, and it works fine with stuff like menus and the actual game screen. The thing is, I can't really figure out how to integrate something like an intro, or dialogue preceding certain levels, into this system. Let's look at another example: say I have a game-specific engine which gets a Level object and runs it. The Engine would have its own collision and physics system, all hard-coded. Now, what if at some point in a level I want the player to enter a mini-game with different rules? How do I morph the Engine class to support these sub-systems without having to deal with their code all the time (as in: if(regular game) ... else if(mini game) ...)? And what if I want an intro animation at the start of a level, after which the player can assume control of his character - do I implement the animation in the Engine class itself? Or maybe I need to run another class, CutScene, and when it ends, it calls Engine and starts the level? What if I want to add a dialogue system, where at the start of each level there's a short dialogue during which the player can't control his character, and once it ends, he can? Would I then run the dialogue code inside the Engine code? Maybe these sub-systems should all be scripted? I don't know anything about scripting - is it necessary for this kind of situation? Any help would be appreciated.
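    To sketch the kind of structure I'm imagining (just an illustration, all names hypothetical): a stack of self-contained states, where the engine only updates whatever is on top, so a cutscene, dialogue, or mini-game pushes itself over the level and pops itself off when done.

        using System.Collections.Generic;

        public interface IGameState
        {
            void Update(float dt);   // called only while this state is topmost
            bool IsFinished { get; } // tells the engine to pop this state
        }

        public class Engine
        {
            private readonly Stack<IGameState> _states = new Stack<IGameState>();

            public void Push(IGameState state)
            {
                _states.Push(state); // e.g. a level pushes a CutScene or MiniGame
            }

            public void Update(float dt)
            {
                if (_states.Count == 0) return;
                IGameState top = _states.Peek();
                top.Update(dt);
                if (top.IsFinished) _states.Pop(); // cutscene ends, level resumes
            }
        }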

    Read the article


  • How should I start refactoring my mostly-procedural C++ application?

    - by oob
    We have a program written in C++ that is mostly procedural, but we do use some C++ containers from the standard library (vector, map, list, etc). We are constantly making changes to this code, so I wouldn't call it a stagnant piece of legacy code that we can just wrap up. There are a lot of issues with this code making it harder and harder for us to make changes, but I see the three biggest issues being:
    1. Many of the functions do more (way more) than one thing
    2. We violate the DRY principle left and right
    3. We have global variables and global state up the wazoo
    I was thinking we should attack areas 1 and 2 first. Along the way, we can "de-globalize" our smaller functions from the bottom up, by passing information that is currently global into the lower-level functions as parameters from the higher-level functions, and then concentrate on figuring out how to remove the need for global variables as much as possible. I just finished reading Code Complete 2 and The Pragmatic Programmer, and I learned a lot, but I am feeling overwhelmed. I would like to implement unit testing, change from a procedural to an OO approach, automate testing, use a better logging system, fully validate all input, implement better error handling, and many other things, but I know that if we started all this at once, we would screw ourselves. I am thinking the three I listed are the most important to start with. Any suggestions are welcome. We are a team of two programmers, mostly with experience with in-house scripting. It is going to be hard to justify taking the time to refactor, especially if we can't bill the time to a client. Believe it or not, this project has been successful enough to keep us busy full time and also keep several consultants busy using it for client work.
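    As a tiny illustration of the bottom-up de-globalization step described above (sketched in C# for brevity; the mechanics are identical in C++): the former global becomes a parameter, so the low-level function becomes pure and independently testable.

        // Before (sketch): int PriceWithTax(int price) { return price + price * g_taxRate / 100; }
        // After: the caller supplies what used to be global.
        static int PriceWithTax(int price, int taxRatePercent)
        {
            return price + price * taxRatePercent / 100;
        }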

    Read the article
