Search Results

Search found 24226 results on 970 pages for 'team foundation build'.

Page 81/970 | < Previous Page | 77 78 79 80 81 82 83 84 85 86 87 88  | Next Page >

  • Is there a chroot build script somewhere?

    - by Nils
    I am about to develop a little script to gather information for a chroot jail. In my case this looks (at first glance) pretty simple: the application has a clean rpm install and puts almost all of its files into a subdirectory of /opt. My idea is:

      - Do a find for all binaries
      - Check their library dependencies
      - Record the results in a list
      - Do an rsync of that list into the chroot target directory before starting the application

    Now I wonder: is there a script around (perl/bash/python) that already does such a job? So far I have found only specialized solutions for single applications (like sftp-chroot). Update: I see three close votes for the reason "off topic". This question arose because I have to install that ancient piece of software on a server at work. So if you still feel this is off topic, leave a comment...
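
    A minimal sketch of those four steps in Python (the application root, jail path, and ELF check are assumptions; the ldd parsing is naive, and a real jail will also need /dev nodes, config files, and so on):

        import os
        import subprocess

        APP_ROOT = "/opt/myapp"        # assumption: where the rpm installed the app
        CHROOT = "/srv/chroot/myapp"   # assumption: jail target directory

        def is_elf(path):
            # Crude binary detection: check for the ELF magic bytes.
            try:
                with open(path, "rb") as f:
                    return f.read(4) == b"\x7fELF"
            except OSError:
                return False

        def find_binaries(root):
            for dirpath, _, names in os.walk(root):
                for name in names:
                    p = os.path.join(dirpath, name)
                    if os.path.isfile(p) and is_elf(p):
                        yield p

        def library_deps(binary):
            # Parse ldd lines like "libfoo.so.1 => /lib/libfoo.so.1 (0x...)".
            out = subprocess.run(["ldd", binary], capture_output=True,
                                 text=True).stdout
            for line in out.splitlines():
                parts = line.split()
                if "=>" in parts and len(parts) >= 3 and parts[2].startswith("/"):
                    yield parts[2]
                elif parts and parts[0].startswith("/"):  # the dynamic loader
                    yield parts[0]

        files = set()
        for b in find_binaries(APP_ROOT):
            files.add(b)
            files.update(library_deps(b))

        # -R (--relative) makes rsync recreate each file's full path under CHROOT.
        subprocess.run(["rsync", "-aR"] + sorted(files) + [CHROOT], check=True)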

    Read the article

  • VMware Workstation 7 build 203739 - Capture Movie Mouse Pointer Not Visible

    - by BMIVM
    Hi, I have noticed that whenever I create a Capture Movie, the movie is fine but the mouse pointer is not visible at all. Clicks on buttons and file menus are executed, but it is hard for a viewer of the captured movie session to follow the recording smoothly. This was not a problem in previous versions of Workstation. Is this a bug, or is there a setting I can change to see the mouse pointer? Note: VM Tools are installed, the host is Vista Ultimate Edition SP1 x64, and the guest is Win XP SP2. Thanks in advance for your help.

    Read the article

  • Trying to build a history of popular laptop models

    - by John
    A requirement on a software project is that it should run on typical business laptops up to X years old. However, while given a specific model number I can normally find out when it was sold, I can't find data to do the reverse: for a given year, I want to see which model numbers were released or discontinued. We're talking big-name, popular models like Dell Latitude/Precision/Vostro, Thinkpads, HP, etc. The data for any one model is out there, but getting a timeline is proving hard. Sites like Dell's are (unsurprisingly) geared around current products, and even Wikipedia isn't proving very reliable. You'd think this data must have been collated by manufacturers or enthusiasts, surely?
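
    If the per-model data can be collected into a model-to-lifespan table, the reverse lookup becomes trivial; a sketch with entirely hypothetical example data:

        # Hypothetical data: model -> (year released, year discontinued)
        models = {
            "Latitude D620": (2006, 2008),
            "ThinkPad T61": (2007, 2009),
        }

        def models_in_year(year):
            # All models on the market in the given year.
            return [m for m, (start, end) in models.items() if start <= year <= end]

        print(models_in_year(2008))  # ['Latitude D620', 'ThinkPad T61']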

    Read the article

  • Is there any way to shut up my ATI HD 5770?

    - by slpsys
    So to preface, I basically built Jeff's machine; I already had some of the components, including (scarily enough) the exact same case1. I've been buying bits and pieces over the past few months, which coincided perfectly with his recent post about three monitors, though not being a gamer outright, I opted for the second-from-the-bottom option. After finally plopping all the pieces lovingly into the case this evening, I turned it on... and it sounds like four professional-grade hair dryers. Some quick regression analysis determined that with the video card out, the running machine sounded no louder than our house's vents. Basically, my last desktop build included a $45-at-the-time graphics card, and it's been MacBook Pros and workstations since then, so I have zero idea whether I'll be able to tune the fan speed later on. Will I be able to get this thing to quiet down every time I'm not playing Modern Warfare 2 at maximum frame rate, or should I just send it back now and get the quietest card in my price range?

    1 One thing of note is that I do not have noise-absorbing foam in the case, as is pictured in the article. I'm only mentioning that because I suspect it could drop the overall output a few decibels, but obviously not that many.

    Read the article

  • How to build a network across buildings?

    - by Omie
    Hi! I need some help building a network between hundreds of computers spread across multiple buildings of my college. Yes, I'll be doing this as part of a college project. Please see this image; it will give you a good idea of what I'm trying to achieve: http://i.imgur.com/rOohx.png All the computers in all buildings should be able to reach the server. Once the network is up, there will be a set of services over the intranet, and network use will be moderate; say there will be an email server and an HTTP server. My point is, I cannot afford much performance loss. It feels easy to connect the computers inside one building to each other; however, I'm clueless as to how to connect all of them to the server. I mean, just one cable won't be enough to connect one building to the server, right? How should I go about it? I am not expecting a detailed configuration; just a heads-up will do :) Thanks

    Read the article

  • My card reader doesn't show up at all, but previously did in 10.10

    - by Nathan J. Brauer
    I bought a pin-based, USB-powered internal media card reader, and it worked perfectly when I first installed Ubuntu 10.10 a month ago. I have used it a few times since, but today I booted up the computer and it's not working. Here's how it used to work:

      - In nautilus->computer, 5 "drives" would display even when no card was inserted, one for each slot (SD, XD, CF/MS, etc.)
      - Opening one without a card would bring up a "Please insert card." dialog
      - Inserting a card would automatically open nautilus

    Now no drives display at all, whether cards are inserted or not. lsusb lists the following (which seems to indicate that the reader is not being detected):

        Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 002: ID 046d:c52e Logitech, Inc. (my keyboard/mouse)
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 002: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode)
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    (A side note: all of my USB ports are 2.0, plus one that is 3.0, so why does everything say 1.1/2.0?) I tried using ubuntu-bug, but for USB devices it expects you to be able to remove and insert them while the computer is running, which is obviously something you shouldn't be doing with devices plugged straight into the USB pins. Thanks in advance!

    Read the article

  • How do I mount my Android phone?

    - by Amanda
    I'm puzzled because my phone used to just appear when I plugged it in. It doesn't anymore, and the development options are definitely set to allow USB debugging. The phone is charging via USB but doesn't appear in lsusb:

        [0 amanda@luna android-sdk-linux_86]$ lsusb
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 004: ID 17ef:4807 Lenovo UVC Camera
        Bus 003 Device 012: ID 413c:1003 Dell Computer Corp. Keyboard Hub
        Bus 003 Device 003: ID 08ff:2810 AuthenTec, Inc. AES2810
        Bus 003 Device 013: ID 413c:2010 Dell Computer Corp. Keyboard
        Bus 003 Device 014: ID 046d:c001 Logitech, Inc. N48/M-BB48 [FirstMouse Plus]

    adb devices -l shows nothing. In my Wireless and Network settings I changed the USB connection setting to "Mass storage"; it was set to "Ask on connection", though I definitely wasn't getting asked. I don't get any "Click here to connect via USB" alert either. I'm not even sure whether the issue is my phone or my computer; it seems odd that it isn't even appearing in lsusb. Not for nothing, the thumb drive on my keyring also does not appear in lsusb, and I've tried both in a bunch of different ports. I kind of assume the thumb drive is just borked, but it could be my OS.
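
    One way to tell whether the host sees the device at all is to watch kernel USB events while plugging the phone in. A sketch using the pyudev library (an assumption: pyudev must be installed; if nothing prints when you plug the phone in, the problem is below the OS level, e.g. cable, port, or phone):

        import pyudev

        context = pyudev.Context()
        monitor = pyudev.Monitor.from_netlink(context)
        monitor.filter_by(subsystem="usb")

        # Blocks forever; plug and unplug the phone and watch for add/remove events.
        for device in iter(monitor.poll, None):
            print(device.action, device.device_path, device.get("ID_MODEL"))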

    Read the article

  • Davicom DM9601 USB LAN NIC Ubuntu 11.10 issue

    - by Gaurav_Java
    I have a Davicom DM9601 USB Ethernet card. When I plug in the device, it is detected and the driver is loaded, but I can't connect to the Internet with it. It works perfectly on XP and on another laptop, but not on Ubuntu 11.10. How can I install the driver for this? I have tried many things, but nothing is working. I tried the driver from this link, but it doesn't compile (or I may be doing something wrong), and I found this one but don't know how to follow its steps. This is my lsusb output:

        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 004: ID 064e:a103 Suyin Corp. Acer/HP Integrated Webcam [CN0314]
        Bus 003 Device 002: ID 08ff:1600 AuthenTec, Inc. AES1600
        Bus 005 Device 002: ID 0a46:9601 Davicom Semiconductor, Inc. DM9601 Fast Ethernet Adapter
        Bus 006 Device 002: ID 046d:c045 Logitech, Inc. Optical Mouse
        Bus 003 Device 003: ID 0a5c:2101 Broadcom Corp. Bluetooth Controller
        Bus 004 Device 002: ID 04d9:1702 Holtek Semiconductor, Inc.

    But when I connected my Internet from a different system, it started working.

    Read the article

  • USB Keyboard doesn't work in Ubuntu 14.04

    - by Steven Crossan
    My USB keyboard isn't working in Ubuntu 14.04, and it also didn't work in 13.10. I upgraded today in the hope that the issue would be resolved, but it wasn't. The keyboard works in the BIOS and in GRUB but stops working when I reach the login screen. It is detected by the system, but just doesn't work. Output of lsusb:

        Bus 002 Device 002: ID 0846:9011 NetGear, Inc. WNDA3100v2 802.11abgn [Broadcom BCM4323]
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 002: ID 1532:000d Razer USA, Ltd
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 002: ID 060b:2231 Solid Year KSK-6001 UELX Keyboard
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

    I added the following to /etc/initramfs-tools/modules and ran update-initramfs -u, but that didn't help:

        hid
        usbhid
        hid_generic
        ohci_pci

    I'm new to Ubuntu/Linux, and any help you could provide would be greatly appreciated!

    Read the article

  • Build problems when adding `__str__` method to Boost Python C++ class

    - by Rickard
    I have started to play around with Boost.Python a bit and ran into a problem. I tried to expose a C++ class to Python, which posed no problems, but I can't seem to implement the __str__ functionality for the class without getting build errors I don't understand. I'm using Boost 1.42 prebuilt by BoostPro, and I build the library using CMake and the VS2010 compiler. I have a very simple setup. The header file (tutorial.h) looks like the following:

        #include <iostream>

        namespace TestBoostPython {
            class TestClass {
            private:
                double m_x;
            public:
                TestClass(double x);
                double Get_x() const;
                void Set_x(double x);
            };
            std::ostream &operator<<(std::ostream &ostr, const TestClass &ts);
        };

    and the corresponding cpp file looks like:

        #include <boost/python.hpp>
        #include "tutorial.h"

        using namespace TestBoostPython;

        TestClass::TestClass(double x)
        {
            m_x = x;
        }

        double TestClass::Get_x() const
        {
            return m_x;
        }

        void TestClass::Set_x(double x)
        {
            m_x = x;
        }

        std::ostream &operator<<(std::ostream &ostr, TestClass &ts)
        {
            ostr << ts.Get_x() << "\n";
            return ostr;
        }

        BOOST_PYTHON_MODULE(testme)
        {
            using namespace boost::python;
            class_<TestClass>("TestClass", init<double>())
                .add_property("x", &TestClass::Get_x, &TestClass::Set_x)
                .def(str(self))
                ;
        }

    The CMakeLists.txt looks like the following:

        CMAKE_MINIMUM_REQUIRED(VERSION 2.8)
        project (testme)
        FIND_PACKAGE( Boost REQUIRED )
        FIND_PACKAGE( Boost COMPONENTS python REQUIRED )
        FIND_PACKAGE( PythonLibs REQUIRED )
        set(Boost_USE_STATIC_LIBS OFF)
        set(Boost_USE_MULTITHREAD ON)
        INCLUDE_DIRECTORIES(${Boost_INCLUDE_DIRS})
        INCLUDE_DIRECTORIES(${PYTHON_INCLUDE_PATH})
        add_library(testme SHARED tutorial.cpp)
        target_link_libraries(testme ${Boost_PYTHON_LIBRARY})
        target_link_libraries(testme ${PYTHON_LIBRARY})

    The build error I get is the following:

        Compiling...
        tutorial.cpp
        C:\Program Files (x86)\boost\boost_1_42\boost/python/def_visitor.hpp(31) : error C2780: 'void boost::python::api::object_operators::visit(ClassT &,const char *,const boost::python::detail::def_helper &) const' : expects 3 arguments - 1 provided
                with [ U=boost::python::api::object ]
        C:\Program Files (x86)\boost\boost_1_42\boost/python/object_core.hpp(203) : see declaration of 'boost::python::api::object_operators::visit'
                with [ U=boost::python::api::object ]
        C:\Program Files (x86)\boost\boost_1_42\boost/python/def_visitor.hpp(67) : see reference to function template instantiation 'void boost::python::def_visitor_access::visit,classT>(const V &,classT &)' being compiled
                with [ DerivedVisitor=boost::python::api::object, classT=boost::python::class_, V=boost::python::def_visitor ]
        C:\Program Files (x86)\boost\boost_1_42\boost/python/class.hpp(225) : see reference to function template instantiation 'void boost::python::def_visitor::visit>(classT &) const' being compiled
                with [ DerivedVisitor=boost::python::api::object, W=TestBoostPython::TestClass, classT=boost::python::class_ ]
        .\tutorial.cpp(29) : see reference to function template instantiation 'boost::python::class_ &boost::python::class_::def(const boost::python::def_visitor &)' being compiled
                with [ W=TestBoostPython::TestClass, U=boost::python::api::object, DerivedVisitor=boost::python::api::object ]

    Does anyone have any idea what went wrong? If I remove the .def(str(self)) part from the wrapper code, everything compiles fine and the class is usable from Python. I'd be very grateful for assistance. Thank you, Rickard

    Read the article

  • How to manage maintenance/bug-fix branches in Subversion when setup projects need to be built?

    - by Mike Spross
    We have a suite of related products written in VB6, with some C# and VB.NET projects, and all the source is kept in a single Subversion repository. We haven't been using branches in Subversion (although we do tag releases now), and simply do all development in trunk, creating new releases when the trunk is stable enough. This causes no end of grief when we release a new version, issues are found with it, and we have already begun working on new features or major changes to the trunk. In the past, we would address this in one of two ways, depending on the severity of the issues and how stable we thought the trunk was:

      - Hurry to stabilize the trunk, fix the issues, and then release a maintenance update based on the HEAD revision. This had the side effect of releases that fixed the bugs but introduced new issues because of half-finished features or bugfixes that were in trunk.
      - Make customers wait until the next official release, which is usually a few months away.

    We want to change our policies to better deal with this situation. I was considering creating a "maintenance branch" in Subversion whenever I tag an official release. New development would then continue in trunk, and I could periodically merge specific fixes from trunk into the maintenance branch and create a maintenance release when enough fixes have accumulated, while we continue to work on the next major update in parallel. I know we could also have a more stable trunk and create a branch for new updates instead, but keeping current development in trunk seems simpler to me.

    The major problem is that while we can easily branch the source code from a release tag and recompile it to get the binaries for that release, I'm not sure how to handle the setup and installer projects. We use QSetup to create all of our setup programs, and right now when we need to modify a setup project, we just edit the project file in place (all the setup projects, and any dependencies that we don't compile ourselves, are stored on a separate server, and we make sure to always compile the setup projects on that machine only). However, since we may add or remove files from the setup as our code changes, there is no guarantee that today's setup projects will work with yesterday's source code.

    I was going to put all the QSetup projects in Subversion to deal with this, but I see some problems with that approach. I want the creation of setup programs to be as automated as possible; at the very least, I want a separate build machine where I can build the release that I want (grabbing the code from Subversion first), grab the setup project for that release from Subversion, recompile the setup, and then copy the setup to another place on the network for QA testing and eventual release to customers. However, when someone needs to change a setup project (to add a new dependency that trunk now requires, or to make other changes), there is a problem. If they treat it like a source file and check it out on their own machine to edit it, they won't be able to add files to the project unless they first copy the files they need to add to the build machine (so they are available to other developers), then copy all the other dependencies from the build machine to their machine, making sure to match the folder structure exactly. The issue here is that QSetup uses absolute paths for any files added to a setup project. This means installing a bunch of setup dependencies onto development machines, which seems messy (and which could destabilize the development environment if someone accidentally runs the setup project on their machine).

    Also, how do we manage third-party dependencies? For example, if the current maintenance branch used MSXML 3.0 and the trunk now requires MSXML 4.0, we can't go back and create a maintenance release if we have already replaced the MSXML library on the build machine with the latest version (assuming both versions have the same filename). The only solutions I can think of are to either put all the third-party dependencies in Subversion along with the source code, or to make sure we put different library versions in separate folders (i.e. C:\Setup\Dependencies\MSXML\v3.0 and C:\Setup\Dependencies\MSXML\v4.0). Is one way "better" or more common than the other? Are there any best practices for dealing with this situation?

    Basically, if we release v2.0 of our software, we want to be able to release v2.0.1, v2.0.2, and v2.0.3 while we work on v2.1, but the whole setup project and setup dependency issue is making this more complicated than the typical "just create a branch in Subversion and recompile as needed" answer.
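
    The branch-and-merge mechanics themselves are straightforward; a sketch of the flow, scripted through Python so it could run as a build step, with hypothetical repository URLs and revision numbers:

        import subprocess

        REPO = "https://svn.example.com/repo"  # hypothetical repository URL

        def svn(*args):
            subprocess.run(["svn"] + list(args), check=True)

        # 1. When tagging v2.0, also create a maintenance branch from the same point.
        svn("copy", REPO + "/tags/v2.0", REPO + "/branches/v2.0-maint",
            "-m", "Create maintenance branch for v2.0")

        # 2. Cherry-pick a specific trunk fix (say r1234) into the maintenance branch.
        svn("checkout", REPO + "/branches/v2.0-maint", "wc-maint")
        svn("merge", "-c", "1234", REPO + "/trunk", "wc-maint")
        svn("commit", "wc-maint", "-m", "Merge r1234 (bugfix) from trunk")

        # 3. Tag the accumulated fixes as a maintenance release.
        svn("copy", REPO + "/branches/v2.0-maint", REPO + "/tags/v2.0.1",
            "-m", "Tag v2.0.1")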

    Read the article

  • A UI over Windows Workflow

    - by Tom
    Hi there, are there any built-in UI capabilities when using Windows Workflow? Let's say I have a workflow that takes an hour to run, where different activities are happening all the time. While it's running, I want to see which activity is currently active, which activities have already run, etc. Do I have to code this UI myself, or does WF have built-in features that graphically show the status of the workflow?

    Read the article

  • Help with a custom workflow that monitors an object's state

    - by zSysop
    I need to write a workflow that monitors the status of an object; it could wait for days or hours for a state to change. I have the following states for the object (let's call it an Issue object):

      1) Created
      2) Unowned
      3) Owned
      4) Unassigned
      5) Assigned
      6) In Progress
      7) Signed Off
      8) Closed

    I would also need to take some action on an object if it has been in a certain state for a defined period (I'm not really sure how this can be accomplished either). The object's owner/assignee can change at any point (i.e. go from In Progress to Unowned), so I am guessing that a state machine diagram is what I would need to use. If my thinking is incorrect, please let me know. My application is written in C# .NET 3.5. I was thinking about having a service method called CreateIssue that would insert the ticket into the db and then begin an instance of a workflow (with the object, or an id of the object, as parameters). I wasn't sure how the workflow would then know when a particular object has been updated, or when the object's state has changed. I've done some really simple "hello world" type apps with Windows Workflow Foundation 3.5 but have not yet grasped how to implement something like this. Any direction on this will be extremely helpful. Thanks in advance.
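
    The stale-state rule only needs a transition table plus a timestamp of when the current state was entered; a language-agnostic sketch (in Python for brevity; the states come from the question, while the allowed transitions and the timeout are assumptions):

        from datetime import datetime, timedelta

        # Assumed transition table; owner/assignee changes can move an issue backwards.
        ALLOWED = {
            "Created": {"Unowned", "Owned"},
            "Unowned": {"Owned"},
            "Owned": {"Unassigned", "Assigned", "Unowned"},
            "Unassigned": {"Assigned"},
            "Assigned": {"In Progress", "Unassigned"},
            "In Progress": {"Signed Off", "Unowned", "Assigned"},
            "Signed Off": {"Closed", "In Progress"},
            "Closed": set(),
        }

        class Issue:
            def __init__(self):
                self.state = "Created"
                self.entered_state = datetime.now()

            def transition(self, new_state):
                if new_state not in ALLOWED[self.state]:
                    raise ValueError(self.state + " -> " + new_state + " not allowed")
                self.state = new_state
                self.entered_state = datetime.now()

            def is_stale(self, limit=timedelta(days=3)):
                # Escalation hook: true if the issue sat in one state too long.
                return datetime.now() - self.entered_state > limit

    A periodic job (or a workflow delay activity) can then check is_stale() and fire whatever action is required.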

    Read the article

  • WF performance with new 20,000 persisted workflow instances each month

    - by Nikola Stjelja
    Windows Workflow Foundation has a problem: it is slow at persisting WF instances. I'm planning a project whose business layer will be based on WF exposed as WCF services. The project will have 20,000 new workflow instances created each month, and each instance could take up to 2 months to finish. I was led to believe that, given WF's slowness at persistence, my problem would be unattainable for performance reasons. I have the following questions: Is this true? Will my performance be unacceptable under that load, given WF's persistence speed limitations? How can I solve the problem? We currently have two possible solutions:

      1. Each new business process request (e.g. "give me a new driver's license") will be a new WF instance, and the number of persistence operations will be limited by forwarding all status request operations to saved state values in a separate database.
      2. Have only a small number of workflow instances up at any given time, with no persistence whatsoever (only in case of system crashes, etc.), by breaking each workflow step into a separate workflow, with that workflow handling every business process request instance in the system that is currently at that step (e.g. I'm submitting my driver's license request form, which is step one; we have 100 cases of that, and my step-one workflow will handle every case simultaneously).

    I'm very interested in a solution to this problem. If you want to discuss it, please feel free to mail me at [email protected]
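
    A sketch of the first option: every state transition also writes a row to a plain status table, so status queries read that table and never rehydrate a persisted instance. The table name and schema are assumptions, and SQLite is used here only to keep the sketch self-contained (the upsert syntax needs SQLite 3.24+):

        import sqlite3

        conn = sqlite3.connect("status.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS request_status (
                            request_id TEXT PRIMARY KEY,
                            state      TEXT NOT NULL,
                            updated_at TEXT DEFAULT CURRENT_TIMESTAMP)""")

        def record_transition(request_id, state):
            # Called on every workflow state change, alongside normal persistence.
            conn.execute("""INSERT INTO request_status (request_id, state)
                            VALUES (?, ?)
                            ON CONFLICT(request_id) DO UPDATE SET
                                state = excluded.state,
                                updated_at = CURRENT_TIMESTAMP""",
                         (request_id, state))
            conn.commit()

        def get_status(request_id):
            # Status reads hit only this table; the workflow instance stays asleep.
            row = conn.execute(
                "SELECT state FROM request_status WHERE request_id = ?",
                (request_id,)).fetchone()
            return row[0] if row else None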

    Read the article

  • What are some commonly used source code check-in policies?

    - by rwmnau
    I'm curious what code review policies other development shops apply to their source code when it's checked into the source control repository. I'm setting up a TFS (Team Foundation) server, and I'd like to apply some check-in policies to start to stamp out bad practices. For example, I was thinking of starting with the following, so this is the kind of thing I'm looking for:

      - Prohibit empty "Catch" blocks. This would prevent applications from swallowing any exceptions without at least requiring a comment explaining why it's not necessary to do anything with the exception.
      - Prohibit generic "Catch ex as Exception" exception handling. Instead, require code to catch specific types of exceptions and deal with them appropriately, instead of just building catch-all handling.
      - Require a check-in comment. This one should be self-explanatory, though it seems that TFS (and most other source control systems) don't require a comment by default.

    While these are just examples, they're where I'm thinking of starting, and while I'd like some additional examples of what's popular, I'm open to feedback on these (a toy scan for the first two rules is sketched below). Also, though we're a mostly .NET shop, I imagine the popular policies are universal across languages and IDEs (we have some Java development, and a few people who use the repository develop with Eclipse).
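
    Real TFS check-in policies are .NET classes registered with the server, but the first two rules are easy to prototype as a plain source scan. A toy sketch (regex-based, so it will miss edge cases, and the patterns target C#-style syntax; the VB.NET equivalents would differ; this illustrates the rules, not the TFS policy API):

        import re
        import sys

        EMPTY_CATCH = re.compile(r"catch\s*(\([^)]*\))?\s*\{\s*\}")
        GENERIC_CATCH = re.compile(r"catch\s*\(\s*Exception\s+\w+\s*\)")

        def check_file(path):
            violations = []
            with open(path, encoding="utf-8", errors="replace") as f:
                text = f.read()
            if EMPTY_CATCH.search(text):
                violations.append("empty catch block")
            if GENERIC_CATCH.search(text):
                violations.append("generic 'catch (Exception ex)' handler")
            return violations

        if __name__ == "__main__":
            failed = False
            for path in sys.argv[1:]:
                for v in check_file(path):
                    print(path + ": " + v)
                    failed = True
            sys.exit(1 if failed else 0)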

    Read the article

  • Need some advice before starting to code my next iPhone app...

    - by Tom
    Hi! I need some advice about how I should start coding something. Here is the context: I've just finished building a CMS that manages a SQLite database. My application will pick up this database and use its content as the application's content. So far it's pretty simple. The application will have a navigation that browses through various workflows and, once at the end of a workflow, shows content from the database. A consultation kind of thing, for example: Liquids - Juice - Orange Juice - Information about Orange Juice. For my SQLite transactions, so far I believe I'll be using fmdb; it looks like a great wrapper. Here's a simple schema from one of the databases:

        Workflow:
          id:          { type: integer(3), primary: true, autoincrement: true }
          workflow_id: { type: integer(1) }
          name:        { type: string(255) }

    That table's rows will be my navigation entries. Do you believe I should use a navigation controller? If so, how could I generate the navigation tree from it? I have a good working knowledge of Objective-C and the Foundation framework, but never went too far with it, which is why I'm asking before starting in the wrong direction :) Thanks a lot.
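
    Generating the tree is independent of the navigation controller question: the table is an adjacency list, assuming workflow_id points at the parent row's id (a guess from the schema). A sketch, in Python for brevity:

        # Hypothetical (id, workflow_id, name) rows from the Workflow table.
        rows = [
            (1, 0, "Liquids"),
            (2, 1, "Juice"),
            (3, 2, "Orange Juice"),
        ]

        def build_tree(rows, parent=0):
            # Recursively collect the children of `parent`.
            return [
                {"id": rid, "name": name, "children": build_tree(rows, rid)}
                for rid, pid, name in rows
                if pid == parent
            ]

        print(build_tree(rows))

    Each level of the resulting tree then maps naturally to one table view pushed onto a navigation stack.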

    Read the article

  • "SignTool error: Access is denied" in TFS 2010 build process

    - by user351352
    I'm getting "SignTool Error: Access is Denied" when I attempt to sign a file. When I use an administrator cmd, all works fine. However, this process is going to be used in a TFS 2010 build process and using the InvokeProcess task with signtool gives the same access denied message as a non-administrator command prompt. More info: On a Win2008 R2 enterprise machine. User is machine admin and on the domain. The TFS Build service is also set to run as this user. Using a self signed certificate created using these instructions: How do I create a self-signed certificate for code signing on Windows? After following these instructions I have the following files: MyCA.cer MyCA.pvk MySPC.cer MySPC.pvk MySPC.pfx MyCA is in my Trusted Root Certification Authorities I imported MySPC.pfx into personal certificates, following the advice here: SignTool error: Access is denied To do the signing I'm using the thumbprint of the MySPC.pfx that was imported into the Personal section so my signtool command looks like: sign /sha1 1e9d7b5ad98552d9c58944e3f3903e6b929f4819 /t http://timestamp.verisign.com/scripts/timestamp.dll "FileName" Once again this works in Admin mode. This also works when running cmd as administrator: sign /f "C:\Code Signing Non-Release\MySPC.pfx" /t http://timestamp.verisign.com/scripts/timestamp.dll "FileName" New to code signing in general, so any help is welcome.

    Read the article

  • No attachable databases were found on the SQL Server

    - by George
    I'm fumbling my way through a basic installation of TFS 2010... I see that the TFS_Configuration and Tfs_DefaultCollection databases were successfully created on my local machine. The installation seemed to go smoothly, but it looks like I need to configure a team project collection before I can start using it. Yet the wizard seems unable to locate the appropriate database that I expected to have been created during the setup process. The server name defaulted properly to the name of the local machine. Did I miss a step somewhere?

    Read the article

  • Can make -j with distcc scale more than 5 times?

    - by holmes
    Since distcc cannot keep state, and can only ship jobs and headers to servers that then use just the data sent to them to preprocess and compile, I think the latest distcc has a scalability problem. In my local build environment, which has approx. 10,000 C/C++ files to build, I could only build 2 times faster than without distcc (but still using make -j) when using 20 build servers. What do you think the problem is? If anyone has achieved a speedup of more than 10-20 times using make -j and distcc, please let me know. The following product claims that it is impossible to scale make -j and distcc beyond 5 times: http://www.electric-cloud.com/products/electricaccelerator.php I think this can be improved by:

      - Letting the distccd server maintain sessions
      - Having each session cache its own header directories
      - Doing preprocessing on demand on the distccd server
      - Doing this through an LD_PRELOADed library, libdistcc.so, which replaces the stat/open syscalls and fetches the header files over the network
      - ...

    Has anyone done this kind of thing?

    Read the article

  • Eclipse: What is the minimum Eclipse installation needed for a headless PDE build?

    - by Christoph
    Hi, I am currently using PDE Build in headless mode to build my OSGi bundle project. The PDE AntRunner task uses an Eclipse installation, and I am just pointing it to my local Eclipse installation. Unfortunately, my Eclipse installation is about 260MB, but I assume that a PDE build does NOT require all of the plugins in a standard Eclipse installation. Does anyone know what the minimum list of plugins is that I need for a headless PDE build? All of my dependencies are actually in a custom target platform folder, so I guess the only thing I need from my Eclipse installation is the dependencies that PDE Build itself needs. But what are those? Can I shrink my installation to a bare minimum? My goal is to also check this "build-eclipse" folder into my project's SVN so that when you check it out, you have everything you need to start a full build, without touching any build.properties. But I don't want to commit 266MB of Eclipse if I may need only 20MB of it. Thanks, Christoph

    Read the article

  • Android Studio Could not call IncrementalTask.taskAction() on task ':project:dexDebug'

    - by akenawell85x
    I recently decided to switch from Eclipse to Android Studio. I imported a project I was working on, and I now get this error when I try to run it:

        Gradle: Execution failed for task ':project:dexDebug'.
        > Could not call IncrementalTask.taskAction() on task ':project:dexDebug'

    I've been cruising this site for 2 days now, trying different suggestions to no avail. I did run gradlew compileDebug --stacktrace, and this is what I got:

        C:\Users\adam\AndroidStudioProjects\projectProject>gradlew compileDebug --stacktrace
        Relying on packaging to define the extension of the main artifact has been deprecated and is scheduled to be removed in Gradle 2.0
        :project:preBuild UP-TO-DATE
        :project:preDebugBuild UP-TO-DATE
        :project:preReleaseBuild UP-TO-DATE
        :project:prepareComAndroidSupportAppcompatV71800Library UP-TO-DATE
        :project:prepareComGoogleAndroidGmsPlayServices3225Library UP-TO-DATE
        :project:prepareDebugDependencies
        :project:compileDebugAidl UP-TO-DATE
        :project:compileDebugRenderscript UP-TO-DATE
        :project:generateDebugBuildConfig UP-TO-DATE
        :project:mergeDebugAssets UP-TO-DATE
        :project:mergeDebugResources UP-TO-DATE
        :project:processDebugManifest UP-TO-DATE
        :project:processDebugResources UP-TO-DATE
        :project:generateDebugSources UP-TO-DATE
        :project:compileDebug UP-TO-DATE

        BUILD SUCCESSFUL

        Total time: 10.459 secs

    However, I still get the error when I actually run the project. Here is my build.gradle (I do have a 'libs' folder in the project with all the jars for a Google Maps/Places app):

        buildscript {
            repositories {
                mavenCentral()
            }
            dependencies {
                classpath 'com.android.tools.build:gradle:0.6.+'
            }
        }
        apply plugin: 'android'

        repositories {
            mavenCentral()
        }

        android {
            compileSdkVersion 18
            buildToolsVersion "18.1.1"

            defaultConfig {
                minSdkVersion 8
                targetSdkVersion 18
            }
        }

        dependencies {
            compile fileTree(dir: 'libs')
            compile 'com.google.android.gms:play-services:3.2.25'
            compile 'com.android.support:support-v4:18.0.0'
            compile 'com.android.support:appcompat-v7:+'
        }

    and my settings.gradle:

        include ':project',
                ':project:libs:android-support-v4',
                ':project:libs:google-api-client-1.10.3-beta',
                ':project:libs:google-api-client-android2-1.10.3-beta',
                ':project:libs:google-http-client-1.10.3-beta',
                ':project:libs:google-http-client-android2-1.10.3-beta',
                ':project:libs:google-oauth-client-1.10.1-beta',
                ':project:libs:gson-2.1',
                ':project:libs:guava-11.0.1',
                ':project:libs:jackson-core-asl-1.9.4',
                ':project:libs:jsr305-1.3.9',
                ':project:libs:protobuf-java-2.2.0',
                ':project:libs:GoogleAdMobAdsSdk-6.4.1'

    As I said, I've tried pretty much everything I have read on here about this error and am having no luck. Any help would be greatly appreciated.

    Read the article

  • Tool to aid Code Review

    - by Prakash
    For our small team of 20 developers, we used to do code review like this:

      - Make a label in SVN and publish the label to the reviewers
      - Reviewers check out the code and add comments inline (with a marker like: // REVIEWER_NAME::REVIEW COMMENT:)
      - After all comments are in, the reviewer checks in the code, preferably with a new label
      - The developer checks the comments and makes changes (if appropriate)
      - The developer keeps an Excel-sheet report of accepted changes and the reasons for ignored comments

    Problem: the developer needs to keep track of multiple labels which might have the same comments. Sometimes we even do one-on-one reviews, and if we really have time, even table reviews (a team of reviewers looks at the code on a projector, comments on the fly, and passes them along). I was wondering: are you using any specific tool which helps make code reviews smoother? I have heard of Code Collaborator, but has anyone used it? Is it worth the money?

    Read the article

  • Why is Assembly.GetCustomAttributes suddenly throwing TypeLoadException on build machine with Silverlight 4?

    - by andrej351
    A short while back I had to display the current version of our Silverlight app. After some googling, the following code gave me the desired result:

        var fileVersionAttributes = typeof(MyClass).Assembly
            .GetCustomAttributes(typeof(AssemblyFileVersionAttribute), false)
            as AssemblyFileVersionAttribute[];
        var version = fileVersionAttributes[0].Version;

    This worked a treat in our .NET 3.5 Silverlight 3 environment. However, we recently upgraded to .NET 4 and Silverlight 4. We just finished getting our build machine working and found that the unit test for this code was throwing the following exception:

        Exception Message: System.TypeLoadException: Error 0x80131522. Debugging resource strings are unavailable. See http://go.microsoft.com/fwlink/?linkid=106663&Version=3.0.50106.0&File=mscorrc.dll&Key=0x80131522
           at System.ModuleHandle.ResolveType(ModuleHandle module, Int32 typeToken, RuntimeTypeHandle* typeInstArgs, Int32 typeInstCount, RuntimeTypeHandle* methodInstArgs, Int32 methodInstCount)
           at System.ModuleHandle.ResolveTypeHandle(Int32 typeToken, RuntimeTypeHandle[] typeInstantiationContext, RuntimeTypeHandle[] methodInstantiationContext)
           at System.Reflection.Module.ResolveType(Int32 metadataToken, Type[] genericTypeArguments, Type[] genericMethodArguments)
           at System.Reflection.CustomAttribute.FilterCustomAttributeRecord(CustomAttributeRecord caRecord, MetadataImport scope, Assembly& lastAptcaOkAssembly, Module decoratedModule, MetadataToken decoratedToken, RuntimeType attributeFilterType, Boolean mustBeInheritable, Object[] attributes, IList derivedAttributes, RuntimeType& attributeType, RuntimeMethodHandle& ctor, Boolean& ctorHasParameters, Boolean& isVarArg)
           at System.Reflection.CustomAttribute.GetCustomAttributes(Module decoratedModule, Int32 decoratedMetadataToken, Int32 pcaCount, RuntimeType attributeFilterType, Boolean mustBeInheritable, IList derivedAttributes, Boolean isDecoratedTargetSecurityTransparent)
           at System.Reflection.CustomAttribute.GetCustomAttributes(Module decoratedModule, Int32 decoratedMetadataToken, Int32 pcaCount, RuntimeType attributeFilterType, Boolean isDecoratedTargetSecurityTransparent)
           at System.Reflection.CustomAttribute.GetCustomAttributes(Assembly assembly, Type caType)
           at System.Reflection.Assembly.GetCustomAttributes(Type attributeType, Boolean inherit)
           at MyCode.VersionTest()

    I have never seen this exception before, and the link in it points nowhere. It is only thrown on the build machine and not on my development box, so I'm going through a process of trial and error to see any differences between the two. Any idea why this might be happening? Cheers, Andrej.

    Read the article

< Previous Page | 77 78 79 80 81 82 83 84 85 86 87 88  | Next Page >